NASA Astrophysics Data System (ADS)
Gersch, W.; Brotherton, T.; Braun, S.
1980-04-01
A multiple input/scalar output stationary time series identification problem is considered from a parametric-model, time-domain point of view. Particular emphasis is on the source identification problem. Closed-form estimates of the individual source power contributions are expressed in terms of sample correlations obtained from the observed input and output time series and from parametric models fitted to those data. The estimates of the noise power contributions are asymptotically jointly normally distributed. The mean values and covariance matrix of those estimates yield confidence interval estimates of the individual and joint power contributions. The motivation for developing a rational polynomial transfer function or ARMA model of the multi-input scalar output plus additive noise situation is given. A two correlated input/single output version of this model is considered for a Monte Carlo simulation study. Parametric ARMA and approximate AR models are fitted to the simulated data. The asymptotic normality, and the distribution of the mean and covariances of the source power contributions computed from the ARMA and AR models, are appraised. Several facets of the relative performance of windowed periodogram and AR model spectral analysis are examined for the multiple input/scalar output identification problem. The points emphasized are that conventional windowed periodogram spectral analysis is subjective, is not particularly satisfactory for the sharp spectral peak situation commonly encountered in vibration data analysis, and is very likely not as good as "objective" Akaike-criterion-order AR-modelled spectral analysis.
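The closing comparison rests on Akaike-criterion order selection for AR spectral models. Below is a minimal, self-contained sketch of that step (Yule-Walker equations solved by the Levinson-Durbin recursion, order chosen by AIC); the AR(2) test signal and all names are illustrative, not from the paper:

```python
import math
import random

def autocov(x, max_lag):
    """Biased sample autocovariances r[0..max_lag]."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    return [sum(xc[t] * xc[t + k] for t in range(n - k)) / n
            for k in range(max_lag + 1)]

def levinson_durbin(r, p):
    """Solve the order-p Yule-Walker equations. Returns (a, e) where
    the AR predictor is x[n] ~= -sum(a[j]*x[n-j]) and e is the
    one-step prediction-error variance."""
    a = [1.0] + [0.0] * p
    e = r[0]
    for k in range(1, p + 1):
        acc = sum(a[j] * r[k - j] for j in range(k))
        kappa = -acc / e                     # reflection coefficient
        a_new = a[:]
        for j in range(1, k + 1):
            a_new[j] = a[j] + kappa * a[k - j]
        a = a_new
        e *= 1.0 - kappa * kappa
    return a, e

def best_ar_order(x, max_order):
    """Pick the AR order minimizing AIC = n*ln(e_p) + 2p."""
    n = len(x)
    r = autocov(x, max_order)
    scores = []
    for p in range(1, max_order + 1):
        _, e = levinson_durbin(r, p)
        scores.append((n * math.log(e) + 2 * p, p))
    return min(scores)[1]

# Illustrative AR(2) signal: x[t] = 0.75 x[t-1] - 0.5 x[t-2] + w[t]
random.seed(1)
x, x1, x2 = [], 0.0, 0.0
for _ in range(4200):
    v = 0.75 * x1 - 0.5 * x2 + random.gauss(0.0, 1.0)
    x.append(v)
    x1, x2 = v, x1
x = x[200:]                          # discard burn-in
a, _ = levinson_durbin(autocov(x, 2), 2)
phi1, phi2 = -a[1], -a[2]            # recovered AR coefficients
```

From the fitted coefficients an AR spectral estimate follows directly as sigma^2 / |1 + sum(a[j] e^{-i j w})|^2, which is the "objective" alternative the abstract favors.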
Parametric time-series analysis of daily air pollutants of city of Shumen, Bulgaria
NASA Astrophysics Data System (ADS)
Ivanov, A.; Voynikova, D.; Gocheva-Ilieva, S.; Boyadzhiev, D.
2012-10-01
Urban air pollution is one of the main factors determining ambient air quality, which affects human health and the environment. In this paper, parametric time series models are obtained for studying the distribution over time of primary pollutants, such as sulphur and nitrogen oxides and particulate matter, and of a secondary pollutant, ground-level ozone, in the town of Shumen, Bulgaria. The methods of factor analysis and ARIMA are used to carry out the time series analysis based on hourly average data from 2011 and the first quarter of 2012. The constructed models are applied for short-term air pollution forecasting. The results are evaluated against national and European regulatory indices. The sources of pollutants in the region and their harmful effects on human health are also discussed.
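As a rough illustration of the ARIMA approach, the sketch below fits an ARIMA(1,1,0)-type model in pure Python: difference the series once, estimate the AR(1) coefficient of the differences from the lag-1 autocorrelation, and issue a one-step forecast. The synthetic "pollutant" series is fabricated for the example, not the Shumen data:

```python
import random

def arima_110_forecast(x):
    """One-step forecast for an ARIMA(1,1,0)-type model: difference
    once, estimate the AR(1) coefficient phi of the differences,
    then extrapolate the last difference. Returns (forecast, phi)."""
    z = [x[i] - x[i - 1] for i in range(1, len(x))]
    m = sum(z) / len(z)
    zc = [v - m for v in z]
    r0 = sum(v * v for v in zc)
    r1 = sum(zc[i] * zc[i + 1] for i in range(len(zc) - 1))
    phi = r1 / r0
    return x[-1] + m + phi * (z[-1] - m), phi

# Illustrative series: a level integrating AR(1) increments, phi = 0.6
random.seed(7)
z_prev, level, series = 0.0, 50.0, []
for _ in range(3000):
    z_prev = 0.6 * z_prev + random.gauss(0.0, 1.0)
    level += z_prev
    series.append(level)
forecast, phi_hat = arima_110_forecast(series)
```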
One-stage parametric meta-analysis of time-to-event outcomes
Siannis, F; Barrett, J K; Farewell, V T; Tierney, J F
2010-01-01
Methodology for the meta-analysis of individual patient data with survival end-points is proposed. Motivated by questions about the reliance on hazard ratios as summary measures of treatment effects, a parametric approach is considered and percentile ratios are introduced as an alternative to hazard ratios. The generalized log-gamma model, which includes many common time-to-event distributions as special cases, is discussed in detail. Likelihood inference for percentile ratios is outlined. The proposed methodology is used for a meta-analysis of glioma data that was one of the studies which motivated this work. A simulation study exploring the validity of the proposed methodology is available electronically. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20963770
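Since the generalized log-gamma family contains the Weibull distribution as a special case, the percentile-ratio idea can be illustrated in closed form for Weibull survival times. The parameter values below are invented for the sketch, not taken from the glioma meta-analysis:

```python
import math

def weibull_percentile(p, scale, shape):
    """100p-th percentile of a Weibull(scale, shape) survival time:
    t_p = scale * (-ln(1 - p))**(1/shape)."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

def percentile_ratio(p, scale_trt, scale_ctl, shape):
    """Ratio of the 100p-th survival percentiles, treatment vs control.
    With a shared Weibull shape the ratio is the same at every
    percentile (an accelerated-failure-time 'time ratio')."""
    return (weibull_percentile(p, scale_trt, shape)
            / weibull_percentile(p, scale_ctl, shape))

# Illustrative parameters: the treatment arm stretches survival
# times by 25% at every percentile (scale 15 vs 12, shared shape).
ratios = [percentile_ratio(p, 15.0, 12.0, 1.4) for p in (0.25, 0.5, 0.75)]
```

Unlike a hazard ratio, this summary reads directly in time units: the treated group reaches any given survival percentile 1.25 times later.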
NASA Astrophysics Data System (ADS)
Jiang, Shifeng; Treps, Nicolas; Fabre, Claude
2012-04-01
We present in this paper a general model for determining the quantum properties of the light generated by a synchronously pumped optical parametric oscillator (SPOPO) operating below threshold. This model considers time and frequency on an equal footing, which allows us to find new quantum properties, related for example to the carrier envelope offset (CEO) phase, and to consider situations that are close to real experiments. We show that, in addition to multimode squeezing in the so-called ‘supermodes’, the system exhibits quadrature entanglement between frequency combs of opposite CEO phases. We have also determined the quantum properties of the individual pulses and their quantum correlations with the neighboring pulses. Finally, we determine the quantum Cramer-Rao limit for an ultra-short time delay measurement using a given number of pulses generated by the SPOPO.
NASA Astrophysics Data System (ADS)
Choi, Jongseong
The performance of a hypersonic flight vehicle will depend on existing materials and fuels; this work presents the performance of the ideal scramjet engine for three different combustion chamber materials and three different candidate fuels. Engine performance is explored by parametric cycle analysis for the ideal scramjet as a function of material maximum service temperature and the lower heating value of jet engine fuels. The thermodynamic analysis is based on the Brayton cycle, as similarly employed in describing the performance of the ideal ramjet, turbojet, and turbofan engines. The objective of this work is to explore material operating temperatures and fuel possibilities for the combustion chamber of a scramjet propulsion system and to show how they relate to scramjet performance through seven engine parameters: specific thrust, fuel-to-air ratio, thrust-specific fuel consumption, thermal efficiency, propulsive efficiency, overall efficiency, and thrust flux. This information has not previously been presented in the scientific literature. This work yields simple algebraic equations for scramjet performance which are similar to those of the ideal ramjet, ideal turbojet, and ideal turbofan engines.
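A hedged sketch of the parametric cycle relations described above (the classical ideal-ramjet Brayton-cycle formulas, with the burner temperature capped at the material service temperature) follows; the flight condition and fuel heating value are illustrative choices, not the thesis's cases:

```python
import math

def ideal_scramjet(M0, T0, T_max, h_PR, gamma=1.4, cp=1004.0, R=287.0):
    """Ideal Brayton-cycle relations for a ramjet/scramjet-type engine
    (after the classical ideal-ramjet analysis):
      tau_r      = stagnation-temperature ratio from flight Mach number
      tau_lambda = burner limit T_max / T0 (material service temperature)
    Returns specific thrust [N*s/kg], fuel-to-air ratio, TSFC
    [kg/(N*s)], and thermal/propulsive/overall efficiencies."""
    a0 = math.sqrt(gamma * R * T0)              # ambient speed of sound
    tau_r = 1.0 + 0.5 * (gamma - 1.0) * M0 ** 2
    tau_l = T_max / T0
    if tau_l <= tau_r:
        raise ValueError("burner limit below stagnation temperature")
    v_ratio = math.sqrt(tau_l / tau_r)          # exit/flight velocity ratio
    F_spec = a0 * M0 * (v_ratio - 1.0)          # specific thrust
    f = cp * T0 * (tau_l - tau_r) / h_PR        # fuel-to-air ratio
    S = f / F_spec                              # thrust-specific fuel cons.
    eta_T = 1.0 - 1.0 / tau_r                   # thermal efficiency
    eta_P = 2.0 / (v_ratio + 1.0)               # propulsive efficiency
    return F_spec, f, S, eta_T, eta_P, eta_T * eta_P

# Illustrative case: Mach 6 flight, 220 K ambient, a 2400 K material
# limit, and a hydrogen-like heating value of 120 MJ/kg.
F_spec, f, S, eta_T, eta_P, eta_O = ideal_scramjet(6.0, 220.0, 2400.0, 120e6)
```

Sweeping T_max over candidate material limits and h_PR over candidate fuels reproduces the kind of parametric trade study the abstract describes.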
Parametric analysis of ATT configurations.
NASA Technical Reports Server (NTRS)
Lange, R. H.
1972-01-01
This paper describes the results of a Lockheed parametric analysis of the performance, environmental factors, and economics of an advanced commercial transport envisioned for operation in the post-1985 time period. The design parameters investigated include cruise speeds from Mach 0.85 to Mach 1.0, passenger capacities from 200 to 500, ranges of 2800 to 5500 nautical miles, and noise level criteria. NASA high performance configurations and alternate configurations are operated over domestic and international route structures. Indirect and direct costs and return on investment are determined for approximately 40 candidate aircraft configurations. The candidate configurations are input to an aircraft sizing and performance program which includes a subroutine for noise criteria. Comparisons are made between preferred configurations on the basis of maximum return on investment as a function of payload, range, and design cruise speed.
Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.
2012-01-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
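One way to realize a non-parametric significance test of correlated TTVs, in the spirit described above though not necessarily the paper's exact statistic, is a permutation test on the correlation coefficient:

```python
import random

def pearson_r(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def ttv_correlation_pvalue(ttv1, ttv2, n_perm=999, seed=0):
    """Permutation p-value for the null that two TTV series are
    uncorrelated: shuffle one series and count how often the
    shuffled |r| reaches the observed |r|."""
    rng = random.Random(seed)
    r_obs = abs(pearson_r(ttv1, ttv2))
    shuffled = list(ttv2)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(pearson_r(ttv1, shuffled)) >= r_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Illustrative anti-correlated TTVs, as expected for a pair of
# interacting planets exchanging orbital energy (minutes).
rng = random.Random(42)
ttv_a = [rng.gauss(0.0, 5.0) for _ in range(40)]
ttv_b = [-x + rng.gauss(0.0, 1.0) for x in ttv_a]
p_val = ttv_correlation_pvalue(ttv_a, ttv_b)
```

Shuffling destroys any physical phase relation between the two series while preserving their marginal scatter, which is what makes the test non-parametric.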
Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Buchhave, Lars A.; Collaboration: Kepler Science Team; and others
2012-05-10
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
Robustness analysis for real parametric uncertainty
NASA Technical Reports Server (NTRS)
Sideris, Athanasios
1989-01-01
Some key results in the literature in the area of robustness analysis for linear feedback systems with structured model uncertainty are reviewed. Some new results are given. Model uncertainty is described as a combination of real uncertain parameters and norm-bounded unmodeled dynamics. Here the focus is on the case of parametric uncertainty. An elementary and unified derivation of the celebrated theorem of Kharitonov and the Edge Theorem is presented. Next, an algorithmic approach for robustness analysis in the cases of multilinear and polynomic parametric uncertainty (i.e., the closed-loop characteristic polynomial depends multilinearly and polynomially, respectively, on the parameters) is given. The latter cases are most important from practical considerations. Some novel modifications in this algorithm which result in a procedure of polynomial time behavior in the number of uncertain parameters are outlined. Finally, it is shown how the more general problem of robustness analysis for combined parametric and dynamic (i.e., unmodeled dynamics) uncertainty can be reduced to the case of polynomic parametric uncertainty, and thus be solved by means of the algorithm.
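Kharitonov's theorem reduces robust stability of an interval polynomial family to a check of just four vertex polynomials. A minimal sketch pairs the four Kharitonov polynomials with a Routh-array stability test; the interval family at the bottom is invented for illustration:

```python
def routh_stable(coeffs):
    """Hurwitz stability via the Routh array; coeffs are given highest
    degree first with a positive leading coefficient. A zero pivot is
    treated (conservatively) as unstable."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    pad = lambda row: row + [0.0] * (width - len(row))
    table = [pad(coeffs[0::2]), pad(coeffs[1::2])]
    for _ in range(2, n + 1):
        pprev, prev = table[-2], table[-1]
        if abs(prev[0]) < 1e-12:
            return False
        row = [(prev[0] * pprev[j + 1] - pprev[0] * prev[j + 1]) / prev[0]
               for j in range(width - 1)] + [0.0]
        table.append(row)
    return all(r[0] > 0 for r in table[:n + 1])

def kharitonov_stable(lo, hi):
    """Robust stability of p(s) = sum a_k s^k with a_k in [lo[k], hi[k]]
    (coefficients indexed from s^0) via the four Kharitonov vertex
    polynomials, whose lo/hi choices repeat with period four."""
    patterns = [
        (lo, lo, hi, hi), (hi, hi, lo, lo),
        (lo, hi, hi, lo), (hi, lo, lo, hi),
    ]
    for pat in patterns:
        ascending = [pat[k % 4][k] for k in range(len(lo))]
        if not routh_stable(ascending[::-1]):
            return False
    return True

# Illustrative family: s^3 + a2 s^2 + a1 s + a0 with a2, a1 in [3, 4]
# and a0 in [1, 2] -> robustly stable (a1*a2 > a0 at every vertex).
stable = kharitonov_stable([1.0, 3.0, 3.0, 1.0], [2.0, 4.0, 4.0, 1.0])
# Widening a0 to [1, 20] destroys stability for some members.
unstable = kharitonov_stable([1.0, 3.0, 3.0, 1.0], [20.0, 4.0, 4.0, 1.0])
```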
Quantum finite time availability for parametric oscillators
NASA Astrophysics Data System (ADS)
Hoffmann, Karl Heinz; Schmidt, Kim; Salamon, Peter
2015-06-01
The availability of a thermodynamic system out of equilibrium with its environment describes its ability to perform work in a reversible process which brings it to equilibrium with this environment. Processes in finite time can usually not be performed reversibly, thus leading to unavoidable losses. In order to account for these losses, the concept of finite time availability was introduced. Here we add a new feature through the introduction of quantum finite time availability for an ensemble of parametric oscillators. For such systems there exists a certain critical time, the FEAT time. Quantum finite time availability quantifies the available work from processes which are shorter than the FEAT time of the oscillator ensemble.
Parametric instabilities in picosecond time scales
Baldis, H.A.; Rozmus, W.; Labaune, C.; Mounaix, Ph.; Pesme, D.; Baton, S.; Tikhonchuk, V.T.
1993-03-01
The coupling of intense laser light with plasmas is a rich field of plasma physics, with many applications. Among these are inertial confinement fusion (ICF), x-ray lasers, particle acceleration, and x-ray sources. Parametric instabilities have been studied for many years because of their importance to ICF; with laser pulses of approximately a nanosecond duration and laser intensities in the range 10^14-10^15 W/cm^2, these instabilities are of crucial concern because of a number of detrimental effects. Although the laser pulse durations of interest for these studies are relatively long, it has become evident in the past years that an understanding of these instabilities requires their characterization and analysis on picosecond time scales. At the laser intensities of interest, the growth time for stimulated Brillouin scattering (SBS) is of the order of picoseconds, and an order of magnitude shorter for stimulated Raman scattering (SRS). In this paper the authors discuss SBS and SRS in the context of their evolution on picosecond time scales. They describe the fundamental concepts associated with their growth and saturation, and recent work on the nonlinear treatment required for the modeling of these instabilities at high laser intensities.
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; Ostien, Jakob T.; Lai, Zhengshou
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
Analysis of parametric transformer with rectifier load
Ichinokura, O.; Jinzenji, T.; Tajima, K.
1993-03-01
This paper describes a push-pull parametric transformer constructed using a pair of orthogonal cores. The operating characteristics of the parametric transformer with a rectifier load were analyzed based on SPICE simulations. The analysis results show good agreement with experiment. It was found that the input surge current of the full-wave rectifier circuit with a smoothing capacitor can be compensated by the parametric transformer. Use of the parametric transformer as a power stabilizer is anticipated owing to its various functions, such as voltage regulation and overload protection.
Parametric phase diffusion analysis of irregular oscillations
NASA Astrophysics Data System (ADS)
Schwabedal, Justus T. C.
2014-09-01
Parametric phase diffusion analysis (ΦDA), a method to determine variability of irregular oscillations, is presented. ΦDA is formulated as an analysis technique for sequences of Poincaré return times found in numerous applications. The method is unbiased by the arbitrary choice of Poincaré section, i.e. isophase, which causes a spurious component in the Poincaré return times. Other return-time variability measures can be biased drastically by these spurious return times, as shown for the Fano factor of chaotic oscillations in the Rössler system. The empirical use of ΦDA is demonstrated in an application to heart rate data from the Fantasia Database, for which ΦDA parameters successfully classify heart rate variability into groups of age and gender.
Macchiato, M.; Serio, C.; Lapenna, V.; La Rotonda, L.
1993-07-01
The statistical analysis of cold air temperatures (cold spells) and hot air temperatures (hot spells) is discussed. Air temperature time series observed at 50 stations in southern Italy are investigated. The deterministic and stochastic components of the time series are identified and described by a dynamic-stochastic model that is periodic in the deterministic part (the annual cycle) and Markovian (first-order autoregressive) in the stochastic part. The annual cycle is described by only a few Fourier coefficients. Based on the model fitted to the data, the theoretical probability of cold (hot) spells is computed and compared to that estimated from the observed data. Spatial patterns are identified that make it possible to extrapolate the probability of cold (hot) spells at locations where no direct observations are available. 19 refs., 13 figs., 2 tabs.
Meadows, Cheyney; Rajala-Schultz, Päivi J; Frazer, Grant S; Phillips, Gary; Meiring, Richard W; Hoblet, Kent H
2007-07-16
The effect of a contract breeding program offered by a breeding co-operative was assessed using parametric frailty models and event-time analysis techniques in a field study of Ohio dairies. The program featured tail chalking and daily evaluation of cows for insemination by co-operative technicians; dairy employees no longer handled estrus detection activities. Test day records were obtained between early 2002 and mid-2004 for 16,453 lactations representing 11,398 cows in 31 herds identified as well-managed client herds by the breeding co-operative. Various parametric distributions for event times available in commercial software (Stata 9.1, College Station, TX) were tested to assess which distribution fit the calving-to-conception data best. After identifying the distribution with the best fit, a full model with potential confounders and other significant predictors of time to pregnancy was developed, and then frailty terms were included in the model. Generalized gamma and log-normal distributions fit the data best, but since the gamma distribution does not allow the use of frailty effects, the log-normal distribution was used in further modeling. Separate accelerated failure time models with frailty terms to account for latent effects at the herd, cow, or lactation level were developed, testing both gamma and inverse Gaussian frailty distributions. In these models, potential confounders and statistically significant predictors were also controlled for, and the association between the contract breeding program and the mean time to pregnancy was characterized using time ratios. The log-normal model identified that the interval to pregnancy was associated with breed, herd size, use of ovulation synchronization protocols, parity, calving season, somatic cell score (above or below 4.5), and maximum milk yield prior to pregnancy or censoring. While controlling for these factors, there was a reduction in average time to pregnancy among cows managed under the contract breeding program.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
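The third program's role, fitting polynomial equations to solar cell characteristics versus temperature, can be sketched with an ordinary least-squares polynomial fit via the normal equations. The "voltage versus temperature" data below are fabricated for the example, not the SKYLAB test data:

```python
def polyfit_normal(xs, ys, degree):
    """Least-squares polynomial coefficients c[0..degree] (c[k]
    multiplies x**k) via the normal equations, solved by Gaussian
    elimination with partial pivoting."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        s = b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))
        coef[r] = s / A[r][r]
    return coef

# Illustrative cell-voltage-vs-temperature data lying exactly on
# V = 0.60 - 0.0022*T + 1e-6*T**2 (made-up coefficients).
temps = [float(t) for t in range(-40, 81, 10)]
volts = [0.60 - 0.0022 * t + 1e-6 * t * t for t in temps]
coef = polyfit_normal(temps, volts, 2)
```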
Adaptive time-frequency parametrization of epileptic spikes
NASA Astrophysics Data System (ADS)
Durka, Piotr J.
2004-05-01
Adaptive time-frequency approximations of signals have proven to be a valuable tool in electroencephalogram (EEG) analysis and research, where it is believed that oscillatory phenomena play a crucial role in the brain’s information processing. This paper extends this paradigm to the nonoscillating structures such as the epileptic EEG spikes, and presents the advantages of their parametrization in general terms such as amplitude and half-width. A simple detector of epileptic spikes in the space of these parameters, tested on a limited data set, gives very promising results. It also provides a direct distinction between randomly occurring spikes or spike/wave complexes and rhythmic discharges.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
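CERs are commonly fitted as power laws, cost = a * metric^b, by linear regression in log-log space. A minimal sketch under that assumption; the "analogous product" data are fabricated, not from any real cost database:

```python
import math

def fit_power_law_cer(metrics, costs):
    """Fit cost = a * metric**b by ordinary least squares in log-log
    space, a common CER functional form."""
    lx = [math.log(m) for m in metrics]
    ly = [math.log(c) for c in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Fabricated analogous-product data: cost ($M) vs dry mass (kg),
# generated exactly from cost = 0.08 * mass**0.8.
masses = [50.0, 120.0, 300.0, 750.0, 1500.0]
costs = [0.08 * m ** 0.8 for m in masses]
a, b = fit_power_law_cer(masses, costs)
est_cost_1000kg = a * 1000.0 ** b     # estimate for a new 1000 kg system
```

An exponent b < 1, as here, encodes the economy of scale often assumed in mass-based CERs.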
Parametric cost analysis for advanced energy concepts
Not Available
1983-10-01
This report presents results of an exploratory study to develop parametric cost estimating relationships for advanced fossil-fuel energy systems. The first of two tasks was to develop a standard Cost Chart of Accounts to serve as a basic organizing framework for energy systems cost analysis. The second task included development of selected parametric cost estimating relationships (CERs) for individual elements (or subsystems) of a fossil fuel plant, nominally for the Solvent-Refined Coal (SRC) process. Parametric CERs are presented for the following elements: coal preparation; coal slurry preparation; dissolver (reactor); gasification; oxygen production; acid gas/CO2 removal; shift conversion; cryogenic hydrogen recovery; and sulfur removal. While the nominal focus of the study was on the SRC process, each of these elements is found in other fossil fuel processes. Thus, the results of this effort have broader potential application. However, it should also be noted that the CERs presented in this report are based upon a limited data base. Thus, they are applicable over a limited range of values (of the independent variables) and for a limited set of specific technologies (e.g., the gasifier CER is for the multi-train, Koppers-Totzek process). Additional work is required to extend the range of these CERs. 16 figures, 13 tables.
Parametric systems analysis for tandem mirror hybrids
Lee, J.D.; Chapin, D.L.; Chi, J.W.H.
1980-09-01
Fusion fission systems, consisting of fissile-producing fusion hybrids combining a tandem mirror fusion driver with various blanket types and net fissile-consuming LWR's, have been modeled and analyzed parametrically. Analysis to date indicates that hybrids can be competitive with mined uranium when U3O8 cost is about $100/lb, adding less than 25% to the present-day cost of power from LWR's. Of the three blanket types considered, uranium fast fission (UFF), thorium fast fission (ThFF), and thorium fission suppressed (ThFS), the ThFS blanket has a modest economic advantage under most conditions and has higher support ratios and potential safety advantages under all conditions.
Model reduction for parametric instability analysis in shells conveying fluid
NASA Astrophysics Data System (ADS)
Kochupillai, Jayaraj; Ganesan, N.; Padmanabhan, Chandramouli
2003-05-01
Flexible pipes conveying fluid are often subjected to parametric excitation due to time-periodic flow fluctuations. Such systems are known to exhibit complex instability phenomena such as divergence and coupled-mode flutter. Investigators have typically used weighted residual techniques to reduce the continuous system model to a discrete model, based on approximation functions with global support, for carrying out stability analysis. While this approach is useful for straight pipes, modelling based on the FEM is needed for the study of complicated piping systems, where the approximation functions used are local in support. However, the size of the problem is now significantly larger, and for computationally efficient stability analysis, model reduction is necessary. In this paper, model reduction techniques are developed for the analysis of parametric instability in flexible pipes conveying fluids under a mean pressure. It is shown that only those linear transformations which leave the original eigenvalues of the linear time-invariant system unchanged are admissible. The numerical technique developed by Friedmann and Hammond (Int. J. Numer. Methods Eng. 11 (1977) 1117) is used for the stability analysis. One of the key research issues is to establish criteria for deciding the basis vectors essential for an accurate stability analysis. This paper examines this issue in detail and proposes new guidelines for their selection.
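Parametric instability under time-periodic excitation is typically assessed through the Floquet monodromy matrix. As a stand-in for the pipe equations (not the paper's model), the sketch below applies the idea to a Mathieu-type oscillator: integrate over one excitation period and test whether a Floquet multiplier leaves the unit circle:

```python
import math

def mathieu_monodromy(a, q, steps=4000):
    """Integrate x'' + (a - 2q cos 2t) x = 0 over one period (pi)
    with RK4 for two independent initial conditions; the two final
    states form the columns of the 2x2 monodromy matrix."""
    h = math.pi / steps
    def deriv(t, x, v):
        return v, -(a - 2.0 * q * math.cos(2.0 * t)) * x
    cols = []
    for x0, v0 in ((1.0, 0.0), (0.0, 1.0)):
        x, v, t = x0, v0, 0.0
        for _ in range(steps):
            k1x, k1v = deriv(t, x, v)
            k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
            k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
            k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
            x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        cols.append((x, v))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def parametrically_unstable(a, q):
    """Unstable iff a Floquet multiplier leaves the unit circle; for
    this Hamiltonian system that reduces to |trace(M)| > 2."""
    M = mathieu_monodromy(a, q)
    return abs(M[0][0] + M[1][1]) > 2.0

inside_tongue = parametrically_unstable(1.0, 0.5)    # principal resonance
between_tongues = parametrically_unstable(3.0, 0.5)  # stable region
```

For the reduced pipe model the same recipe applies with the reduced state matrix in place of the scalar oscillator.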
Threshold Analysis of a THz-Wave Parametric Oscillator
NASA Astrophysics Data System (ADS)
Li, Zhong-Yang; Yao, Jian-Quan; Zhu, Neng-Nian; Wang, Yu-Ye; Xu, De-Gang
2010-06-01
The parametric gain of a terahertz-wave parametric oscillator (TPO) is analyzed. Meanwhile, the expression for the TPO threshold pump intensity is derived and theoretically analyzed for different factors. The effective interaction length between the pump wave and the Stokes wave is calculated, and particular attention is paid to the coupling efficiency of the pump wave and the Stokes wave. Such an analysis is useful for experiments on TPOs.
Lottery spending: a non-parametric analysis.
Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody
2015-01-01
We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699
NASA Astrophysics Data System (ADS)
Donnelly, Aoife; Misstear, Bruce; Broderick, Brian
2015-02-01
This paper presents a model for producing real-time air quality forecasts with both high accuracy and high computational efficiency. Temporal variations in nitrogen dioxide (NO2) levels and historical correlations between meteorology and NO2 levels are used to estimate air quality 48 h in advance. Non-parametric kernel regression is used to produce linearized factors describing variations in concentrations with wind speed and direction and, furthermore, to produce seasonal and diurnal factors. The basis for the model is a multiple linear regression which uses these factors together with meteorological parameters and persistence as predictors. The model was calibrated at three urban sites and one rural site, and the final fitted model achieved R values of between 0.62 and 0.79 for hourly forecasts and between 0.67 and 0.84 for daily maximum forecasts. Model validation using four model evaluation parameters, an index of agreement (IA), the correlation coefficient (R), the fraction of values within a factor of 2 (FAC2) and the fractional bias (FB), yielded good results. The IA for 24 hr forecasts of hourly NO2 was between 0.77 and 0.90 at urban sites and 0.74 at the rural site, while for daily maximum forecasts it was between 0.89 and 0.94 for urban sites and 0.78 for the rural site. R values of up to 0.79 and 0.81 and FAC2 values of 0.84 and 0.96 were observed for hourly and daily maximum predictions, respectively. The model requires only simple input data and very low computational resources. It was found to be an accurate and efficient means of producing real-time air quality forecasts.
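The non-parametric building block named above, kernel regression of concentration on a meteorological variable, can be sketched with a Nadaraya-Watson estimator. The "NO2 versus wind speed" relation below is synthetic and illustrative, not the paper's calibration data:

```python
import math
import random

def kernel_regression(x_train, y_train, x0, bandwidth):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian
    kernel: a locally weighted mean of y, weights K((x0 - xi)/h)."""
    weights = [math.exp(-0.5 * ((x0 - xi) / bandwidth) ** 2)
               for xi in x_train]
    return (sum(w * y for w, y in zip(weights, y_train))
            / sum(weights))

# Synthetic relation: NO2 (ug/m3) decaying with wind speed (m/s);
# functional form and noise level are made up for the example.
random.seed(3)
ws = [random.uniform(0.0, 10.0) for _ in range(2000)]
no2 = [60.0 * math.exp(-0.3 * w) + random.gauss(0.0, 3.0) for w in ws]
est_calm = kernel_regression(ws, no2, 1.0, 0.5)     # near-calm conditions
est_windy = kernel_regression(ws, no2, 8.0, 0.5)    # strong dispersion
```

Evaluating the estimator on a grid of wind speeds yields exactly the kind of linearized wind-speed factor the model feeds into its multiple linear regression.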
Soil Analysis using the semi-parametric NAA technique
Zamboni, C. B.; Silveira, M. A. G.; Medina, N. H.
2007-10-26
The semi-parametric Neutron Activation Analysis technique, using Au as a flux monitor, was applied to measure the elemental concentrations of Br, Ca, Cl, K, Mn and Na for soil characterization. The results were compared with those obtained using the Instrumental Neutron Activation Analysis technique and were found to be compatible. The viability, advantages, and limitations of these two analytical methodologies are discussed.
Parametric analysis of cryogenic carbon dioxide cooling of shell eggs.
Sabliov, C M; Farkas, B E; Keener, K M; Curtis, P A
2002-11-01
Parametric analysis of cryogenic cooling of shell eggs was performed using finite element analysis. Two cooling temperatures (-50 and -70 C), three cooling convective heat transfer coefficients (20, 50, and 100 W/m2K), two equilibration temperatures (7 and 25 C), and two equilibration heat transfer coefficients (0 and 20 W/m2K) were considered in the analysis. Lower temperatures and higher cooling convective heat transfer coefficients resulted in higher cooling rates and lower final egg temperatures. A chart and equation were developed to identify combinations of processing parameters to yield the desired egg temperature (7 C) at the end of adiabatic equilibration. Results show that a cooling time of 8.2 min was required to reach a final egg temperature of 7 C for a cooling temperature of -50 C and a convective heat transfer coefficient of 20 W/m2K. The cooling time decreased to 2 min when the convective heat transfer coefficient increased to 100 W/m2K, at a cooling temperature of -50 C. Processing at -70 C and 20 W/m2K required 5.3 min to reach a final temperature of 7 C. At a higher convective heat transfer coefficient (100 W/m2K) and -70 C, a processing time of 1.3 min was sufficient to reach the target temperature of 7 C. The results may be used as a reference in process or equipment design for shell egg cooling in cryogenic CO2. PMID:12455606
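The published cooling times come from finite element analysis; as a much cruder first-order check, a lumped-capacitance model shows how cooling time scales with coolant temperature and convective coefficient. Note that at h = 100 W/m2K an egg violates the lumped-body (low Biot number) assumption, and the property values below are assumed, so this is illustrative only:

```python
import math

def lumped_cooling_time(T0, T_inf, T_target, h, rho, c, volume, area):
    """First-order (lumped-capacitance) estimate of the time to cool
    from T0 to T_target in a medium at T_inf:
      t = (rho*V*c)/(h*A) * ln((T0 - T_inf)/(T_target - T_inf))."""
    tau = rho * volume * c / (h * area)       # thermal time constant
    return tau * math.log((T0 - T_inf) / (T_target - T_inf))

# Assumed egg-like properties (not from the paper): 50 mL volume,
# 70 cm^2 surface area, density and specific heat roughly water-like.
V, A = 50e-6, 70e-4            # m^3, m^2
rho, c = 1035.0, 3200.0        # kg/m^3, J/(kg K)
t_slow = lumped_cooling_time(25.0, -50.0, 7.0, 20.0, rho, c, V, A)   # s
t_fast = lumped_cooling_time(25.0, -70.0, 7.0, 100.0, rho, c, V, A)  # s
```

Even this crude model reproduces the qualitative trend of the paper's results: colder gas and higher h shorten the cooling time by minutes.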
Bayesian parametric estimation of stop-signal reaction time distributions.
Matzke, Dora; Dolan, Conor V; Logan, Gordon D; Brown, Scott D; Wagenmakers, Eric-Jan
2013-11-01
The cognitive concept of response inhibition can be measured with the stop-signal paradigm. In this paradigm, participants perform a 2-choice response time (RT) task where, on some of the trials, the primary task is interrupted by a stop signal that prompts participants to withhold their response. The dependent variable of interest is the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Based on the horse race model (Logan & Cowan, 1984), several methods have been developed to estimate SSRTs. None of these approaches allow for the accurate estimation of the entire distribution of SSRTs. Here we introduce a Bayesian parametric approach that addresses this limitation. Our method is based on the assumptions of the horse race model and rests on the concept of censored distributions. We treat response inhibition as a censoring mechanism, where the distribution of RTs on the primary task (go RTs) is censored by the distribution of SSRTs. The method assumes that go RTs and SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to obtain posterior distributions for the model parameters. The method can be applied to individual as well as hierarchical data structures. We present the results of a number of parameter recovery and robustness studies and apply our approach to published data from a stop-signal experiment. PMID:23163766
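The censoring idea can be illustrated with a small simulation of the horse race model. The ex-Gaussian parameter values below are arbitrary choices for illustration, not estimates from the paper:

```python
import random

random.seed(1)

def ex_gauss(mu, sigma, tau):
    # ex-Gaussian sample: Gaussian component plus exponential component
    return random.gauss(mu, sigma) + random.expovariate(1.0 / tau)

def stop_trial(ssd, go=(500.0, 50.0, 100.0), stop=(200.0, 20.0, 30.0)):
    """One stop-signal trial under the horse race model: the response is
    inhibited when the stop process (SSD + SSRT) finishes first."""
    go_rt = ex_gauss(*go)
    ssrt = ex_gauss(*stop)
    if go_rt > ssd + ssrt:
        return None        # inhibited: this go RT is censored (unobserved)
    return go_rt           # signal-respond trial: observed go RT

trials = [stop_trial(ssd=250.0) for _ in range(20000)]
observed = [rt for rt in trials if rt is not None]
p_inhibit = 1.0 - len(observed) / len(trials)
mean_observed = sum(observed) / len(observed)
```

Because the observed (signal-respond) RTs are censored from above, their mean falls below the full go-RT mean of 600 ms in this simulation; the Bayesian method inverts exactly this censoring relationship to recover the SSRT distribution.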
Parametric analysis of a magnetized cylindrical plasma
Ahedo, Eduardo
2009-11-15
The relevant macroscopic model, the spatial structure, and the parametric regimes of a low-pressure plasma confined by a cylinder and an axial magnetic field are discussed for the small-Debye-length limit, making use of asymptotic techniques. The plasma response is fully characterized by three dimensionless parameters, related to the electron gyroradius and the electron and ion collision mean free paths. Three regimes are identified: the unmagnetized regime, the main magnetized regime, and, for a low electron-collisionality plasma, an intermediate-magnetization regime. In the magnetized regimes, electron azimuthal inertia is shown to be a dominant phenomenon in part of the quasineutral plasma region and to set in before ion radial inertia. In the main magnetized regime, the plasma structure consists of a bulk diffusive region, a thin layer governed by electron inertia, a thinner sublayer controlled by ion inertia, and the non-neutral Debye sheath. The solution of the main inertial layer shows that the electron azimuthal energy near the wall is larger than the electron thermal energy, making electron resistivity effects non-negligible. The electron Boltzmann relation is satisfied only in the very vicinity of the Debye sheath edge. Ion collisionality effects are irrelevant in the magnetized regime. Simple scaling laws for plasma production and particle and energy fluxes to the wall are derived.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
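The two-loop structure can be sketched generically: parameters are drawn once per replicate in the outer loop, while environmental noise is drawn at every time step in the inner loop. The growth-rate model and all numbers below are invented for illustration and are not the paper's piping plover parameterization:

```python
import random, math

random.seed(42)

def extinction_risk(mean_lambda, param_sd, env_sd=0.10,
                    n_reps=2000, n_years=100, n0=100.0, threshold=10.0):
    """Quasi-extinction probability from a two-loop projection:
    parametric uncertainty enters the replication (outer) loop,
    temporal variance enters the time-step (inner) loop."""
    extinct = 0
    for _ in range(n_reps):
        lam = random.gauss(mean_lambda, param_sd)  # outer loop: one parameter draw
        n = n0
        for _ in range(n_years):                   # inner loop: yearly environmental noise
            n *= lam * math.exp(random.gauss(0.0, env_sd) - env_sd**2 / 2)
            if n < threshold:
                extinct += 1
                break
    return extinct / n_reps

risk_fixed = extinction_risk(1.02, param_sd=0.0)   # parametric uncertainty ignored
risk_param = extinction_risk(1.02, param_sd=0.08)  # parametric uncertainty included
```

With these illustrative values, replicates that happen to draw a low growth rate decline steadily, so the estimated extinction risk is substantially higher when parametric uncertainty is carried through the outer loop, mirroring the paper's qualitative finding.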
Multi-level approach for parametric roll analysis
NASA Astrophysics Data System (ADS)
Kim, Taeyoung; Kim, Yonghwan
2011-03-01
The present study considers a multi-level approach for the analysis of parametric roll phenomena. Three computation methods, GM variation, impulse response function (IRF), and a Rankine panel method, are applied in the multi-level approach. The IRF and Rankine panel methods are based on a weakly nonlinear formulation which includes nonlinear Froude-Krylov and restoring forces. In computations of a parametric roll occurrence test in regular waves, the IRF and Rankine panel methods show similar tendencies. Although the GM variation approach predicts the occurrence of parametric roll at twice the roll natural frequency, its frequency criterion shows a slight difference. Nonlinear roll motion in bichromatic waves is also considered in this study. To prove the existence of unstable roll motion in bichromatic waves, theoretical and numerical approaches are applied. The occurrence of parametric roll is theoretically examined by introducing the quasi-periodic Mathieu equation. Instability criteria are well predicted by the stability analysis in the theoretical approach. From Fourier analysis, it is verified that difference-frequency effects create the unstable roll motion. The occurrence of unstable roll motion in bichromatic waves is also observed in the experiment.
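The role of the Mathieu equation can be illustrated with its standard single-frequency form x'' + (a - 2q cos 2t)x = 0: inside the principal instability tongue near a = 1 a small perturbation grows exponentially, which is the mechanism behind parametric roll near twice the roll natural frequency. The integrator below is a generic sketch of that mechanism, not the paper's quasi-periodic model or ship parameters:

```python
import math

def mathieu_amplitude(a, q, t_end=80.0, dt=0.005):
    """Integrate x'' + (a - 2q cos 2t) x = 0 with classical RK4 and
    return the largest |x| reached, starting from a small perturbation."""
    def f(t, x, v):
        return v, -(a - 2.0 * q * math.cos(2.0 * t)) * x
    x, v, t = 1e-3, 0.0, 0.0
    peak = abs(x)
    while t < t_end:
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = f(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = f(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        peak = max(peak, abs(x))
    return peak

unstable = mathieu_amplitude(a=1.0, q=0.2)  # inside the principal tongue
stable = mathieu_amplitude(a=2.5, q=0.2)    # between instability tongues
```

For small q the principal tongue spans roughly a = 1 ± q, so the first case grows by orders of magnitude while the second stays bounded near its initial amplitude.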
Parametric identification of a servo-hydraulic actuator for real-time hybrid simulation
NASA Astrophysics Data System (ADS)
Qian, Yili; Ou, Ge; Maghareh, Amin; Dyke, Shirley J.
2014-10-01
In a typical Real-time Hybrid Simulation (RTHS) setup, servo-hydraulic actuators serve as interfaces between the computational and physical substructures. Time delay introduced by actuator dynamics and complex interaction between the actuators and the specimen has detrimental effects on the stability and accuracy of RTHS. Therefore, a good understanding of servo-hydraulic actuator dynamics is a prerequisite for controller design and computational simulation of RTHS. This paper presents an easy-to-use parametric identification procedure for RTHS users to obtain re-useable actuator parameters for a range of payloads. The critical parameters in a linearized servo-hydraulic actuator model are optimally obtained from genetic algorithms (GA) based on experimental data collected from various specimen mass/stiffness combinations loaded to the target actuator. The actuator parameters demonstrate a convincing convergence trend in GA. A key feature of this parametric modeling procedure is its re-usability under different testing scenarios, including different specimen mechanical properties and actuator inner-loop control gains. The models match well with experimental results. The benefit of the proposed parametric identification procedure has been demonstrated by (1) designing an H∞ controller with the identified system parameters that significantly improves RTHS performance; and (2) establishing an analysis and computational simulation of a servo-hydraulic system that helps researchers interpret system instability and improve the design of experiments.
Parametric investigation of Radome analysis methods. Volume 4: Experimental results
NASA Astrophysics Data System (ADS)
Bassett, H. L.; Newton, J. M.; Adams, W.; Ussailis, J. S.; Hadsell, M. J.; Huddleston, G. K.
1981-02-01
This Volume 4 of four volumes presents 140 measured far-field patterns and boresight error data for eight combinations of three monopulse antennas and five tangent ogive Rexolite radomes at 35 GHz. The antennas and radomes, all of different sizes, were selected to provide a range of parameters as found in the applications. The measured data serve as true data in the parametric investigation of radome analysis methods to determine the accuracies and ranges of validity of selected methods of analysis.
Enhanced optical squeezing from a degenerate parametric amplifier via time-delayed coherent feedback
NASA Astrophysics Data System (ADS)
Német, Nikolett; Parkins, Scott
2016-08-01
A particularly simple setup is introduced to study the influence of time-delayed coherent feedback on the optical squeezing properties of the degenerate parametric amplifier. The possibility for significantly enhanced squeezing is demonstrated both on resonance and in sidebands, at a reduced pump power compared to the case without feedback. We study a broad range of operating parameters and their influence on the characteristic squeezing of the system. A classical analysis of the system dynamics reveals the connection between the feedback-modified landscape of stability and enhanced squeezing.
Parametric analysis of transient skin heating induced by terahertz radiation.
Zilberti, Luca; Arduino, Alessandro; Bottauscio, Oriano; Chiampi, Mario
2014-07-01
This paper investigates the effect of relevant physical parameters on transient temperature elevation induced in human tissues by electromagnetic waves in the terahertz (THz) band. The problem is defined by assuming a plane wave, which, during a limited time interval, normally impinges on the surface of a 3-layer model of the human body, causing a thermal transient. The electromagnetic equations are solved analytically, while the thermal ones are handled according to the finite element method. A parametric analysis is performed with the aim of identifying the contribution of each parameter, showing that the properties of the first skin layer (except blood flow) play a major role in the computation of the maximum temperature rise for the considered exposure situation. Final results, obtained by combining all relevant parameters together, show that the deviation from the reference solution of the maximum temperature elevation in skin is included in the coverage intervals from -30% to +10% at 0.1 THz and from -33% to +18% at 1 THz (with 95% confidence level). These data allow bounding the possible temperature increase against the spread of tissue properties that could be reasonably used for dosimetric simulations. PMID:24510310
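The thermal side of such a computation can be sketched in one dimension: THz energy is absorbed within a very shallow depth, so it can be approximated as a surface heat flux on a single homogeneous skin layer. This is a deliberate simplification of the paper's 3-layer model, and the flux, exposure time, and tissue property values below are assumed for illustration:

```python
def skin_transient(flux=100.0, t_on=1.0, t_end=2.0,
                   k=0.37, alpha=1.1e-7, depth=2e-3, nx=80):
    """Explicit finite-difference solution of 1-D heat conduction with a
    surface flux applied for t_on seconds (THz power absorbed at the
    skin surface) and an insulated deep boundary. Returns the profile
    of temperature rise above baseline, in kelvin, at t_end."""
    dx = depth / nx
    dt = 0.4 * dx * dx / alpha          # explicit stability: dt <= dx^2/(2 alpha)
    T = [0.0] * (nx + 1)
    t = 0.0
    while t < t_end:
        q = flux if t < t_on else 0.0
        Tn = T[:]
        # surface node: flux boundary condition via a ghost node
        Tn[0] = T[0] + 2.0*alpha*dt/dx**2 * (T[1] - T[0] + q*dx/k)
        for i in range(1, nx):
            Tn[i] = T[i] + alpha*dt/dx**2 * (T[i-1] - 2.0*T[i] + T[i+1])
        Tn[nx] = Tn[nx-1]               # insulated deep boundary
        T, t = Tn, t + dt
    return T

T = skin_transient()
```

Even this crude sketch reproduces the qualitative behavior the paper analyzes: the temperature rise is concentrated at the surface and decays with depth, so the first skin layer's properties dominate the peak temperature.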
From wavelets to adaptive approximations: time-frequency parametrization of EEG.
Durka, Piotr J
2003-01-01
This paper presents a summary of time-frequency analysis of the electrical activity of the brain (EEG). It covers in detail two major steps: the introduction of wavelets and adaptive approximations. The presented studies include time-frequency solutions to several standard research and clinical problems encountered in the analysis of evoked potentials, sleep EEG, epileptic activities, ERD/ERS and pharmaco-EEG. Based upon these results we conclude that the matching pursuit algorithm provides a unified parametrization of EEG, applicable in a variety of experimental and clinical setups. This conclusion is followed by a brief discussion of the current state of the mathematical and algorithmic aspects of adaptive time-frequency approximations of signals. PMID:12605721
Scaramuzzino, F.
2009-09-09
This paper considers a qualitative analysis of the solution of a pure exchange general economic equilibrium problem with respect to two independent parameters. Some recent results obtained by the author in the static and dynamic cases are collected. These results have been applied to a particular parametric case: attention is focused on a numerical application for which the existence of the solution of the time-dependent parametric variational inequality that describes the equilibrium conditions has been proved by means of the direct method. Using MATLAB computation after a linear interpolation, the equilibrium curves have been visualized.
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed which studied the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier while Reference Plant 3 used a 'state of the art' atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the Reference Plants. Parametric variations include the type of coal (Montana Rosebud or Illinois No. 6), cleanup systems (hot or cold gas cleanup), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In one task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. A second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
Deriving the Coronal Magnetic Field Using Parametric Transformation Analysis
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Rose, M. Franklin (Technical Monitor)
2001-01-01
When the plasma beta is greater than 1, the gas pressure dominates over the magnetic pressure. This ratio varies along coronal magnetic field lines from beta greater than 1 in the photosphere at the base of the field lines, to beta much less than 1 in the mid-corona, to beta greater than 1 in the upper corona. Almost all magnetic field extrapolations do not or cannot take into account this full range of beta. They essentially assume beta much less than 1, since the full boundary conditions do not exist in the beta-greater-than-1 regions. We use a basic parametric representation of the magnetic field lines such that the field lines can be manipulated to match linear features in the EUV and SXR coronal images in a least squares sense. This research employs free-form deformation mathematics to generate the associated coronal magnetic field. In our research program, the complex magnetic field topology uses Parametric Transformation Analysis (PTA), a new and innovative method that we are developing to describe the coronal fields. In this technique the field lines can be viewed as being embedded in a plastic medium, the frozen-in-field-line concept. As the medium is deformed, the field lines are similarly deformed. However, the advantage of the PTA method is that the field line movement represents a transformation of one magnetic field solution into another magnetic field solution. When fully implemented, this method will allow the resulting magnetic field solution to fully match the magnetic field lines with EUV/SXR coronal loops by minimizing the differences in direction and dispersion of a collection of PTA magnetic field lines and observed field lines. The derived magnetic field will then allow beta-greater-than-1 regions to be included, the electric currents to be calculated, and the Lorentz force to be determined. The advantage of this technique is that the solution is: (1) independent of the upper and side boundary conditions, (2) allows non
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the types of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L-shaped beam for which an analytical solution was already available. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.
NASA Astrophysics Data System (ADS)
Zaghari, Bahareh; Rustighi, Emiliano; Ghandchi Tehrani, Maryam
2015-03-01
Vibration energy harvesting is the transformation of vibration energy to electrical energy. The motivation of this work is to use vibration energy harvesting to power wireless sensors that could be used in inaccessible or hostile environments to transmit information for condition health monitoring. Although considerable work has been done in the area of energy harvesting, there is still a demand for robust and small vibration energy harvesters that can produce a reliable amount of energy from the random excitations of a real environment. Parametrically excited harvesters can have time-varying stiffness. Parametric amplification is used to tune vibration energy harvesters to maximize energy gains at system superharmonics, often at twice the first natural frequency. In this paper a parametrically excited harvester with cubic and cubic parametric nonlinearity is introduced. The advantages of having cubic and cubic parametric nonlinearity are explained theoretically and experimentally.
NASA Astrophysics Data System (ADS)
Murray, E. M.; Cobourn, K.; Flores, A. N.; Pierce, J. L.
2014-12-01
As climate changes, the final date of spring snowmelt is projected to occur earlier in the year within the western United States. This earlier snowmelt timing may impact crop yield in snow-dominated watersheds by changing the timing of water delivery to agricultural fields. There is considerable uncertainty about how the agricultural impacts of snowmelt timing may vary by region, crop type, and practices such as irrigated vs. dryland farming. Establishing the relationship between snowmelt timing and agricultural yield is important for understanding how changes in large-scale climatic indices (like snowmelt date) may be associated with changes in agricultural yield. A better understanding of the influence of changes in snowmelt on non-irrigated crop yield may additionally be extrapolated to better understand how climate change may alter biomass production in non-managed ecosystems. We utilized parametric regression techniques to isolate the magnitude of the impact snowmelt timing has had on historical crop yields, independently of climate and spatial variables that also affect yield. To do this, we examined the historical relationship between snowmelt timing and non-irrigated wheat and barley yield using a multiple linear regression model to predict yield in several Idaho counties as a function of snowmelt date, climate variables (precipitation and growing degree-days), and spatial differences between counties. We utilized non-parametric techniques to determine where snowmelt timing has positively versus negatively impacted yield. To do this, we developed classification and regression trees to identify spatial controls (e.g., latitude, elevation) on the relationship between snowmelt timing and yield. Most trends suggest a decrease in crop yield with earlier snowmelt, but a significant opposite relationship is observed in some regions of Idaho. An earlier snowmelt date occurring at high latitudes corresponds with higher than average wheat yield. Therefore, Northern Idaho may
Containment Sump Neutralization Using Trisodium Phosphate: Parametric Analysis
Zaki, Tarek G.
2002-07-01
For post-LOCA conditions, the pH of the aqueous solution collected in the containment sump after completion of injection of containment spray and ECC water, and all additives for reactivity control, fission product removal, and other purposes, should be maintained at a level sufficiently high to provide assurance that significant long-term iodine re-evolution does not occur. Long-term iodine retention may be assumed only when the equilibrium sump solution pH is above 7. This pH value should be achieved by the onset of the spray recirculation mode. A trisodium phosphate (TSP)-based, passive system can be used to achieve this pH value. This is a proven technology that is already in use in nuclear power plants. The system consists of several wire mesh baskets, filled with TSP and strategically located in the sump in order to ensure timely dissolution of TSP and rapid pH rise under LOCA conditions. Accurate determination of the total quantity of TSP required to raise the pH of borated water in the sump to within the acceptable range is the key element of a proper design of this system. However, this type of analysis is quite involved and highly iterative, which requires the use of a computer program. This paper describes the basis for a computer program that determines the required quantity of TSP as a function of the quantity of borated water in the sump, the boron concentration, the sump temperature, and the specified pH value. The equilibrium quantities of boric acid species are calculated iteratively based on its molal equilibrium quotients. The equilibrium quantities of phosphoric acid species are calculated iteratively based on its dissociation constants. The charge balance error (CBE) is the sum of ionic charges for all species and ions in the solution, including sodium. All species are in equilibrium when the CBE reduces to zero. The paper also presents the results of a parametric analysis that is performed using this computer program. Ranges of borated water
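The iterative charge-balance solution can be sketched as a bisection on [H+]. The 25 C equilibrium constants below are textbook values and the species set is deliberately minimal (three phosphoric acid dissociations plus monomeric boric acid); the paper's program additionally treats temperature dependence and molal equilibrium quotients:

```python
import math

def sump_ph(tsp_molar, boron_molar=0.23):
    """Charge-balance pH of borated sump water after dissolving TSP
    (Na3PO4), using illustrative 25 C constants."""
    ka1, ka2, ka3 = 10.0**-2.15, 10.0**-7.20, 10.0**-12.35  # H3PO4
    kb = 10.0**-9.24                                        # boric acid
    kw = 1.0e-14
    na, pt, bt = 3.0 * tsp_molar, tsp_molar, boron_molar

    def charge_balance(h):
        # cations minus anions; monotonically increasing in [H+]
        d = h**3 + ka1*h**2 + ka1*ka2*h + ka1*ka2*ka3
        p_charge = pt * (ka1*h**2 + 2.0*ka1*ka2*h + 3.0*ka1*ka2*ka3) / d
        borate = bt * kb / (kb + h)
        return na + h - kw/h - p_charge - borate

    lo, hi = 1.0e-14, 1.0            # bracket [H+] between pH 14 and pH 0
    for _ in range(100):
        mid = math.sqrt(lo * hi)     # bisect in pH (log) space
        if charge_balance(mid) > 0.0:
            hi = mid                 # charge surplus: [H+] is too high
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))
```

Here `sump_ph(0.0)` gives the acidic pH of the borated water alone, and the pH rises monotonically as TSP dissolves; sizing the TSP baskets amounts to inverting this relationship for the target pH.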
Desiccant Enhanced Evaporative Air Conditioning: Parametric Analysis and Design; Preprint
Woods, J.; Kozubal, E.
2012-10-01
This paper presents a parametric analysis using a numerical model of a new concept in desiccant and evaporative air conditioning. The concept consists of two stages: a liquid desiccant dehumidifier and a dew-point evaporative cooler. Each stage consists of stacked air channel pairs separated by a plastic sheet. In the first stage, a liquid desiccant film removes moisture from the process (supply-side) air through a membrane. An evaporatively-cooled exhaust airstream on the other side of the plastic sheet cools the desiccant. The second-stage indirect evaporative cooler sensibly cools the dried process air. We analyze the tradeoff between device size and energy efficiency. This tradeoff depends strongly on process air channel thicknesses, the ratio of first-stage to second-stage area, and the second-stage exhaust air flow rate. A sensitivity analysis reiterates the importance of the process air boundary layers and suggests a need for increasing airside heat and mass transfer enhancements.
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines whether the prediction of one series can be improved by incorporating information from the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first one. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, while still being suitable for evaluating causality. The usefulness of this measure of causality is shown in two applications. In the first application, a physiological one, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
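The prediction-improvement definition is easy to state in the linear special case (one lag, ordinary least squares). The authors replace this linear predictor with radial basis functions, so the sketch below illustrates only the Granger logic, not their RBF estimator:

```python
import random

random.seed(7)

def granger_var_reduction(x, y):
    """Fractional reduction in the error of predicting y[t] when x[t-1]
    is added to y's own past (linear, one-lag Granger causality)."""
    n = len(y) - 1
    # restricted model: y[t] = a * y[t-1]
    syy = sum(y[t-1]*y[t-1] for t in range(1, n+1))
    sy1 = sum(y[t-1]*y[t] for t in range(1, n+1))
    a = sy1 / syy
    e_r = sum((y[t] - a*y[t-1])**2 for t in range(1, n+1))
    # full model: y[t] = a2*y[t-1] + b2*x[t-1], via 2x2 normal equations
    sxx = sum(x[t-1]*x[t-1] for t in range(1, n+1))
    sxy = sum(x[t-1]*y[t-1] for t in range(1, n+1))
    sx1 = sum(x[t-1]*y[t] for t in range(1, n+1))
    det = syy*sxx - sxy*sxy
    a2 = (sy1*sxx - sx1*sxy) / det
    b2 = (sx1*syy - sy1*sxy) / det
    e_f = sum((y[t] - a2*y[t-1] - b2*x[t-1])**2 for t in range(1, n+1))
    return 1.0 - e_f / e_r

# synthetic system: x drives y with a one-step lag; y does not drive x
x, y = [random.gauss(0.0, 1.0)], [0.0]
for _ in range(5000):
    x.append(0.5*x[-1] + random.gauss(0.0, 1.0))
    y.append(0.5*y[-1] + 0.8*x[-2] + random.gauss(0.0, 0.5))

c_xy = granger_var_reduction(x, y)  # x's past helps predict y
c_yx = granger_var_reduction(y, x)  # y's past should not help predict x
```

The asymmetry of `c_xy` versus `c_yx` is the causality statement; the paper's contribution is making the predictors nonlinear without losing it.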
Syndrome Surveillance Using Parametric Space-Time Clustering
KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.
2002-11-01
As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a "health index" of the people living in this area. As a surrogate for bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
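The SPRT idea on aggregated counts can be sketched for a single location with Poisson daily syndrome counts. The rates, error levels, and the Knuth-style Poisson sampler below are illustrative assumptions, not the surveillance system's actual models:

```python
import math, random

random.seed(3)

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below e^-lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald SPRT on daily counts: H0 background rate lam0 vs H1 elevated
    rate lam1. Returns ('H1', day) on an alarm, ('H0', day) if the null
    is accepted, or (None, last_day) if no decision is reached."""
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    llr = 0.0
    for day, k in enumerate(counts, start=1):
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)  # Poisson log-LR
        if llr >= upper:
            return 'H1', day
        if llr <= lower:
            return 'H0', day
    return None, len(counts)

quiet = [poisson(10.0) for _ in range(30)]     # background syndrome counts
outbreak = [poisson(18.0) for _ in range(30)]  # elevated counts
decision_quiet = sprt_poisson(quiet, 10.0, 18.0)
decision_outbreak = sprt_poisson(outbreak, 10.0, 18.0)
```

The sequential character is the point: the elevated sequence triggers an alarm within a few days rather than after a fixed analysis window, which is what gives the SPRT its early-detection advantage over periodic regression.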
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, with their potential to provide more information on human pathophysiology and the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrate that the method adopted herein constitutes a first step towards the realization of parametric foot models for biomechanical analysis. Furthermore the study showed that the methodology can successfully describe features of the foot, and evaluate differences in shape between healthy and diabetic subjects. PMID:27068864
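The shape-model idea can be sketched with a matrix-free PCA (power iteration for the first mode) on synthetic outline vectors. Real foot surfaces would first need corresponding landmark sets, which is precisely the hard-to-automate step the paper targets; the "template outline" and its single mode of variation below are invented:

```python
import random, math

random.seed(0)

def principal_component(shapes, iters=200):
    """Mean shape and first principal component of a set of shape
    vectors, via power iteration on the (implicit) covariance matrix."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[j] for s in shapes) / n for j in range(d)]
    centred = [[s[j] - mean[j] for j in range(d)] for s in shapes]
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    for _ in range(iters):
        # w = C v computed matrix-free as sum_i (x_i . v) x_i / n
        w = [0.0] * d
        for xi in centred:
            dot = sum(xi[j] * v[j] for j in range(d))
            for j in range(d):
                w[j] += dot * xi[j] / n
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return mean, v

# synthetic "outlines": a template scaled along one known mode + noise
template = [math.sin(0.1 * j) for j in range(30)]
shapes = []
for _ in range(40):
    b = random.gauss(0.0, 1.0)
    shapes.append([template[j] * (1.0 + 0.2 * b) + random.gauss(0.0, 0.01)
                   for j in range(30)])
mean, pc1 = principal_component(shapes)
```

New shapes are then generated as `mean + b * pc1` for a scalar mode weight b, which is what makes the model "parametric": a handful of mode weights replaces the full mesh geometry.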
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
With a structural power flow approach for the analysis of structure-borne transmission of structural vibrations defined and developed, the technique is used to perform an analysis of the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the types of results obtained by the two methods. Additionally, to demonstrate the advantages of using the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each method.
2014-01-01
Background Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation and used auto-correlation of rates, local clocks, or the so called “uncorrelated relaxed clock” where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior could be strongly influenced by the prior. Results We develop a maximum likelihood method – Physher – that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the real topology contains two long branches below the root node, even when evolution is strongly clock-like. Conclusions These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/. PMID:25055743
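A much simpler strict-clock baseline for heterochronous data is root-to-tip regression: fit root-to-tip genetic distance against sampling date by least squares, so the slope estimates the substitution rate and the x-intercept estimates the root date. This is only a sanity check, not Physher's local/discrete-clock maximum likelihood, and the rate and distances below are synthetic:

```python
import random

random.seed(5)

def clock_regression(times, distances):
    """Least-squares fit distance = rate * time + intercept, the classic
    root-to-tip regression used to assess clock-like evolution."""
    n = len(times)
    mt = sum(times) / n
    md = sum(distances) / n
    cov = sum((t - mt) * (d - md) for t, d in zip(times, distances))
    var = sum((t - mt) ** 2 for t in times)
    rate = cov / var
    return rate, md - rate * mt  # rate in subs/site/year, intercept

# heterochronous samples: true rate 0.003 subs/site/year, noisy distances
times = [2000.0 + i for i in range(15)]
dists = [0.05 + 0.003 * (t - 2000.0) + random.gauss(0.0, 0.002)
         for t in times]
rate, intercept = clock_regression(times, dists)
root_date = -intercept / rate  # x-intercept: estimated root (TMRCA) date
```

With this synthetic data the regression recovers the rate and places the root date a couple of decades before the first sample, illustrating how sampling-time spread is what makes the rate identifiable from heterochronous sequences.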
Interactive flutter analysis and parametric study for conceptual wing design
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1995-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well defined structural and aerodynamic data are not available, and then study the effect of changes in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed on the MathCad (trademark) platform for Macintosh, with integrated documentation, graphics, database and symbolic mathematics. The analysis method was based on nondimensional parametric plots of two primary flutter parameters, namely the Regier number and the Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The plots were compiled in a Vaught Corporation report from a vast database of past experiments and wind tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended Wing Body concept, proposed by McDonnell Douglas Corporation. Using a set of assumed data, the preliminary flutter boundary and the variation of flutter dynamic pressure with altitude, Mach number and torsional stiffness were determined.
NASA Astrophysics Data System (ADS)
Wei, Sha; Han, Qinkai; Peng, Zhike; Chu, Fulei
2016-05-01
System parameters in mechanical systems are inevitably uncertain due to variability in geometric and material properties, lubrication condition and wear. For a more realistic dynamic analysis of a parametrically excited system, the effect of uncertain parameters should be taken into account. This paper presents a new non-probabilistic analysis method for solving the dynamic responses of parametrically excited systems under uncertainties and multi-frequency excitations. By combining the multi-dimensional harmonic balance method (MHBM) and the Chebyshev inclusion function (CIF), an interval multi-dimensional harmonic balance method (IMHBM) is obtained. To illustrate the accuracy of the proposed method, a time-varying geared system of a wind turbine with different kinds of uncertainties is analyzed. Comparison with the results of the scanning method shows that the presented method is valid and effective for parametrically excited systems with uncertainties and multi-frequency excitations. The effects of uncertain system parameters, including uncertain mesh stiffnesses and uncertain bearing stiffnesses, on the frequency responses of the system are also discussed in detail. It is shown that the dynamic responses of the system are insensitive to the uncertain mesh stiffness and bearing stiffnesses of the planetary gear stage, whereas the uncertain bearing stiffnesses of the intermediate and high-speed stages lead to relatively large uncertainties in the dynamic responses around resonant regions. These results provide valuable guidance for the optimal design and condition monitoring of wind turbine gearboxes.
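The "scanning method" used above as the reference solution can be sketched as a brute-force sweep over the interval parameter box, taking the envelope of the response; the interval method (IMHBM) aims to bound the same envelope far more cheaply. A minimal sketch on a single-DOF receptance (a hypothetical stand-in for the geared-system model; all parameter values are invented):

```python
import math

# Hedged sketch of the "scanning method": bound a frequency-response amplitude
# under interval (non-probabilistic) parameter uncertainty by scanning a grid
# over the parameter box. 1-DOF receptance |H| = 1/sqrt((k - m*w^2)^2 + (c*w)^2)
# stands in for the geared-system model; values are invented.
def scan_bounds(omega, k_lo, k_hi, c_lo, c_hi, m=1.0, n=50):
    amps = []
    for i in range(n + 1):
        k = k_lo + (k_hi - k_lo) * i / n
        for j in range(n + 1):
            c = c_lo + (c_hi - c_lo) * j / n
            amps.append(1.0 / math.hypot(k - m * omega ** 2, c * omega))
    return min(amps), max(amps)

lo, hi = scan_bounds(omega=2.0, k_lo=9.0, k_hi=11.0, c_lo=0.4, c_hi=0.6)
```

For this monotone response the envelope is attained at corners of the box, which is why a coarse scan already brackets it; non-monotone responses need finer grids, motivating the interval arithmetic the paper develops.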
Parametric analysis of orthopedic screws in relation to bone density.
Zanetti, Elisabetta M; Salaorno, Massimiliano; Grasso, Giovanni; Audenino, Alberto L
2009-01-01
A global study of the geometry and material properties of orthopedic screws was performed, considering not only the effect of each single factor (screw pitch, number of threads, fillet angle, etc.) but also their interactions with respect to bone density. The stress patterns resulting from different screw geometries and bone densities were analyzed using finite element techniques, taking into account different levels of osseointegration between the screw and the bone. These numerical models were validated through experimental pull-out tests, in which a pull-out force of 120 N produced localized failure of the last thread (stresses above 0.42 MPa). The results of the numerical simulations were then summarised using a multi-factorial parametric analysis. This demonstrated the great relevance of the interaction between bone density and screw pitch, showing that the optimal screw pitch can vary by more than 25% for different densities (0.35 g/cm^3 and 0.47 g/cm^3, respectively). The parameters calculated by means of the multi-factorial analysis allow the pull-out force to be estimated for different osseointegration levels, different screw geometries and material properties, and different bone densities. The final objective is to determine the best choice of implant for each individual patient. PMID:19587807
NASA Astrophysics Data System (ADS)
Andronov, I. L.; Chinarova, L. L.
Numerical comparison of the methods for periodogram analysis is carried out for the parametric modifications of the Fourier transform by Deeming T.J. (1975, Ap. Space Sci., 36, 137); Lomb N.R. (1976, Ap. Space Sci., 39, 447); Andronov I.L. (1994, Odessa Astron. Publ., 7, 49); parametric modifications based on spline approximations of different order n and defect k by Jurkevich I. (1971, Ap. Space Sci., 13, 154; n = 0, k = 1); Marraco H.G., Muzzio J.C. (1980, P.A.S.P., 92, 700; n = 1, k = 2); Andronov I.L. (1987, Contrib. Astron. Inst. Czechoslovak. 20, 161; n = 3, k = 1); and non-parametric modifications by Lafler J. and Kinman T.D. (1965, Ap.J.Suppl., 11, 216), Burke E.W., Rolland W.W. and Boy W.R. (1970, J.R.A.S.Canada, 64, 353), Deeming T.J. (1970, M.N.R.A.S., 147, 365), Renson P. (1978, As. Ap., 63, 125) and Dworetsky M.M. (1983, M.N.R.A.S., 203, 917). For some numerical models the values of the mean, variance, asymmetry and excess of the test functions are determined, and the correlations between them are discussed. Analytic estimates are derived for the mathematical expectation of the test function for the different methods, and for the dispersion of the test function of Lafler and Kinman (1965) and of the parametric functions. The statistical distribution of the test functions computed for fixed data and various frequencies is significantly different from that computed for various data realizations. The histogram for the non-parametric test functions is nearly symmetric for normally distributed uncorrelated data and is characterized by a distinctly negative asymmetry for noisy data with periodic components. The non-parametric test functions may be subdivided into two groups - those similar to that of Lafler and Kinman (1965) and those similar to that of Deeming (1970). The correlation coefficients for the test functions within each group are close to unity for a large number of data points. Conditions for significant influence of the phase difference between the data on the test functions are
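Of the non-parametric test functions compared above, the Lafler-Kinman (1965) statistic is the simplest to sketch: phase the observations at a trial period, sum the squared differences of magnitudes at adjacent phases, and normalize by the variance; the true period minimizes the statistic. A minimal sketch (toy data, not the paper's numerical models):

```python
import math

# Hedged sketch of the Lafler-Kinman (1965) non-parametric test function:
# sort observations by phase at a trial period and compare magnitudes at
# adjacent phases. Smooth phasing (true period) gives a small statistic.
def lafler_kinman(times, mags, period):
    order = sorted(range(len(times)), key=lambda i: (times[i] % period) / period)
    m = [mags[i] for i in order]
    mean = sum(m) / len(m)
    num = sum((m[i] - m[i - 1]) ** 2 for i in range(1, len(m)))
    den = sum((x - mean) ** 2 for x in m)
    return num / den

# Unevenly sampled sine with true period 0.5; a wrong trial period scrambles
# the phased curve and inflates the statistic.
ts = [0.0, 0.13, 0.31, 0.47, 0.59, 0.72, 0.88, 1.05, 1.21, 1.40, 1.56, 1.73]
ys = [math.sin(2 * math.pi * t / 0.5) for t in ts]
good = lafler_kinman(ts, ys, 0.5)
bad = lafler_kinman(ts, ys, 0.37)
```

Scanning the trial period and plotting this statistic yields the periodogram whose statistical properties the abstract analyzes.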
Parametric analysis of a passive cyclic control device for helicopters
NASA Technical Reports Server (NTRS)
Kumagai, H.
1984-01-01
A parametric study of a passive device which provides a cyclic longitudinal control moment for a helicopter rotor was performed. It utilizes a rotor blade tip which is structurally decoupled from the blade inboard section; this rotor configuration is generally called the Free-Tip Rotor. A two-dimensional numerical model was used to review the Constant Lift Tip Rotor, a predecessor of the current configuration, and then the same model was applied to the Passive Cyclic Control Device. The Constant Lift Tip was shown to suppress the vibratory lift loading on the tip around the azimuth and to eliminate a significant negative lift peak on the advancing tip. The Passive Cyclic Control Device showed a once-per-revolution lift oscillation with a large amplitude, while minimizing the higher harmonic terms of the lift oscillation. This once-per-revolution oscillation provides the cyclic moment needed to trim the rotor longitudinally. A rotor performance analysis was performed with a three-dimensional numerical model. It indicated that the vortices shed from the junction between the tip and the inboard section have a strong influence on the tip and may severely limit tip performance. It was also shown that the Free-Tip allows the inboard section to have a larger twist, which results in better performance.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions and accounting for the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis was performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence these are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimation. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
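The One-Factor-At-A-Time screening described above can be sketched as: perturb each input in turn while holding the others at baseline, and rank parameters by a normalized sensitivity index. The gross-margin model below is a hypothetical stand-in, not the authors' disaggregated farm model:

```python
# Hedged sketch of One-Factor-At-A-Time (OAT) screening: bump each parameter
# by a fixed relative amount with the others at baseline, and compute a
# normalized index |dY/Y| / |dX/X|. Model and values are invented.
def oat_indices(model, baseline, delta=0.10):
    y0 = model(baseline)
    indices = {}
    for name, value in baseline.items():
        bumped = dict(baseline)
        bumped[name] = value * (1.0 + delta)
        indices[name] = abs(model(bumped) - y0) / abs(y0) / delta
    return indices

# Toy gross margin: revenue scales with crop yield and water supply, minus cost.
margin = lambda p: p["yield"] * p["water"] * 1.5 - p["cost"]
base = {"yield": 4.0, "water": 100.0, "cost": 200.0}
s = oat_indices(margin, base)
```

Ranking the indices reproduces the kind of ordered influence list the abstract reports; OAT is cheap but ignores parameter interactions, which is why it is used here only for screening.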
Second-order contributions to relativistic time delay in the parametrized post-Newtonian formalism
Richter, G.W.; Matzner, R.A.
1983-12-15
Using a parametrized expansion of the solar metric to second order in the Newtonian potential, we calculate the relativistic delay in the round-trip travel time of a radar signal reflected from a nearby planet. We find that one second-order contribution to the delay is on the order of ten nanoseconds, which is comparable to the uncertainties in present-day experiments involving the Viking spacecraft.
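For context, the first-order (Shapiro) contribution, to which the second-order term computed here is a correction, can be written in the PPN formalism for a round-trip signal whose closest approach distance d is much smaller than the heliocentric distances r_e and r_p of the Earth and the reflecting planet (this is the standard first-order result; the paper's subject is the next order in the Newtonian potential):

```latex
\Delta t_{(1)} \simeq (1+\gamma)\,\frac{2GM_\odot}{c^{3}}\,
\ln\!\left(\frac{4\,r_e\,r_p}{d^{2}}\right)
```

Here γ is the PPN parameter, equal to 1 in general relativity. For a ray grazing the Sun this first-order round-trip delay is of order 100 μs, so a ~10 ns second-order term amounts to roughly a part in 10^4 of it, comparable, as the abstract notes, to Viking-era timing uncertainties.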
NASA Astrophysics Data System (ADS)
Kraft, Manuel; Hein, Sven M.; Lehnert, Judith; Schöll, Eckehard; Hughes, Stephen; Knorr, Andreas
2016-08-01
Quantum coherent feedback control is a measurement-free control method fully preserving quantum coherence. In this paper we show how time-delayed quantum coherent feedback can be used to control the degree of squeezing in the output field of a cavity containing a degenerate parametric oscillator. We focus on the specific situation of Pyragas-type feedback control where time-delayed signals are fed back directly into the quantum system. Our results show how time-delayed feedback can enhance or decrease the degree of squeezing as a function of time delay and feedback strength.
SAT-Based (Parametric) Reachability for a Class of Distributed Time Petri Nets
NASA Astrophysics Data System (ADS)
Penczek, Wojciech; Półrola, Agata; Zbrzezny, Andrzej
Formal methods - among them model checking techniques - play an important role in the design and production of both systems and software. In this paper we deal with an adaptation of bounded model checking methods for timed systems, developed for timed automata, to the case of time Petri nets. We consider distributed time Petri nets and parametric reachability checking, but the approach can be easily adapted to the verification of other kinds of properties for which bounded model checking methods exist. The theoretical description is supported by some experimental results, generated using an extension of the model checker VerICS.
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous inferential parametric statistical measures are calculated from…
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The variables parameterized include operating characteristics for four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. The annual fuel consumption for selected parametric combinations is presented, and a breakeven economic analysis is given for one parametric combination. The results show that fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
Rosenberg, D; Marino, R; Herbert, C; Pouquet, A
2016-01-01
We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, in a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024^3 points, with no modeling of the small scales, no forcing and with large-scale random initial conditions for the velocity field only; altogether 65 runs are analyzed in this paper. The buoyancy Reynolds number defined as R_B = Re Fr^2 varies from negligible values to ≈ 10^5, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes, as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seems to play a major role, in the absence of forcing, in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of the Richardson number Ri. It reaches a plateau - on which the time scales become comparable, γ ≈ 0.6 - when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards isotropization of the flow. PMID:26830757
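The dimensionless bookkeeping above reduces to simple arithmetic; a sketch computing the buoyancy Reynolds number R_B = Re Fr^2 for a few invented runs (not the paper's 65-run database):

```python
# Hedged sketch of the parameter bookkeeping: the buoyancy Reynolds number
# R_B = Re * Fr^2 classifies runs from weakly to strongly turbulent regimes.
# The run list is invented illustration, not the paper's database.
def buoyancy_reynolds(Re, Fr):
    return Re * Fr ** 2

runs = [(5.4e4, 0.04), (1.0e5, 0.1), (2.0e5, 0.7)]
rb = [buoyancy_reynolds(Re, Fr) for Re, Fr in runs]
```

The three invented runs span R_B from below 100 (buoyancy-dominated) to near 10^5, the atmospheric/oceanic end of the range quoted in the abstract.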
NASA Astrophysics Data System (ADS)
Wu, Zhishen; Xu, Bin
2003-07-01
A structural parametric identification strategy based on neural network algorithms, using dynamic macro-strain measurements in the time domain from long-gage strain sensors based on fiber optic sensing techniques such as Fiber Bragg Grating (FBG) sensors, is developed. An array of long-gage sensors is bonded to the structure to measure macro-strains reliably and accurately. By the proposed methodology, the structural parameter of stiffness can be identified without any eigenvalue analysis or optimization computation. A beam model with known mass distribution is considered as the object structure. First, an emulator neural network is presented to identify the beam structure in its current state. Free-vibration macro-strain responses of the beam structure are used to train the emulator neural network. The trained emulator neural network can forecast the free-vibration macro-strain response of the beam structure with sufficient precision and quantify the difference between the free-vibration macro-strain responses of assumed structures with different structural parameters and those of the original beam structure. A root mean square (RMS) error vector is introduced to evaluate this difference. Subsequently, corresponding to each assumed structure with different structural parameters, the RMS error vector is calculated. Using a training data set composed of the structural parameters and RMS error vectors, a parametric evaluation neural network is trained. Treating a beam structure as an existing structure, its stiffness can then be forecast with the trained parametric evaluation neural network. It is shown that the parametric identification strategy using macro-strain measurements from long-gage sensors has the potential of being a practical tool for a health monitoring methodology applied to civil engineering structures.
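The RMS error vector at the heart of the strategy above can be sketched directly: one RMS value per long-gage sensor, comparing candidate-structure responses with the measured ones. The numbers below are invented illustration:

```python
# Hedged sketch of the RMS error vector used to compare free-vibration
# macro-strain responses of candidate structures against the measured ones:
# one RMS value per long-gage sensor. Data are invented illustration.
def rms_error_vector(measured, predicted):
    """measured/predicted: lists of per-sensor time histories."""
    out = []
    for m, p in zip(measured, predicted):
        out.append((sum((a - b) ** 2 for a, b in zip(m, p)) / len(m)) ** 0.5)
    return out

# Two sensors, four samples each; the candidate matches sensor 2 exactly.
meas = [[0.0, 1.0, 0.0, -1.0], [0.5, 0.0, -0.5, 0.0]]
pred = [[0.0, 0.8, 0.0, -0.8], [0.5, 0.0, -0.5, 0.0]]
err = rms_error_vector(meas, pred)
```

In the paper's scheme, many such (parameters, RMS-vector) pairs form the training set for the parametric evaluation network that maps error vectors back to stiffness.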
Ressler, Johann; Dirscherl, Andreas; Grothe, Helmut; Wolf, Bernhard
2007-02-01
In many cases of bioanalytical measurement, calculation of large amounts of data, analysis of complex signal waveforms or signal speed can overwhelm the performance of microcontrollers, analog electronic circuits or even PCs. One method to obtain results in real time is to apply a digital signal processor (DSP) for the analysis or processing of measurement data. In this paper we show how DSP-supported multiplying and accumulating (MAC) operations, such as time/frequency transformation, pattern recognition by correlation, convolution or filter algorithms, can optimize the processing of bioanalytical data. Discrete integral calculations are applied to the acquisition of impedance values as part of multi-parametric sensor chips, to pH monitoring using light-addressable potentiometric sensors (LAPS) and to the analysis of rapidly changing signal shapes, such as action potentials of cultured neuronal networks, as examples of DSP capability. PMID:17313351
Theoretical analysis of terahertz parametric oscillator using KTiOPO4 crystal
NASA Astrophysics Data System (ADS)
Li, Zhongyang; Bing, Pibin; Yuan, Sheng
2016-08-01
Terahertz parametric oscillator (TPO) using KTiOPO4 (KTP) crystal with a noncollinear phase-matching scheme is investigated. Frequency tuning characteristics of terahertz wave (THz-wave) by varying the phase-matching angle and pump wavelength are analyzed. The expression of the effective parametric gain length under the noncollinear phase matching condition is deduced. Parametric gain and absorption characteristics of THz-wave in KTP are theoretically simulated for the first time. The characteristics of KTP for TPO are compared with MgO:LiNbO3. The analyses indicate that KTP is more suitable than MgO:LiNbO3 for TPO.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
A process is described which enables the generation of 35 time-dependent viscous solutions for a YAV-8B Harrier in ground effect in one week. Overset grids are used to model the complex geometry of the Harrier aircraft and the interaction of its jets with the ground plane and low-speed ambient flow. The time required to complete this parametric study is drastically reduced through the use of process automation, modern computational platforms, and parallel computing. Moreover, a dual-time-stepping algorithm is described which improves solution robustness. Unsteady flow visualization and a frequency domain analysis are also used to identify and correlate key flow structures with the time variation of lift.
a Model for the Parametric Analysis and Optimization of Inertance Tube Pulse Tube Refrigerators
NASA Astrophysics Data System (ADS)
Dodson, C.; Lopez, A.; Roberts, T.; Razani, A.
2008-03-01
A first-order model developed for the design analysis and optimization of Inertance Tube Pulse Tube Refrigerators (ITPTRs) is integrated with the NIST REGEN 3.2 code, which is capable of modeling the regenerative heat exchangers used in ITPTRs. The model is based on the solution of simultaneous non-linear differential equations representing the inertance tube, an irreversibility parameter model for the pulse tube, and REGEN 3.2 to simulate the regenerator. The integration of REGEN 3.2 is accomplished by assuming a sinusoidal pressure wave at the cold side of the regenerator. In this manner, the computational power of REGEN 3.2 is conveniently used to reduce the computational time required for parametric analysis and optimization of ITPTRs. The exergy flow and exergy destruction (irreversibility) of each component of ITPTRs are calculated and the effect of important system parameters on the second-law efficiency of the refrigerators is presented.
Cole, B F; Gelber, R D; Anderson, K M
1994-09-01
We present a parametric methodology for performing quality-of-life-adjusted survival analysis using multivariate censored survival data. It represents a generalization of the nonparametric Q-TWiST method (Quality-adjusted Time without Symptoms and Toxicity). The event times correspond to transitions between states of health that differ in terms of quality of life. Each transition is governed by a competing risks model where the health states are the competing risks. Overall survival is the sum of the amount of time spent in each health state. The first step of the proposed methodology consists of defining a quality function that assigns a "score" to a life having given health state transitions. It is a composite measure of both quantity and quality of life. In general, the quality function assigns a small value to a short life with poor quality and a high value to a long life with good quality. In the second step, parametric survival models are fit to the data. This is done by repeatedly modeling the conditional cause-specific hazard functions given the previous transitions. Covariates are incorporated by accelerated failure time regression, and the model parameters are estimated by maximum likelihood. Lastly, the modeling results are used to estimate the expectation of quality functions. Standard errors and confidence intervals are computed using the bootstrap and delta methods. The results are useful for simultaneously evaluating treatments in terms of quantity and quality of life. To demonstrate the proposed methods, we perform an analysis of data from the International Breast Cancer Study Group Trial V, which compared short-duration chemotherapy versus long-duration chemotherapy in the treatment of node-positive breast cancer. The events studied are: (1) the end of treatment toxicity, (2) disease recurrence, and (3) overall survival. PMID:7981389
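The first step described above, a quality function scoring a life by its health-state transitions, can be sketched in its simplest form: quality-adjusted survival as utility-weighted time in each state (the Q-TWiST idea). State names, utilities and durations below are illustrative, not Trial V estimates:

```python
# Hedged sketch of a Q-TWiST-style quality function: overall survival is the
# sum of time spent in each health state, and quality-adjusted survival
# weights each state's duration by a utility in [0, 1]. All numbers are
# illustrative, not estimates from the breast cancer trial.
def quality_adjusted_survival(durations, utilities):
    assert len(durations) == len(utilities)
    return sum(d * u for d, u in zip(durations, utilities))

# States: TOX (treatment toxicity), TWiST (no symptoms or toxicity),
# REL (after relapse). Durations in months.
states = ("TOX", "TWiST", "REL")
durations = {"TOX": 6.0, "TWiST": 48.0, "REL": 10.0}
utilities = {"TOX": 0.5, "TWiST": 1.0, "REL": 0.5}
q = quality_adjusted_survival([durations[s] for s in states],
                              [utilities[s] for s in states])
```

The parametric methodology in the abstract replaces these fixed durations with modeled transition-time distributions, so the expectation of this quality function (and its bootstrap/delta-method uncertainty) can be estimated per treatment arm.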
Parametric analysis of a thermionic space nuclear power system
NASA Technical Reports Server (NTRS)
Strohmayer, W. H.; Van Hagan, T. H.
1987-01-01
Key parameters in the design of a thermionic space nuclear power system are identified and analyzed in various system tradeoffs. The results are referenced to the thermionic system currently being studied for the SP-100 program. The SP-100 requirements provide definitive guidelines with respect to system optimization, the primary ones being the system mass limit of 3000 kg, the system volume constraint of one-third of the Space Shuttle cargo bay, and the system lifetime of seven years. Many parametric influences are described, and the methods used to optimize system design in the context of the requirements are indicated. Considerable design flexibility is demonstrated.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. PMID:25817475
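The 1D bootstrap CI discussed above differs from a 0D interval in one essential way: the band height is calibrated from the distribution of the maximum deviation across the whole trajectory, giving simultaneous rather than pointwise coverage. A minimal sketch (synthetic trajectories; not the paper's datasets or its RFT procedure):

```python
import random

# Hedged sketch of a simultaneous (1D) bootstrap confidence band: resample
# whole trajectories and calibrate the band height with the distribution of
# the MAXIMUM deviation across the trajectory, rather than building a 0D
# interval at each node independently.
def bootstrap_band(trajectories, n_boot=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    n, m = len(trajectories), len(trajectories[0])
    mean = [sum(tr[j] for tr in trajectories) / n for j in range(m)]
    maxdev = []
    for _ in range(n_boot):
        sample = [rng.choice(trajectories) for _ in range(n)]
        bmean = [sum(tr[j] for tr in sample) / n for j in range(m)]
        maxdev.append(max(abs(bmean[j] - mean[j]) for j in range(m)))
    maxdev.sort()
    h = maxdev[int((1 - alpha) * n_boot)]  # simultaneous band half-height
    return [(mu - h, mu + h) for mu in mean]

# Nine synthetic 11-node trajectories: a linear trend plus a per-trajectory offset.
trajs = [[0.1 * i + 0.5 * ((k % 3) - 1) for i in range(11)] for k in range(9)]
band = bootstrap_band(trajs)
```

Because the band is calibrated on the max statistic it is wider than a pointwise (0D) interval at each node, which is exactly the bias the paper describes.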
NASA Astrophysics Data System (ADS)
Cunningham, Robert K.; Waxman, Allen M.
1991-06-01
This is the first Annual Technical Summary of the MIT Lincoln Laboratory effort into the parametric study of diffusion-enhancement networks for spatiotemporal grouping in real-time artificial vision. Spatiotemporal grouping phenomena are examined in the context of static and time-varying imagery. Dynamics that exhibit static feature grouping on multiple scales as a function of time and long-range apparent motion between time-varying inputs are developed for a biologically plausible diffusion-enhancement bilayer. The architecture consists of a diffusion and a contrast-enhancement layer coupled by feedforward and feedback connections: input is provided by a separate feature extracting layer. The model is cast as an analog circuit that is realizable in VLSI, the parameters of which are selected to satisfy a psychophysical database on apparent motion. Specific topics include: neural networks, astrocyte glial networks, diffusion enhancement, long-range apparent motion, spatiotemporal grouping dynamics, and interference suppression.
Parametric Studies of Square Solar Sails Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Sleight, David W.; Muheim, Danniella M.
2004-01-01
Parametric studies are performed on two generic square solar sail designs to identify parameters of interest. The studies are performed on systems-level models of full-scale solar sails, and include geometric nonlinearity and inertia relief, and use a Newton-Raphson scheme to apply sail pre-tensioning and solar pressure. Computational strategies and difficulties encountered during the analyses are also addressed. The purpose of this paper is not to compare the benefits of one sail design over the other. Instead, the results of the parametric studies may be used to identify general response trends, and areas of potential nonlinear structural interactions for future studies. The effects of sail size, sail membrane pre-stress, sail membrane thickness, and boom stiffness on the sail membrane and boom deformations, boom loads, and vibration frequencies are studied. Over the range of parameters studied, the maximum sail deflection and boom deformations are a nonlinear function of the sail properties. In general, the vibration frequencies and modes are closely spaced. For some vibration mode shapes, local deformation patterns that dominate the response are identified. These localized patterns are attributed to the presence of negative stresses in the sail membrane that are artifacts of the assumption of ignoring the effects of wrinkling in the modeling process, and are not believed to be physically meaningful. Over the range of parameters studied, several regions of potential nonlinear modal interaction are identified.
Inverse synthetic aperture radar processing using parametric time-frequency estimators Phase I
Candy, J.V., LLNL
1997-12-31
This report summarizes the work performed for the Office of the Chief of Naval Research (ONR) during the period of 1 September 1997 through 31 December 1997. The primary objective of this research was to develop an alternative time-frequency approach which is recursive-in-time, to be applied to the Inverse Synthetic Aperture Radar (ISAR) imaging problem discussed subsequently. Our short-term (Phase I) goals were to: 1. Develop an ISAR stepped-frequency waveform (SFWF) radar simulator based on a point scatterer vehicular target model incorporating both translational and rotational motion; 2. Develop a parametric, recursive-in-time approach to the ISAR target imaging problem; 3. Apply the standard time-frequency short-term Fourier transform (STFT) estimator, initially to a synthesized data set; and 4. Initiate the development of the recursive algorithm. We have achieved all of these goals during Phase I of the project and plan to complete the overall development, application and comparison of the parametric approach to other time-frequency estimators (STFT, etc.) on our synthesized vehicular data sets during the next phase of funding. It should also be noted that we developed a batch minimum variance translational motion compensation (TMC) algorithm to estimate the radial components of target motion (see Section IV). This algorithm is easily extended to a recursive solution and will probably become part of the overall recursive processing approach to solve the ISAR imaging problem. Our goals for the continued effort are to: 1. Develop and extend a complex, recursive-in-time, time-frequency parameter estimator based on the recursive prediction error method (RPEM) using the underlying Gauss-Newton algorithms. 2. Apply the complex RPEM algorithm to synthesized ISAR data using the above simulator. 3. Compare the performance of the proposed algorithm to standard time-frequency estimators applied to the same data sets.
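The STFT estimator of goal 3 can be sketched as a sliding-window DFT giving a time-frequency map; the naive O(N^2) DFT below is for illustration only (the recursive-in-time RPEM estimator the report proposes is a different, parametric approach):

```python
import cmath
import math

# Hedged sketch of the short-term Fourier transform (STFT) baseline: slide a
# rectangular window along the signal and take a DFT of each segment, giving
# one magnitude spectrum per time frame. Naive DFT, illustration only.
def stft(x, win_len, hop):
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len]
        spec = []
        for k in range(win_len // 2 + 1):
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n in range(win_len))
            spec.append(abs(s))
        frames.append(spec)
    return frames

# Two-tone test signal: DFT bin 2 in the first half, bin 6 in the second half.
x = [math.cos(2 * math.pi * 2 * n / 16) for n in range(64)]
x += [math.cos(2 * math.pi * 6 * n / 16) for n in range(64)]
tf = stft(x, win_len=16, hop=16)
```

The time-frequency map localizes the frequency jump between frames, which is the behavior the report compares its recursive parametric estimator against.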
Time resolved imaging using non-collinear parametric down-conversion
NASA Astrophysics Data System (ADS)
Park, Jung-Rae
In this thesis I present a method for measuring the time-resolved spatial profile of a single laser pulse and its application to semiconductor devices. In the OMEGA laser system, the spatial profile of a laser beam can change as a function of time due to spontaneous effects such as the B-integral or imposed effects such as smoothing by spectral dispersion. The method presented here uses a non-collinear parametric down-conversion process to sample a single laser pulse multiple times. In the non-collinear parametric down-conversion process, an infrared laser beam at 1064 nm is mixed with an intense ultraviolet beam at 351 nm to generate a green signal beam at 524 nm. Calculations have been carried out to determine the threshold power of the infrared probe beam for generating a detectable signal beam. The generated green beam is captured by a cooled optical multichannel analyzer camera and the image of the signal beam is analyzed. This time-resolved spatial measurement can also be applied to dynamic image detection schemes for semiconductor devices.
NASA Technical Reports Server (NTRS)
Shishir, Pandya; Chaderjian, Neal; Ahmad, Jsaim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With computation time successfully reduced, the issues of accuracy and the appropriateness of turbulence models become more tractable.
Non-parametric estimation of seasonal variations in GNSS-derived time series
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Bogusz, Janusz; Klos, Anna
2015-04-01
The seasonal variations in GNSS station positions may arise from geophysical excitations, thermal changes combined with hydrodynamics, or various errors; when superimposed, these cause seasonal oscillations that are not entirely of geodynamical origin but still have to be included in time series modelling. These variations, with periods ranging from the Chandler period down to the quarter-annual one, all affect the reliability of a permanent station's velocity, which in turn directly influences the quality of kinematic reference frames. As shown before by a number of authors, the dominant annual sine curve has an amplitude and phase that both change in time for different reasons. In this research we focused on the determination of annual changes in GNSS-derived time series of the North, East and Up components. We used the daily position changes from the PPP (Precise Point Positioning) solution obtained by JPL (Jet Propulsion Laboratory) with the GIPSY-OASIS software. We analyzed more than 140 globally distributed IGS stations with a minimum data length of 3 years; the longest time series were 17 years long (1996-2014). Each of the topocentric time series (North, East and Up) was divided into years (January to December), the observations gathered on the same day of year were stacked, and weighted medians were computed, so that each time series was represented by a matrix of size 365 x n, where n is the data length in years. In this way we obtained the median annual signal for each analyzed station, which was then decomposed into different frequency bands using wavelet decomposition with the Meyer wavelet. We assumed 7 levels of decomposition, with the annual curve as the last approximation. The signal approximations allowed us to obtain the seasonal peaks that prevail in the North, East and Up data for globally distributed stations. The analysis of annual curves, by means of non-parametric estimation
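The day-of-year stacking and median step described above can be sketched in a few lines. The series below is synthetic (an assumed 4 mm annual sine plus noise), and the weighted median is simplified to a plain median for illustration:

```python
import numpy as np

# Synthesize several years of a daily "Up" series with an annual sine plus
# noise, stack observations falling on the same day of year, and take the
# median over years to form the median annual signal.
rng = np.random.default_rng(0)
n_years = 6
doy = np.tile(np.arange(365), n_years)              # day-of-year index
true_amp_mm = 4.0                                    # assumed annual amplitude
up = true_amp_mm * np.sin(2 * np.pi * doy / 365.25) + rng.normal(0, 2.0, doy.size)

# Stack: one row per day of year, one column per year (the 365 x n matrix).
stacked = up.reshape(n_years, 365).T                 # shape (365, n_years)
median_annual = np.median(stacked, axis=1)           # median over years

# Recover the annual amplitude by least-squares fit of a sine/cosine pair.
t = np.arange(365)
A = np.column_stack([np.sin(2 * np.pi * t / 365.25),
                     np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(A, median_annual, rcond=None)
est_amp = np.hypot(*coef)
print(round(est_amp, 1))
```

The recovered amplitude should be close to the assumed 4 mm, since the median over years suppresses the daily noise before the harmonic fit.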
Real-time tuning of a double quantum dot using a Josephson parametric amplifier
NASA Astrophysics Data System (ADS)
Stehlik, J.; Liu, Y.-Y.; Quintana, C. M.; Eichler, C.; Hartke, T. R.; Petta, J. R.
Josephson parametric amplifiers (JPAs) have enabled advances in readout of quantum systems. Here we demonstrate JPA-assisted readout of a cavity-coupled double quantum dot (DQD). Utilizing a JPA we improve the signal-to-noise ratio (SNR) by a factor of 2000 compared to the situation with the parametric amplifier turned off. At an interdot charge transition we achieve a SNR of 76 (19 dB) with an integration time τ = 400 ns, which is limited by the linewidth of our cavity. By measuring the SNR as a function of τ we extract an equivalent charge sensitivity of 8 × 10^-5 e/√Hz. We develop a dual-gate-voltage rastering scheme that allows us to acquire a DQD charge stability diagram in just 20 ms. Such rapid data acquisition rates enable device tuning in live "video mode," where the results of parameter changes are immediately displayed. Live tuning allows the DQD confinement potential to be rapidly tuned, a capability that will become increasingly important as semiconductor spin qubits are scaled to a larger number of dots. Research is supported by the Packard Foundation, ARO Grant No. W911NF-15-1-0149, DARPA QuEST Grant No. HR0011-09-1-0007, and the NSF (Grants No. DMR-1409556 and DMR-1420541).
Real-time gas density measurement using a ring cavity terahertz parametric oscillator
NASA Astrophysics Data System (ADS)
Ohno, S.; Guo, R.; Minamide, H.; Ito, H.
2007-05-01
We carried out real-time measurement of gas density using monochromatic terahertz waves. The THz-wave absorbance is useful for measuring the density of a gas having a characteristic spectrum in the THz region. We used a ring-cavity THz-wave parametric oscillator (ring-TPO) as a monochromatic tunable THz-wave source. The oscillation frequency of the ring-TPO is changed with a rotating galvano mirror forming the ring cavity, synchronized with the repeating pump pulse at 500 Hz. We demonstrated real-time measurement of the gas density of R-22, which has some spectral structure in the THz frequency region. The gas density in the sample cell was changed by controlling the pressure to below 1 atm. At the lowest gas density in the cell, the maximum sensitivity was about 5%, limited by the fluctuation of the THz-wave intensity.
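The absorbance-to-density relation underlying such a measurement follows the Beer-Lambert law: transmitted intensity falls as exp(-σnL), so the number density n follows from the measured absorbance. A minimal sketch, with an assumed cross-section and cell length (not values from the paper):

```python
import numpy as np

def density_from_absorbance(I0, I, sigma_cm2, path_cm):
    """Number density (cm^-3) from incident/transmitted intensity,
    via the Beer-Lambert law I = I0 * exp(-sigma * n * L)."""
    absorbance = np.log(I0 / I)                # natural-log absorbance
    return absorbance / (sigma_cm2 * path_cm)

sigma = 1e-19          # assumed absorption cross-section, cm^2
L = 10.0               # assumed cell length, cm
n = density_from_absorbance(I0=1.0, I=0.5, sigma_cm2=sigma, path_cm=L)
print(f"{n:.3e}")      # density giving 50% transmission
```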
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
Parametric Rietveld refinement
Stinton, Graham W.; Evans, John S. O.
2007-01-01
In this paper the method of parametric Rietveld refinement is described, in which an ensemble of diffraction data collected as a function of time, temperature, pressure or any other variable is fitted to a single evolving structural model. Parametric refinement offers a number of potential benefits over independent or sequential analysis. It can lead to higher precision of refined parameters, offers the possibility of applying physically realistic models during data analysis, allows the refinement of ‘non-crystallographic’ quantities such as temperature or rate constants directly from diffraction data, and can help avoid false minima. PMID:19461841
NASA Astrophysics Data System (ADS)
Ruiz Dominguez, C.; Kachenoura, N.; DeCesare, A.; Delouche, A.; Lim, P.; Gérard, O.; Herment, A.; Diebold, B.; Frouin, F.
2005-07-01
The computerized study of the regional contraction of the left ventricle has undergone numerous developments, particularly in relation to echocardiography. A new method, parametric analysis of main motion (PAMM), is proposed in order to synthesize the information contained in a cine loop of images into parametric images. PAMM determines, for the intensity variation time curves (IVTC) observed in each pixel, two amplitude coefficients characterizing the continuous component and the alternating component; the variable component is generated from a mother curve by introducing a time-shift coefficient and a scale coefficient. Two approaches, a data-driven PAMM and a model-driven PAMM (simpler and faster), are proposed. On the basis of the four coefficients, an amplitude image and an image of mean contraction time are synthesized and interpreted by a cardiologist. In all cases, both PAMM methods allow better IVTC adjustment than the other methods of parametric imaging used in echocardiography. A preliminary database comprising 70 segments was scored and compared with the visual analysis, taken from a consensus of two expert interpreters. The levels of absolute and relative concordance are 79% and 97%. Model-driven PAMM is a promising method for the rapid detection of abnormalities in left ventricle contraction.
Parametric systems analysis of the Modular Stellarator Reactor (MSR)
NASA Astrophysics Data System (ADS)
Miller, R. L.; Krakowski, R. A.; Bathke, C. G.
1982-05-01
The close coupling in the stellarator/torsatron/heliotron (S/T/H) between coil design, magnetics topology, and plasma performance complicates reactor assessment more than for most magnetic confinement systems. To provide an additional degree of resolution of this problem for the Modular Stellarator Reactor (MSR), a parametric systems model was applied. This model reduces key issues associated with plasma performance, first wall/blanket/shield (FW/B/S), and coil design to a simple relationship between beta, system geometry, and a number of indicators of overall plant performance. The results are used to guide more detailed, multidimensional plasma, magnetics, and coil design efforts towards technically and economically viable operating regimes. It is shown that beta values of 0.08 may be needed if the MSR approach is to be substantially competitive with other approaches to magnetic fusion in terms of system power density, mass utilization, and cost for a total power output around 4.0 GWt; lower powers will require even higher betas.
Lunar lander configuration study and parametric performance analysis
NASA Astrophysics Data System (ADS)
Donahue, Benjamin B.; Fowler, C. R.
1993-06-01
Future Lunar exploration plans will call for delivery of significant amounts of cargo to provide for crew habitation, surface transportation, and scientific exploration activities. Minimization of costly surface-based infrastructure is in large part directly related to the design of the cargo delivery/landing craft. This study focused on evaluating Lunar lander concepts from a logistics-oriented perspective; it outlines the approach used in the development of a preferred configuration, sets forth the benefits derived from its utilization, and describes the missions and systems considered. Results indicate that only direct-to-surface downloading of payloads provides for unassisted cargo removal operations imperative to efficient and low-risk site buildup, including the emplacement of Space Station derivative surface habitat modules, immediate cargo jettison for both descent abort and emergency surface ascent essential to piloted missions carrying cargo, and short habitat egress/ingress paths necessary for productive surface work tours by crew members carrying hand-held experiments, tools and other bulky articles. By accommodating cargo in a position underneath the vehicle's structural frame, the landing craft described herein eliminate altogether the necessity for dedicated surface-based off-loading vehicles, the operations and maintenance associated with their use, and the precipitous ladder climbs to and from the surface that are inherent to traditional designs. Parametric evaluations illustrate performance and mass variation with respect to mission requirements.
Lunar lander configuration study and parametric performance analysis
NASA Technical Reports Server (NTRS)
Donahue, Benjamin B.; Fowler, C. R.
1993-01-01
Future Lunar exploration plans will call for delivery of significant amounts of cargo to provide for crew habitation, surface transportation, and scientific exploration activities. Minimization of costly surface-based infrastructure is in large part directly related to the design of the cargo delivery/landing craft. This study focused on evaluating Lunar lander concepts from a logistics-oriented perspective; it outlines the approach used in the development of a preferred configuration, sets forth the benefits derived from its utilization, and describes the missions and systems considered. Results indicate that only direct-to-surface downloading of payloads provides for unassisted cargo removal operations imperative to efficient and low-risk site buildup, including the emplacement of Space Station derivative surface habitat modules, immediate cargo jettison for both descent abort and emergency surface ascent essential to piloted missions carrying cargo, and short habitat egress/ingress paths necessary for productive surface work tours by crew members carrying hand-held experiments, tools and other bulky articles. By accommodating cargo in a position underneath the vehicle's structural frame, the landing craft described herein eliminate altogether the necessity for dedicated surface-based off-loading vehicles, the operations and maintenance associated with their use, and the precipitous ladder climbs to and from the surface that are inherent to traditional designs. Parametric evaluations illustrate performance and mass variation with respect to mission requirements.
Network of time-multiplexed optical parametric oscillators as a coherent Ising machine
NASA Astrophysics Data System (ADS)
Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa
2014-12-01
Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems (Ising machines) capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented by the above-threshold binary phases of the OPOs and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small non-deterministic polynomial-time hard problem on a 4-OPO Ising machine and in 1,000 runs no computational error was detected.
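For a 4-spin problem of the size mentioned above, the ground state the machine must find can be checked by brute force. A minimal sketch, with hypothetical couplings (an antiferromagnetic ring, not the problem from the paper):

```python
import itertools

# Spins are +/-1 (the binary OPO phases); the couplings J are assumed here,
# chosen as an antiferromagnetic 4-site ring.
J = {(0, 1): -1, (1, 2): -1, (2, 3): -1, (3, 0): -1}

def ising_energy(spins):
    # H = -sum_{i<j} J_ij * s_i * s_j
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

# Enumerate all 2^4 spin configurations and keep the lowest-energy one.
best = min(itertools.product([-1, 1], repeat=4), key=ising_energy)
print(best, ising_energy(best))
```

For this ring the ground state alternates spins, with energy -4; an Ising machine is a physical device intended to reach such states without exhaustive enumeration, which becomes infeasible as the spin count grows.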
NASA Technical Reports Server (NTRS)
Marston, C. H.; Alyea, F. N.; Bender, D. J.; Davis, L. K.; Dellinger, T. C.; Hnat, J. G.; Komito, E. H.; Peterson, C. A.; Rogers, D. A.; Roman, A. J.
1980-01-01
The performance and cost of moderate-technology coal-fired open-cycle MHD/steam power plant designs were determined; these designs can be expected to require a shorter development time and have a lower development cost than the previously considered mature OCMHD/steam plants. Three base cases were considered: in one, an indirectly-fired high temperature air heater (HTAH) subsystem delivering air at 2700 F was fired by a state-of-the-art atmospheric-pressure gasifier; in another, the HTAH subsystem was deleted and oxygen enrichment was used to obtain the requisite MHD combustion temperature. Coal-pile-to-bus-bar efficiencies in base case 1 ranged from 41.4% to 42.9%, and the cost of electricity (COE) was the highest of the three base cases. For base case 2 the efficiency range was 42.0% to 45.6%, and COE was the lowest. For base case 3 the efficiency range was 42.9% to 44.4%, and COE was intermediate. The best parametric cases in base cases 2 and 3 are recommended for conceptual design. The eventual choice between these approaches depends on further evaluation of the tradeoffs among HTAH development risk, O2 plant integration, and further refinements of comparative costs.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
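As a hedged illustration of the idea (not a method taken from the paper), a cost-estimating relationship (CER) can be fitted as a power law of a non-cost parameter such as mass; the data points below are invented:

```python
import numpy as np

# Hypothetical historical data: system mass vs. realized cost.
mass_kg = np.array([100.0, 200.0, 400.0, 800.0])
cost_M = np.array([10.0, 17.0, 29.0, 50.0])        # costs in $M, invented

# Fit log(cost) = log(a) + b * log(mass) by least squares.
b, log_a = np.polyfit(np.log(mass_kg), np.log(cost_M), 1)
a = np.exp(log_a)

def cer(mass):
    """Estimated cost ($M) for a given mass (kg) under the fitted CER."""
    return a * mass ** b

print(round(b, 2))   # exponent below 1 reflects economy of scale
```

The fitted exponent is then applied to a new design's mass to project its cost, which is the essence of estimating cost from non-cost parameters.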
Reluctance network analysis of an orthogonal-core type parametric induction motor
Tajima, Katsubumi; Sato, Kohei; Komukai, Toshihiko; Ichinokura, Osamu
1999-09-01
In this paper, an analytical method for an orthogonal-core type parametric induction motor is proposed, based on a reluctance network model of the stator. The model is derived by a technique similar to that applied to an orthogonal-core transformer. Using this model, the parametric oscillation characteristic of the motor without a rotor is computed. The simulation results agree well with the experiments. This shows that the analytical model of the stator presented here is suitable for analysis of the motor and that, by using this model together with a suitable analytical model of the rotor, the motor characteristics can be analyzed.
Analysis of 808nm centered optical parametric chirped pulse amplifier based on DKDP crystals
NASA Astrophysics Data System (ADS)
Sun, Meizhi; Cui, Zijian; Kang, Jun; Zhang, Yanli; Zhang, Junyong; Cui, Ying; Xie, Xinglong; Liu, Cheng; Liu, Dean; Zhu, Jianqiang; Lin, Zunqi
2015-08-01
The non-collinear phase-matching in Potassium Dideuterium Phosphate (DKDP) crystal is analyzed in detail with a signal pulse centered at 808 nm and a pump pulse at 526.5 nm. By numerical analysis, parametric bandwidths for DKDP crystals of different deuteration levels are presented. In particular, for DKDP crystals of 95% deuteration level, the optimal non-collinear angles, phase-matching angles, parametric bandwidths, walk-off angles, acceptance angles, efficiency coefficients, gain and gain bandwidths are provided. An optical parametric chirped pulse amplifier based on DKDP crystal is designed and its output characteristics are simulated with the OPA coupled-wave equations for further discussion. It is concluded that DKDP crystals with deuteration levels higher than 90% can be utilized in ultra-short high-power laser systems with compressed pulses broader than 30 fs. The disadvantage is that the acceptance angles are small, increasing the difficulty of engineering adjustment.
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration-class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. The parametric models used in this study relate component mass to vehicle dimensions and to key mission environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2003-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.
Augustine, C.
2013-10-01
Parametric analysis of the factors controlling the costs of sedimentary geothermal systems was carried out using a modified version of the Geothermal Electricity Technology Evaluation Model (GETEM). The sedimentary system modeled assumed production from and injection into a single sedimentary formation.
Bernasconi, Davide Paolo; Rebora, Paola; Iacobelli, Simona; Valsecchi, Maria Grazia; Antolini, Laura
2016-03-30
The 'landmark' and 'Simon and Makuch' non-parametric estimators of the survival function are commonly used to contrast the survival experience of time-dependent treatment groups in applications such as stem cell transplant versus chemotherapy in leukemia. However, the theoretical survival functions corresponding to the second approach were not clearly defined in the literature, and the use of the 'Simon and Makuch' estimator was criticized in the biostatistical community. Here, we review the 'landmark' approach, showing that it focuses on the average survival of patients conditional on being failure free and on the treatment status assessed at the landmark time. We argue that the 'Simon and Makuch' approach represents counterfactual survival probabilities where treatment status is forced to be fixed: the patient is thought of as remaining under chemotherapy with no possibility of switching treatment, or as under transplant from the beginning of follow-up. We argue that the 'Simon and Makuch' estimator leads to valid estimates only under the Markov assumption, which is however less likely to hold in practical applications. This motivates the development of a novel approach based on time rescaling, which leads to suitable estimates of the counterfactual probabilities in a semi-Markov process. The method is also extended to deal with a fixed landmark time of interest. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26503800
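The landmark idea described above can be sketched directly: fix a landmark time, keep only patients still at risk then, and estimate survival beyond it with a Kaplan-Meier product-limit estimator. The times and event indicators below are invented, and the treatment-status classification is omitted for brevity:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate at each distinct event time.
    events: 1 = observed failure, 0 = censored."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(t)
    surv, s = [], 1.0
    for ti, ei in zip(t, e):
        if ei:                       # failure: multiply in (1 - 1/at_risk)
            s *= 1.0 - 1.0 / at_risk
            surv.append((ti, s))
        at_risk -= 1                 # both failures and censorings leave
    return surv

landmark = 2.0
times = np.array([1.0, 2.5, 3.0, 4.0, 5.0, 6.0])   # invented follow-up times
events = np.array([1, 1, 0, 1, 1, 0])
keep = times > landmark                  # failure-free at the landmark
km = kaplan_meier(times[keep] - landmark, events[keep])
print(km[-1])                            # survival at the last event time
```

In the actual landmark analysis, the kept patients would additionally be split by their treatment status at the landmark, and a curve estimated per group.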
Mounaix, P.; Pesme, D.; Rozmus, W.; Casanova, M.
1993-09-01
The space and time behavior of the decay waves is computed analytically in the regime of standard parametric decay. The plasma is assumed to be homogeneous and bounded. The pump wave has a finite pulse duration. The propagation of the pump wave is taken into account, its depletion is ignored. The parametric growth is solved in terms of fluctuating initial and boundary conditions corresponding to thermal noise at equilibrium. Fluctuating source terms, representing noise emission, are accordingly retained in the coupled mode equations. The initial stage of parametric growth is investigated in detail; the time from which the asymptotic concept of absolute or convective instability applies is computed. The connection between the Manley-Rowe and flux conservation relations is discussed.
AWclust: point-and-click software for non-parametric population structure analysis
Gao, Xiaoyi; Starmer, Joshua D
2008-01-01
Background Population structure analysis is important to genetic association studies and evolutionary investigations. Parametric approaches, e.g. STRUCTURE and L-POP, usually assume Hardy-Weinberg equilibrium (HWE) and linkage equilibrium among loci in sample population individuals. However, the assumptions may not hold and allele frequency estimation may not be accurate in some data sets. The improved version of STRUCTURE (version 2.1) can incorporate linkage information among loci but is still sensitive to high background linkage disequilibrium. Nowadays, large-scale single nucleotide polymorphism (SNP) data are becoming popular in genetic studies. Therefore, it is imperative to have software that makes full use of these genetic data to generate inference even when model assumptions do not hold or allele frequency estimation suffers from high variation. Results We have developed point-and-click software for non-parametric population structure analysis distributed as an R package. The software takes advantage of the large number of SNPs available to categorize individuals into ethnically similar clusters and it does not require assumptions about population models. Nor does it estimate allele frequencies. Moreover, this software can also infer the optimal number of populations. Conclusion Our software tool employs non-parametric approaches to assign individuals to clusters using SNPs. It provides efficient computation and an intuitive way for researchers to explore ethnic relationships among individuals. It can be complementary to parametric approaches in population structure analysis. PMID:18237431
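The model-free clustering idea can be illustrated without the package itself: a distance on 0/1/2 minor-allele counts plus a small k-means, with no HWE or allele-frequency assumptions. The genotype matrix below is synthetic, and this sketch is not AWclust's actual algorithm:

```python
import numpy as np

# Two synthetic populations differing strongly in minor-allele frequency.
rng = np.random.default_rng(1)
pop_a = rng.binomial(2, 0.1, size=(10, 50))   # 10 individuals, 50 SNPs
pop_b = rng.binomial(2, 0.8, size=(10, 50))
G = np.vstack([pop_a, pop_b]).astype(float)

def kmeans(X, k=2, iters=20):
    """Tiny k-means on raw genotype vectors (Euclidean distance)."""
    centers = X[[0, len(X) - 1]].copy()        # seed from opposite ends
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(G)
print(labels)
```

With this much allele-frequency separation, the two synthetic populations fall cleanly into two clusters without estimating any population model.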
A parametric study of nonlinear seismic response analysis of transmission line structures.
Tian, Li; Wang, Yanming; Yi, Zhenhua; Qian, Hui
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled by cable elements which account for the nonlinearity of the cable, based on a real project. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time history analysis method. The results show that multicomponent seismic excitations should be considered, but the correlations among multicomponent ground motions can be neglected. The wave passage effect has a significant influence on the responses of the cables. Changing the degree of coherency loss has little influence on the response of the cables, but the responses are affected significantly by coherency loss itself. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should be considered in the seismic design of transmission line structures. PMID:25133215
A Parametric Study of Nonlinear Seismic Response Analysis of Transmission Line Structures
Wang, Yanming; Yi, Zhenhua
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled by cable elements which account for the nonlinearity of the cable, based on a real project. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time history analysis method. The results show that multicomponent seismic excitations should be considered, but the correlations among multicomponent ground motions can be neglected. The wave passage effect has a significant influence on the responses of the cables. Changing the degree of coherency loss has little influence on the response of the cables, but the responses are affected significantly by coherency loss itself. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should be considered in the seismic design of transmission line structures. PMID:25133215
NASA Astrophysics Data System (ADS)
Cicirello, Alice; Langley, Robin S.
2013-04-01
An existing hybrid finite element (FE)/statistical energy analysis (SEA) approach to the analysis of the mid- and high frequency vibrations of a complex built-up system is extended here to a wider class of uncertainty modeling. In the original approach, the constituent parts of the system are considered to be either deterministic, and modeled using FE, or highly random, and modeled using SEA. A non-parametric model of randomness is employed in the SEA components, based on diffuse wave theory and the Gaussian Orthogonal Ensemble (GOE), and this enables the mean and variance of second order quantities such as vibrational energy and response cross-spectra to be predicted. In the present work the assumption that the FE components are deterministic is relaxed by the introduction of a parametric model of uncertainty in these components. The parametric uncertainty may be modeled either probabilistically, or by using a non-probabilistic approach such as interval analysis, and it is shown how these descriptions can be combined with the non-parametric uncertainty in the SEA subsystems to yield an overall assessment of the performance of the system. The method is illustrated by application to an example built-up plate system which has random properties, and benchmark comparisons are made with full Monte Carlo simulations.
Haque, Md Mazharul; Washington, Simon
2014-01-01
The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q advanced driving simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21-26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best fitting model and identified four significant variables influencing the reaction times, including phone condition, driver's age, license type (provisional license holder or not), and self-reported frequency of usage of handheld phones while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open license holders. A reduction in the ability to detect traffic events in the periphery whilst distracted
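In a Weibull AFT model, a covariate multiplies the time scale, so a coefficient b means distracted times are exp(b) times longer. A minimal simulate-and-recover sketch of this interpretation (data are simulated, not the study's; censoring and the gamma frailty term are omitted):

```python
import numpy as np
from scipy.optimize import minimize

# Simulate reaction times T ~ Weibull(shape k, scale exp(b0 + b1*x)),
# where x = 1 marks the (hypothetical) phone-conversation condition.
rng = np.random.default_rng(2)
n = 2000
x = rng.integers(0, 2, n)                      # 1 = on the phone
shape = 3.0
scale = np.exp(0.0 + 0.35 * x)                 # true acceleration exp(0.35) ~ 1.42
t = scale * rng.weibull(shape, n)

def negloglik(theta):
    """Negative Weibull AFT log-likelihood (no censoring)."""
    b0, b1, logk = theta
    k, lam = np.exp(logk), np.exp(b0 + b1 * x)
    z = t / lam
    return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z ** k)

res = minimize(negloglik, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
b1_hat = res.x[1]
print(round(np.exp(b1_hat), 2))                # estimated acceleration factor
```

A true b1 of 0.35 corresponds to roughly 42% longer reaction times, the same order as the "more than 40% longer" effect reported above.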
Crash risk analysis for Shanghai urban expressways: A Bayesian semi-parametric modeling approach.
Yu, Rongjie; Wang, Xuesong; Yang, Kui; Abdel-Aty, Mohamed
2016-10-01
Urban expressway systems have been developed rapidly in recent years in China; they have become a key part of city roadway networks, carrying large traffic volumes and providing high travel speeds. Along with the increase of traffic volume, traffic safety has become a major issue for Chinese urban expressways due to frequent crash occurrence and the non-recurrent congestion it causes. For the purpose of unveiling crash occurrence mechanisms and further developing Active Traffic Management (ATM) control strategies to improve traffic safety, this study developed disaggregate crash risk analysis models with loop detector traffic data and historical crash data. Bayesian random effects logistic regression models were utilized as they can account for the unobserved heterogeneity among crashes. However, previous crash risk analysis studies formulated random effects distributions in a parametric approach, which assigned them to follow normal distributions. Due to the limited information known about random effects distributions, a subjective parametric setting may be incorrect. In order to construct more flexible and robust random effects to capture the unobserved heterogeneity, the Bayesian semi-parametric inference technique was introduced to crash risk analysis in this study. Models with both inference techniques were developed for total crashes; semi-parametric models were proved to provide substantially better model goodness-of-fit, while the two models shared consistent coefficient estimations. Later on, Bayesian semi-parametric random effects logistic regression models were developed for weekday peak hour crashes, weekday non-peak hour crashes, and weekend non-peak hour crashes to investigate different crash occurrence scenarios. Significant factors that affect crash risk have been revealed, and crash mechanisms have been summarized. PMID:26847949
Parametric analysis of synthetic aperture radar data for the study of forest stand characteristics
NASA Technical Reports Server (NTRS)
Wu, Shih-Tseng
1988-01-01
A parametric analysis of a Gulf Coast forest stand was performed using multipolarization, multipath airborne SAR data, and forest plot properties. Allometric equations were used to compute the biomass and basal area for the test plots. A multiple regression analysis with stepwise selection of independent variables was performed. It is found that forest stand characteristics such as biomass, basal area, and average tree height are correlated with SAR data.
Recent results on parametric analysis of differential Omega error
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.; Piserchia, P. V.
1974-01-01
Previous tests of the differential Omega concept and an analysis of the characteristics of VLF propagation make it possible to delineate various factors which might contribute to the variation of errors in phase measurements at an Omega receiver site. An experimental investigation is conducted to determine the effect of each of a number of parameters on differential Omega accuracy and to develop prediction equations. The differential Omega error form is considered and preliminary results are presented of the regression analysis used to study differential error.
Multilevel Latent Class Analysis: Parametric and Nonparametric Models
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2014-01-01
Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…
Bifurcation analysis of parametrically excited bipolar disorder model
NASA Astrophysics Data System (ADS)
Nana, Laurent
2009-02-01
Bipolar II disorder is characterized by alternating hypomanic and major depressive episodes. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administered to the patient are modeled via a forcing function capable of stabilizing the mood variations and of varying their amplitude. Using a perturbation method, we analyze analytically the amplitude and stability of limit cycles and check this analysis with numerical simulations.
Parametric analysis of lunar resources for space energy systems
NASA Astrophysics Data System (ADS)
Woodcock, Gordon R.
The possible use of lunar resources in the construction of solar power satellites (SPS) to provide energy for use on earth is discussed. The space transportation and operational aspects of the SPS program are compared to other energy concepts. Cost/benefit analyses are used to study the advantages of using lunar oxygen for the SPS program and of producing helium-3 on the moon. Options for lunar surface power are considered and the economic benefits of using lunar resources are examined.
Pouthier, Vincent
2010-09-29
A detailed analysis is performed to show that the second order time-convolutionless master equation fails to describe the exciton-phonon dynamics in a finite size lattice. To proceed, special attention is paid to characterizing the coherences of the exciton reduced density matrix. These specific elements measure the ability of the exciton to develop superpositions involving the vacuum and one-exciton states. It is shown that the coherences behave as wavefunctions whose dynamics is governed by a time-dependent effective Hamiltonian defined in terms of the so-called time-dependent relaxation operator. Due to the confinement, quantum recurrences give the relaxation operator an almost periodic nature, so the master equation reduces to a linear system of differential equations with almost periodic coefficients. We show that, in accordance with Floquet theory, unstable solutions emerge due to parametric resonances involving specific frequencies of the relaxation operator and specific excitonic eigenfrequencies. These resonances give rise to an unphysical exponential growth of the coherences, indicating the breakdown of the second order master equation. PMID:21386551
A parametric analysis of the growing CFHB (Wistar) rat.
Pullen, A H
1976-01-01
Measurements of body weight and body, tail and hindlimb lengths were made at various times during the postnatal development of the CFHB-Wistar rat using simple techniques which avoided unnecessary stress being applied to the animal. All the graphical relationships studied using these measurements indicated that growth was divided into two phases which meet at 16 days. This was particularly noticeable when either body length or hindlimb length was plotted against either age or body weight. Body weight appeared to be a better indication of maturity than time; and linear relationships were found when measurements of body lengths were plotted against the square root of body weight. Curvilinear graphs were obtained when body weight was plotted on alternative scales. Results suggest that prior to 16 days metabolism is probably directed more towards attaining sufficient maturity to enable the animal to survive independently, than to increasing its stature; but after 16 days this situation is reversed. PMID:931783
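The reported linearity of body length against the square root of body weight is easy to check numerically. A minimal sketch on synthetic measurements (hypothetical values, not the paper's data), fitting length against sqrt(weight) by least squares:

```python
import numpy as np

# Hypothetical illustrative data (not from the paper): body weight (g)
# and body length (mm) for animals past the 16-day transition.
weight = np.array([40.0, 60.0, 90.0, 130.0, 180.0, 250.0])
length = np.array([95.0, 110.0, 128.0, 148.0, 168.0, 192.0])

# The abstract reports body length varying linearly with sqrt(body weight):
# length ~ a * sqrt(weight) + b.
a, b = np.polyfit(np.sqrt(weight), length, deg=1)
predicted = a * np.sqrt(weight) + b
r2 = 1 - np.sum((length - predicted) ** 2) / np.sum((length - length.mean()) ** 2)
print(f"slope={a:.2f} mm/sqrt(g), intercept={b:.2f} mm, R^2={r2:.4f}")
```

A near-unity R^2 under this transformation is what distinguishes the linear sqrt-weight relationship from the curvilinear graphs obtained on alternative scales.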
NASA Astrophysics Data System (ADS)
Wei, Dang; Qing, Liao; Peng-Cheng, Mao; Hong-Bing, Fu; Yu-Xiang, Weng
2016-05-01
Femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) is a versatile technique with advantages of high sensitivity, broad detection bandwidth, and intrinsic spectrum correction function. These advantages should benefit the study of coherent emission, such as measurement of lasing dynamics. In this letter, the FNOPAS was used to trace the lasing process in Rhodamine 6G (R6G) solution and organic semiconductor nano-wires. High-quality transient emission spectra and lasing dynamic traces were acquired, which demonstrates the applicability of FNOPAS in the study of lasing dynamics. Our work extends the application scope of the FNOPAS technique. Project supported by the National Natural Science Foundation of China (Grant Nos. 20925313 and 21503066), the Innovation Program of Chinese Academy of Sciences (Grant No. KJCX2-YW-W25), the Postdoctoral Project of Hebei University, China, and the Project of Science and Technology Bureau of Baoding City, China (Grant No. 15ZG029).
Parametric analysis and testing of an electrorheological fluid damper
NASA Astrophysics Data System (ADS)
Lindler, Jason E.; Wereley, Norman M.
1999-06-01
This study seeks to validate a predictive damper analysis, based on an idealized Bingham plastic shear flow mechanism, which incorporates leakage effects in an electrorheological (ER) damper. The ER bypass damper operates by a piston head pushing ER fluid out of a hydraulic cylinder and through an ER fluid bypass. The pressure required to force ER fluid through the bypass produces the majority of the device's damping. The ER bypass is composed of an annulus formed from two concentric aluminum tubes. The application of a voltage potential between the aluminum tubes creates an electric field in the annulus that increases the yield stress of the ER fluid. The yield stress modifies the velocity profile of the fluid in the annulus and augments the damping coefficient of the device. The ER fluid damper contains a controlled amount of leakage around the piston head. The leakage allows ER fluid to flow from one side of the piston head to the opposite side without passing through the ER bypass. In this analysis, non-dimensional groups and a leakage damping coefficient, which incorporate leakage effects, predict the amount of energy dissipated for a complete cycle of the piston rod. Measured force versus displacement cycles for multiple frequencies and electric fields validate the ability of the non-dimensional groups and the leakage damping coefficient to predict the damping levels for an ER bypass damper with leakage.
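The idealized Bingham plastic mechanism lends itself to a quick per-cycle energy estimate. A hedged sketch with illustrative parameter values (not taken from the damper tested in the paper): for sinusoidal piston motion of amplitude X at angular frequency omega, a Bingham element F = C*v + Fy*sign(v) dissipates pi*C*omega*X^2 per cycle through the viscous term plus 4*Fy*X through the field-dependent yield term:

```python
import math

# Illustrative (assumed) parameters for an idealized Bingham-plastic damper.
C = 2000.0     # viscous damping coefficient, N*s/m
Fy = 150.0     # field-dependent yield force, N
X = 0.01       # stroke amplitude, m
freq = 2.0     # excitation frequency, Hz
omega = 2 * math.pi * freq

# Closed-form per-cycle dissipation for x(t) = X*sin(omega*t):
# viscous term pi*C*omega*X^2 plus Coulomb-like yield term 4*Fy*X.
E_cycle = math.pi * C * omega * X**2 + 4 * Fy * X
print(f"energy dissipated per cycle = {E_cycle:.2f} J")
```

Raising the electric field raises Fy, so the yield term grows while the viscous term is unchanged, which is how the field augments the effective damping.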
Aerodynamic canard/wing parametric analysis for general aviation applications
NASA Technical Reports Server (NTRS)
Keith, M. W.; Selberg, B. P.
1984-01-01
Vortex panel and vortex lattice methods have been utilized in an analytic study to determine the two- and three-dimensional aerodynamic behavior of canard and wing configurations. The purpose was to generate data useful for the design of general aviation canard aircraft. Essentially no two-dimensional coupling was encountered and the vertical distance between the lifting surfaces was found to be the main contributor to interference effects of the three-dimensional analysis. All canard configurations were less efficient than a forward wing with an aft horizontal tail, but were less sensitive to off-optimum division of total lift between the two surfaces, such that trim drag could be less for canard configurations. For designing a general aviation canard aircraft, results point toward large horizontal and vertical distance between the canard and wing, a large wing-to-canard area ratio, and with the canard at a low incidence angle relative to the wing.
Complexity in parametric Bose-Hubbard Hamiltonians and structural analysis of eigenstates
Hiller, Moritz; Kottos, Tsampikos; Geisel, T.
2006-06-15
We consider a family of chaotic Bose-Hubbard Hamiltonians parametrized by the coupling strength k between neighboring sites. As k increases the eigenstates undergo changes, reflected in the structure of the local density of states. We analyze these changes, both numerically and analytically, using perturbative and semiclassical methods. Although our focus is on the quantum trimer, the presented methodology is applicable for the analysis of longer lattices as well.
Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics
Tsourtis, Anastasios; Pantazis, Yannis; Katsoulakis, Markos A.; Harmandaris, Vagelis
2015-07-07
In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous time and continuous space Markov processes represented by stochastic differential equations. Particularly, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method to two different molecular stochastic systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with parameter sensitivities on three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
Parametric analysis of thermal stratification during the Monju turbine trip test
Sofu, T.
2012-07-01
CFD-based simulation techniques are evaluated using a simplified symmetric Monju model to study multi-dimensional mixing and heat transfer in the upper plenum during a turbine trip test. When the test starts and core outlet temperatures drop due to reactor shutdown, the cooler sodium is trapped near the bottom of the vessel and the hotter (less dense) primary sodium at the higher elevations stays largely stagnant for an extended period of time inhibiting natural circulation. However, the secondary flow through a set of holes on the inner barrel bypasses the thermally stratified region as a shorter path to the intermediate heat exchanger and improves the natural circulation flow rate to cool the core. The calculations with strict adherence to benchmark specifications predict a much shorter duration for thermal stratification in the upper plenum than the experimental data indicates. In this paper, the results of a parametric analysis are presented to address this discrepancy. Specifically, the role of the holes on the inner barrel is reassessed in terms of their ability to provide larger by-pass flow. Assuming inner barrel holes with rounded edge produces results more consistent with the experiments. (authors)
Parametric analysis of performance and design characteristics for advanced earth-to-orbit shuttles
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.; Strack, W. C.; Padrutt, J. A.
1972-01-01
Performance, trajectory, and design characteristics are presented for (1) a single-stage shuttle with a single advanced rocket engine, (2) a single-stage shuttle with an initial parallel chemical engine and advanced engine burn followed by an advanced engine sustainer burn, (3) a single-stage shuttle with an initial chemical engine burn followed by an advanced engine burn, and (4) a two-stage shuttle with a chemical propulsion booster stage and an advanced propulsion upper stage. The ascent trajectory profile includes a brief initial vertical rise; zero-lift flight through the sensible atmosphere; variational steering into an 83-kilometer by 185-kilometer intermediate orbit; and a fixed, 460-meter per second allowance for subsequent maneuvers. Results are given in terms of burnout mass fractions (including structure and payload), trajectory profiles, propellant loadings, and burn times. These results are generated with a trajectory analysis that includes a parametric variation of the specific impulse from 800 to 3000 seconds and the specific engine weight from 0 to 1.0.
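The sensitivity of burnout mass fraction to specific impulse across the 800- to 3000-second range can be illustrated with the ideal rocket equation. This is a back-of-envelope check, not the report's trajectory analysis: the 9200 m/s effective ascent delta-v is an assumed value, and engine weight, drag, and steering losses are ignored.

```python
import math

# Ideal-rocket sketch: burnout mass fraction versus specific impulse
# for an assumed effective earth-to-orbit delta-v.
g0 = 9.80665          # standard gravity, m/s^2
delta_v = 9200.0      # assumed effective ascent delta-v, m/s

fractions = {}
for isp in (800, 1500, 3000):
    mass_ratio = math.exp(delta_v / (g0 * isp))   # Tsiolkovsky equation
    fractions[isp] = 1.0 / mass_ratio             # burnout mass / initial mass
    print(f"Isp={isp:4d} s -> burnout mass fraction {fractions[isp]:.2f}")
```

Even this crude model shows why the parametric sweep matters: the burnout fraction (structure plus payload) more than doubles between the low and high ends of the specific impulse range studied.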
Parametric analysis of the end face engagement worm gear
NASA Astrophysics Data System (ADS)
Deng, Xingqiao; Wang, Jueling; Wang, Jinge; Chen, Shouan; Yang, Jie
2015-11-01
A novel type of worm drive, the end face engagement worm gear (EFEWD), is presented to minimize or eliminate gear backlash. Several factors, including the three meshing types, contact curves, tooth profile, lubrication angle, and induced normal curvature, are taken into account to investigate the meshing characteristics and create the profile of this worm drive through mathematical models and theoretical analysis. The tooth of the worm wheel is distinctive: it is sine-shaped and located at the alveolus of the worm, and the tooth profile of the worm is generated by the meshing movement of the sine-toothed worm wheel. Because only the end face of the worm (with three typical meshing types) is used for meshing, a special manufacturing method is required to generate the profile of the end face engagement worm. The results indicate that the bearing contacts of the generated conjugate hourglass worm gear set are line contacts, with the advantages of no backlash, high precision, and high operating efficiency over other gears and gear systems; in addition, the end face engagement worm gear drive may improve bearing contact, reduce transmission errors, and lessen sensitivity to alignment errors. The end face engagement worm can also be manufactured easily, with superior meshing and lubrication performance compared with conventional designs: by using the end face for meshing, the meshing and lubrication performance can be increased by over 10% and 7%, respectively. This investigation is expected to provide new insight into the design of future no-backlash worm drives for industry.
Parametric cost analysis of a HYLIFE-II power plant
Bieri, R. L. (Massachusetts Inst. of Tech., Cambridge, MA)
1990-10-04
The SAFIRE (Systems Analysis for ICF Reactor Economics) code was adapted to model a power plant using a HYLIFE-II reactor chamber. The code was then used to examine the dependence of the plant capital costs and busbar cost of electricity (COE) on a variety of design parameters (type of driver, chamber repetition rate, and net electric power). The results show the most attractive operating space for each set of driver/target assumptions and quantify the benefits of improvements in key design parameters. The base case plant was a 1,000 MWe plant containing a reactor vessel driven by an induction linac heavy ion accelerator run at 7.3 Hz with a driver energy of 5 MJ and a target yield of 370 MJ. The total direct cost for this plant was 2,800 M$ (all costs in this paper are in 1988 dollars), and the COE was 9 ¢/kW·h. The COE and total capital costs under the base plant assumptions for a 1,000 MWe plant are approximately independent of the chosen repetition rate for all repetition rates between 4 and 10 Hz. For comparison, the COE for a coal or future fission plant would be 4.5-5.5 ¢/kW·h. The COE for a 1,000 MWe plant could be reduced to 7.6 ¢/kW·h by using advanced targets, and could be cut to 6.8 ¢/kW·h with conventional targets if the driver cost could be cut in half. There is a large economy of scale in heavy-ion-driven ICF plants; a 5,000 MWe plant with one heavy ion driver and either one or two HYLIFE-II chambers would have a COE of only 4.4 ¢/kW·h.
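The quoted economy of scale can be condensed into a single implied exponent. Assuming (our assumption, not the SAFIRE model itself) that COE follows a power law COE proportional to P**(s - 1), where capital cost scales as C proportional to P**s, the two quoted operating points determine s:

```python
import math

# Two operating points quoted in the abstract:
# 1,000 MWe at 9.0 cents/kWh and 5,000 MWe at 4.4 cents/kWh.
p1, coe1 = 1000.0, 9.0
p2, coe2 = 5000.0, 4.4

# Assumed power-law scaling: COE ~ P**(s - 1), so
# s = 1 + ln(COE2/COE1) / ln(P2/P1).
s = 1.0 + math.log(coe2 / coe1) / math.log(p2 / p1)
print(f"implied capital-cost scaling exponent s = {s:.2f}")
```

An exponent well below 1 is what the abstract means by "large economy of scale": quintupling plant output less than triples the implied capital cost.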
Performance prediction and parametric analysis of two stage stirling cycle cryocooler
NASA Astrophysics Data System (ADS)
Natu, P. V.; Narayankhedkar, K. G.
The lowest temperature that can be achieved in a Stirling cycle cryocooler is governed by various losses. This paper presents a performance prediction of a two stage Stirling cryocooler (with 20 K as the second stage temperature) using a second order analysis, which calculates the ideal refrigerating effect at the intermediate and final stage temperatures and the ideal power input. The losses are evaluated for both stages to determine the actual refrigerating effects and power input. The results obtained are in good agreement with reported values. The performance of the cryocooler is governed by various operating and geometric parameters, and a parametric analysis is carried out.
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
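The core BPM idea, one imaging modality entering the voxel-wise general linear model as a regressor for another, can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the toolbox's MATLAB code:

```python
import numpy as np

# Sketch of voxel-wise multimodal regression (assumed, simplified):
# at every voxel, regress modality A on modality B plus an intercept
# across subjects with an ordinary least-squares GLM.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000
mod_b = rng.normal(size=(n_subjects, n_voxels))  # e.g. a structural map
mod_a = 0.5 * mod_b + rng.normal(scale=0.3, size=(n_subjects, n_voxels))  # e.g. functional betas

betas = np.empty(n_voxels)
for v in range(n_voxels):
    X = np.column_stack([np.ones(n_subjects), mod_b[:, v]])  # per-voxel design matrix
    coef, *_ = np.linalg.lstsq(X, mod_a[:, v], rcond=None)
    betas[v] = coef[1]  # slope: voxel-wise association between modalities

print(f"mean voxel-wise slope = {betas.mean():.2f}")
```

The resulting slope map is the kind of voxel-by-voxel statistic that region-of-interest analyses cannot provide; BPM additionally handles ANCOVA, correlation fields, and inference through SPM.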
NASA Astrophysics Data System (ADS)
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than a design level of flood control. Traditional parametric frequency analysis of extreme values includes the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years; overestimation examples are also demonstrated. Bootstrap resampling can provide bias correction for the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM thus avoids various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as new parameter values combined with the NPM. An idea for incorporating these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians and encourages practitioners to consider the worst cases of disasters in their disaster management planning and practices.
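The non-parametric quantile estimate and its bootstrap standard error can be sketched directly. The sample below is synthetic and the interpolation between order statistics is an assumption for illustration, not the paper's exact algorithm:

```python
import numpy as np

# Synthetic "annual maximum rainfall" record of length > 100 (assumed data).
rng = np.random.default_rng(42)
sample = rng.gumbel(loc=100.0, scale=30.0, size=120)  # 120 years of annual maxima

def quantile_T(data, T):
    """Non-parametric T-year quantile: non-exceedance probability 1 - 1/T."""
    return np.quantile(data, 1.0 - 1.0 / T)

# Bootstrap: resample with replacement, re-estimate, and report the spread
# of the re-estimates as the estimation accuracy (bootstrap standard error).
boot = np.array([quantile_T(rng.choice(sample, size=sample.size, replace=True), 100)
                 for _ in range(2000)])
est = quantile_T(sample, 100)
print(f"100-yr quantile = {est:.1f}, bootstrap SE = {boot.std(ddof=1):.1f}")
```

No distribution is chosen, fitted, or tested, which is exactly what lets the NPM skip Steps 2, 3, 4 and 6 of the traditional procedure; a probable maximum value would enter as an upper bound truncating the resampled estimates.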
Semi-parametric time-domain quantification of HR-MAS data from prostate tissue
Ratiney, Helene; Albers, Mark J.; Rabeson, Herald; Kurhanewicz, John
2011-01-01
High Resolution – Magic Angle Spinning (HR-MAS) spectroscopy provides rich biochemical profiles that require accurate quantification to permit biomarker identification and to understand the underlying pathological mechanisms. Meanwhile, quantification of HR-MAS data from prostate tissue samples is challenging due to significant overlap between the resonant peaks, the presence of short T2∗ metabolites such as citrate or polyamines (T2 from 25 to 100 msec) and macromolecules, and variations in chemical shifts and T2∗s within a metabolite’s spin systems. Since existing methods do not address these challenges completely, a new quantification method was developed and optimized for HR-MAS data acquired with an ultra short TE and over 30,000 data points. The proposed method, named HR-QUEST (High Resolution – QUEST), iteratively employs the QUEST time-domain semi-parametric strategy with a new model function that incorporates prior knowledge from whole and subdivided metabolite signals. With these features, HR-QUEST is able to independently fit the chemical shifts and T2∗s of a metabolite’s spin systems, a necessity for HR-MAS data. By using the iterative fitting approach, it is able to account for significant contributions from macromolecules and to handle shorter T2 metabolites, such as citrate and polyamines. After subdividing the necessary metabolite basis signals, the root mean square (RMS) of the residual was reduced by 52% for measured HR-MAS data from prostate tissue. Monte Carlo studies on simulated spectra with varied macromolecular contributions showed that the iterative fitting approach (6 iterations) coupled with inclusion of long T2 macromolecule components in the basis set improve the quality of the fit, as assessed by the reduction of the RMS of the residual and of the RMS error of the metabolite signal estimate, by 27% and 71% respectively. With this optimized configuration, HR-QUEST was applied to measured HR-MAS prostate data and reliably
Axenovich, Tatiana I; Zorkoltseva, Irina V
2012-02-01
Often the quantitative data coming from proteomics and metabolomics studies have an irregular distribution with a spike. None of the widely used methods for human QTL mapping are applicable to such traits. Researchers have to reduce the sample, excluding the spike, and analyze only the continuous measurements. In this study, we propose a method for the parametric linkage analysis of traits with a spike in the distribution, and a software package, GADS, which implements this method. Our software includes not only programs for parametric linkage analysis but also a program for complex segregation analysis, which allows estimation of the model parameters used in linkage. We tested our method on real data on the vertical cup-to-disc ratio, an important characteristic of the optic disc associated with glaucoma, in a large pedigree from a Dutch isolated population. A significant linkage signal was identified on chromosome 6 with the help of GADS, whereas analysis of the normally distributed part of the sample demonstrated only a suggestive linkage peak on this chromosome. The software GADS is freely available at http://mga.bionet.nsc.ru/soft/index.html. PMID:22340440
Parametric analysis of 2D guided-wave photonic band gap structures
NASA Astrophysics Data System (ADS)
Ciminelli, C.; Peluso, F.; Armenise, M. N.
2005-11-01
The parametric analysis of the electromagnetic properties of 2D guided wave photonic band gap structures is reported with the aim of providing a valid tool for optimal design. The modelling approach is based on the Bloch-Floquet method. Different lattice configurations and geometrical parameters are considered. An optimum value of the ratio between the hole (or rod) radius and the lattice constant exists, and the calculations demonstrate that it is almost independent of the etching depth, depending only on the lattice type. The results are suitable for the design optimisation of photonic crystal reflectors to be used in integrated optical devices.
NASA Astrophysics Data System (ADS)
Brahim, S.; Bodnar, J. L.; Grossel, P.
2010-03-01
The aim of this work is to explore experimentally the possibilities of thermal diffusivity measurement, under reduced energy constraints, offered by front face random photothermal radiometry associated with a parametric analysis. First, we present the principle of the random method. Then, we present the experimental device, SAMMIR, used in our study. In a third stage, we present the studied sample, the experimental conditions selected, and the model developed for the study. Finally, using the experimental study of a sample of nylon 6.6, we show that the photothermal method allows, in this particular case, a good approximation of the thermal diffusivity parameter.
NASA Astrophysics Data System (ADS)
Lobach, I.; Benediktovitch, A.
2016-07-01
The possibility of quantitative texture analysis by means of parametric x-ray radiation (PXR) from relativistic electrons with energies above 50 MeV in a polycrystal is considered theoretically. In the case of a rather smooth orientation distribution function (ODF) and a large detector (θD >> 1/γ, where γ is the electron Lorentz factor), a universal relation between the ODF and the intensity distribution is presented. It is shown that if the ODF is independent of one of the Euler angles, then the texture is fully determined by the angular intensity distribution. Application of the method to simulated data shows the stability of the proposed algorithm.
A Parametric Analysis of Solidification in γ(Fe,Ni,Cr)-Nb-C Alloys
DuPont, J.N.; Robino, C.V.
1999-02-22
A parametric analysis is presented which summarizes the amounts of the total (γ/NbC + γ/Laves) and the individual γ/NbC and γ/Laves constituents that form during solidification of γ(Fe,Ni,Cr) alloys with variations in nominal Nb and C contents. Calculated results are presented for Fe base alloys and Ni base alloys. The results provide a quantitative rationale for understanding the relation between alloy composition and solidification microstructure and should provide useful insight into commercial alloys of similar composition.
Lau, Bryan; Cole, Stephen R.; Gange, Stephen J.
2010-01-01
In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio that is obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either the initiation of effective antiretroviral therapy, or clinical disease progression as a competing event. PMID:21337360
An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1996-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.
Performance evaluation and parametric analysis on cantilevered ramp injector in supersonic flows
NASA Astrophysics Data System (ADS)
Huang, Wei; Li, Shi-bin; Yan, Li; Wang, Zhen-guo
2013-03-01
The cantilevered ramp injector is one of the most promising candidates for enhancing mixing between the fuel and the supersonic airstream, and its parametric analysis has drawn increasing attention from researchers. The flow field characteristics and the drag force of the cantilevered ramp injector in a supersonic flow with a freestream Mach number of 2.0 have been investigated numerically, and the predicted injectant mole fraction and static pressure profiles have been compared with the available experimental data in the open literature. At the same time, a grid independency analysis has been performed using coarse, moderate and refined grid scales, and the influence of the turbulence model on the flow field of the cantilevered ramp injector has been examined as well. Further, the effects of the swept angle, the ramp angle and the length of the step on the performance of the cantilevered ramp injector have been discussed subsequently. The obtained results show that the grid scale has only a slight impact on the flow field of the cantilevered ramp injector except in the region near the fuel injector, and the predicted results show reasonable agreement with the experimental data. Additionally, the turbulence model makes only a slight difference to the numerical results, and the results obtained by the RNG k-ɛ and SST k-ω turbulence models are almost the same. The swept angle and the ramp angle have the same impact on the performance of the cantilevered ramp injector, and the kidney-shaped plume forms over a shorter distance as the swept and ramp angles increase. At the same time, the shape of the injectant mole fraction contour at X/H=6 goes through a transition from a peach-shaped plume to a kidney-shaped plume, and the cantilevered ramp injector with larger swept and ramp angles has higher mixing efficiency and larger drag force. The length of the step has only a slight impact on the drag force performance of the cantilevered ramp injector.
Generative pulsar timing analysis
NASA Astrophysics Data System (ADS)
Lentati, L.; Alexander, P.; Hobson, M. P.
2015-03-01
A new Bayesian method for the analysis of folded pulsar timing data is presented that allows for the simultaneous evaluation of evolution in the pulse profile in either frequency or time, along with the timing model and additional stochastic processes such as red spin noise, or dispersion measure variations. We model the pulse profiles using `shapelets' - a complete orthonormal set of basis functions that allow us to recreate any physical profile shape. Any evolution in the profiles can then be described as either an arbitrary number of independent profiles, or using some functional form. We perform simulations to compare this approach with established methods for pulsar timing analysis, and to demonstrate model selection between different evolutionary scenarios using the Bayesian evidence. The simplicity of our method allows for many possible extensions, such as including models for correlated noise in the pulse profile, or broadening of the pulse profiles due to scattering. As such, while it is a marked departure from standard pulsar timing analysis methods, it has clear applications for both new and current data sets, such as those from the European Pulsar Timing Array and International Pulsar Timing Array.
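In the pulsar literature, shapelets are the orthonormal Gauss-Hermite basis functions. The Python sketch below is an illustrative reconstruction of that basis (not the authors' code; the scale `beta`, the sampling grid and the truncation order `nmax` are assumptions) showing how a sampled profile is projected onto the first few shapelets and rebuilt:

```python
import math

def shapelet(n, x, beta=1.0):
    """Orthonormal 1-D Gauss-Hermite shapelet basis function."""
    u = x / beta
    h0, h1 = 1.0, 2.0 * u          # physicists' Hermite polynomials by recurrence
    if n == 0:
        h = h0
    elif n == 1:
        h = h1
    else:
        for k in range(2, n + 1):
            h0, h1 = h1, 2.0 * u * h1 - 2.0 * (k - 1) * h0
        h = h1
    norm = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi) * beta)
    return norm * h * math.exp(-u * u / 2.0)

def fit_profile(profile, xs, nmax=8, beta=1.0):
    """Project a sampled pulse profile onto shapelets 0..nmax and rebuild it."""
    dx = xs[1] - xs[0]
    coeffs = [sum(shapelet(n, x, beta) * p for x, p in zip(xs, profile)) * dx
              for n in range(nmax + 1)]
    recon = [sum(c * shapelet(n, x, beta) for n, c in enumerate(coeffs))
             for x in xs]
    return coeffs, recon
```

Because the basis is orthonormal, the coefficients are simple numerical inner products, and profile evolution can then be described as evolution of a small coefficient vector.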
Time evolution of parametric instability in large-scale gravitational-wave interferometers
NASA Astrophysics Data System (ADS)
Danilishin, Stefan L.; Vyatchanin, Sergey P.; Blair, David G.; Li, Ju; Zhao, Chunnong
2014-12-01
We present a study of three-mode parametric instability in large-scale gravitational-wave detectors. Previous work used a linearized model to study the onset of instability. This paper presents a nonlinear study of this phenomenon, which shows that the initial stage of an exponential rise of the amplitudes of a higher-order optical mode and the mechanical internal mode of the mirror is followed by a saturation phase, in which all three participating modes reach a new equilibrium state with constant oscillation amplitudes. Results suggest that stable operation of interferometers may be possible in the presence of such instabilities, thereby simplifying the task of suppression.
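The rise-then-saturation behaviour can be illustrated with a deliberately simplified single-mode toy model, not the paper's full three-mode equations: the gain `g`, damping `gamma` and saturation coefficient `beta` below are invented illustrative rates.

```python
import math

def evolve(g=2.0, gamma=1.0, beta=0.5, a0=1e-3, dt=1e-3, steps=20000):
    """Toy saturating-gain model da/dt = (g - gamma - beta a^2) a:
    exponential rise while a is small, equilibrium at sqrt((g - gamma)/beta)."""
    a, hist = a0, []
    for _ in range(steps):
        a += dt * (g - gamma - beta * a * a) * a
        hist.append(a)
    return hist

hist = evolve()
```

The amplitude first grows as exp((g - gamma) t) and then settles to the constant value sqrt((g - gamma)/beta), mimicking the saturation phase described in the abstract.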
Pouillot, Régis; Lubran, Meryl B; Cates, Sheryl C; Dennis, Sherri
2010-02-01
Home refrigeration temperatures and product storage times are important factors for controlling the growth of Listeria monocytogenes in refrigerated ready-to-eat foods. In 2005, RTI International, in collaboration with Tennessee State University and Kansas State University, conducted a national survey of U.S. adults to characterize consumers' home storage and refrigeration practices for 10 different categories of refrigerated ready-to-eat foods. No distributions of storage time or refrigeration temperature were presented in any of the resulting publications. This study used classical parametric survival modeling to derive parametric distributions from the RTI International storage practices data set. Depending on the food category, variability in product storage times was best modeled using either exponential or Weibull distributions. The shape and scale of the distributions varied greatly depending on the food category. Moreover, the results indicated that consumers tend to keep a product that is packaged by a manufacturer for a longer period of time than a product that is packaged at retail. Refrigeration temperatures were comparable to those previously reported, with the variability in temperatures best fit using a Laplace distribution, as an alternative to the empirical distribution. In contrast to previous research, limited support was found for a correlation between storage time and temperature. The distributions provided in this study can be used to better model consumer behavior in future risk assessments. PMID:20132677
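Fitting a Weibull distribution to storage times, as done for several food categories above, can be reproduced in outline with the standard profile-likelihood estimator for the Weibull shape and scale (a textbook method sketched in Python; this is not the study's survival-modeling code, and it ignores censoring):

```python
import math, random

def weibull_mle(data, tol=1e-10):
    """Weibull shape k and scale lam by maximum likelihood: bisection on the
    profile-likelihood equation for k, then the closed form for lam."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def g(k):
        sk = sum(x ** k for x in data)
        skl = sum(x ** k * math.log(x) for x in data)
        return skl / sk - 1.0 / k - mean_log

    lo, hi = 0.05, 50.0            # bracket for the shape parameter
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, lam
```

A shape near 1 recovers the exponential special case, which is why the two families compete in the storage-time fits reported above.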
A bifurcation analysis of boiling water reactor on large domain of parametric spaces
NASA Astrophysics Data System (ADS)
Pandey, Vikas; Singh, Suneet
2016-09-01
Boiling water reactors (BWRs), like any other physical system, are inherently nonlinear. The reactivity feedback, which is caused by both moderator density and temperature, gives rise to several effects reflecting the nonlinear behavior of the system. Stability analysis of the BWR is carried out with a simplified, reduced-order model, which couples point reactor kinetics with the thermal hydraulics of the reactor core. The linear stability analysis of the BWR steady states shows that at a critical value of the bifurcation parameter (i.e., the feedback gain), a Hopf bifurcation occurs. Stable and unstable domains of the parametric spaces cannot be predicted by linear stability analysis alone, because the stability of the system is not determined solely by the stability of its steady states; the stability of other dynamics of the system, such as limit cycles, must also be included. Nonlinear stability analysis (i.e., bifurcation analysis) therefore becomes an indispensable component of the stability analysis. The Hopf bifurcation, which occurs with one free parameter, is studied here; it marks the birth of limit cycles. The excitation of these limit cycles makes the system bistable in the case of subcritical bifurcation, whereas stable limit cycles continue into an unstable region for supercritical bifurcation. The distinction between subcritical and supercritical Hopf bifurcation is made by a two-parameter analysis (i.e., codimension-2 bifurcation), in which a generalized Hopf (GH) bifurcation takes place, separating the sub- and supercritical Hopf bifurcations. The various types of bifurcation, such as the limit point bifurcation of limit cycles (LPC), the period doubling bifurcation of limit cycles (PD), and the Neimark-Sacker bifurcation of limit cycles (NS), have been identified from the Floquet multipliers. The LPC manifests itself as a region of bistability, whereas a chaotic region exists because of a cascade of PD bifurcations. These regions of bistability and chaotic solutions are drawn on the various
Babalola, Omotunde M; Bonassar, Lawrence J
2009-06-01
While mechanical stimulation of cells seeded within scaffolds is widely thought to be beneficial, the amount of benefit observed is highly variable between experimental systems. Although studies have investigated specific experimental loading protocols thought to be advantageous for cartilage growth, less is known about the physical stimuli (e.g., pressures, velocities, and local strains) cells experience during these experiments. This study used results of a literature survey, which looked for patterns in the efficacy of mechanical stimulation of chondrocyte-seeded scaffolds, to inform the modeling of spatial patterns of physical stimuli present in mechanically stimulated constructs. The literature survey revealed a large variation in conditions used in mechanical loading studies, with a peak-to-peak strain of 10% (i.e., the maximum amount of deformation experienced by the scaffold) at 1 Hz on agarose scaffolds being the most frequently studied parameters and scaffold. This loading frequency was then used as the basis for simulation in the finite element analyses. 2D axisymmetric finite element models of 2 × 4 mm scaffolds with 360 modulus/permeability combinations were constructed using COMSOL MULTIPHYSICS software. A time-dependent coupled pore pressure/effective stress analysis was used to model fluid/solid interactions in the scaffolds upon loading. Loading was simulated using an impermeable frictionless loader on the top boundary with fluid and solid displacement confined to the radial axis. As expected, all scaffold materials exhibited classic poro-elastic behavior having pressurized cores with low fluid flow and edges with high radial fluid velocities. Under the simulation parameters of this study, PEG scaffolds had the highest pressure and radial fluid velocity but also the lowest shear stress and radial strain. Chitosan and KLD-12 simulated scaffold materials had the lowest radial strains and fluid velocities, with collagen scaffolds having the lowest
NASA Astrophysics Data System (ADS)
Dodonov, V. V.; Valverde, C.; Souza, L. S.; Baseia, B.
2011-10-01
The exact Wigner function of a parametrically excited quantum oscillator in a phase-sensitive amplifying/attenuating reservoir is found for initial even/odd coherent states. Studying the evolution of negativity of the Wigner function we show the difference between the “initial positivization time” (IPT), which is inversely proportional to the square of the initial size of the superposition, and the “final positivization time” (FPT), which does not depend on this size. Both these times can be made arbitrarily long in maximally squeezed high-temperature reservoirs. Besides, we find the conditions when some (small) squeezing can exist even after the Wigner function becomes totally positive.
NASA Technical Reports Server (NTRS)
Jeffries, K. S.; Renz, D. D.
1984-01-01
A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high-power (100 kW or more) space missions. Large-diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
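The closed-form line parameters behind such curves can be illustrated with textbook coaxial-line formulas. This is a hedged sketch of the same kind of calculation, not the report's own formulas, which additionally account for hollow conductors of finite wall thickness; the conductivity and frequency defaults are assumed illustrative values.

```python
import math

MU0 = 4.0e-7 * math.pi           # vacuum permeability, H/m
EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m

def coax_params(a, b, sigma=5.8e7, freq=20e3):
    """Per-unit-length inductance, capacitance and skin-effect resistance of
    an air-dielectric coaxial pair; a, b = inner/outer conductor radii (m)."""
    L = MU0 / (2.0 * math.pi) * math.log(b / a)                    # H/m
    C = 2.0 * math.pi * EPS0 / math.log(b / a)                     # F/m
    delta = math.sqrt(2.0 / (2.0 * math.pi * freq * MU0 * sigma))  # skin depth, m
    # current confined to one skin depth on each conductor surface
    R = (1.0 / (sigma * delta)) * (1.0 / (2.0 * math.pi * a)
                                   + 1.0 / (2.0 * math.pi * b))    # ohm/m
    return L, C, R
```

Whatever the radii, the product L*C equals mu0*eps0, so the line's phase velocity is the speed of light; only the L/C ratio (the characteristic impedance) depends on the geometry.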
A parametric analysis of performance characteristics of satellite-borne multiple-beam antennas
NASA Technical Reports Server (NTRS)
Salmasi, A. B.
1980-01-01
An analytical and empirical model is presented for parametric study of multiple beam antenna frequency reuse capacity and interbeam isolation. Two types of reflector antennas, the axisymmetric parabolic and the offset-parabolic reflectors, are utilized to demonstrate the model. The parameters of the model are introduced and their limitations are discussed in the context of parabolic reflector antennas. The model, however, is not restricted to analysis of reflector antenna performance. Results of the analyses are covered in two tables. The model parameters, objectives, and descriptions are given, multiple-beam antenna frequency reuse capacity and interbeam isolation analysis of the two types of reflectors are discussed as well as future developments of the program model.
Parametric analysis of a cylindrical negative Poisson’s ratio structure
NASA Astrophysics Data System (ADS)
Wang, Yuanlong; Wang, Liangmo; Ma, Zheng-dong; Wang, Tao
2016-03-01
Much research related to negative Poisson’s ratio (NPR), or auxetic, structures is emerging these days. Several types of 3D NPR structure have been proposed and studied, but almost all of them had cuboid shapes, which were not suitable for certain engineering applications. In this paper, a cylindrical NPR structure was developed and investigated. It was expected to be utilized in springs, bumpers, dampers and other similar applications. For the purpose of parametric analysis, a method of parametric modeling of cylindrical NPR structures was developed using MATLAB scripts. The scripts can automatically establish finite element models, invoke ABAQUS, read results, etc. Subsequently the influences of structural parameters, including the number of cells, the number of layers and the layer heights, on the uniaxial compression behavior of cylindrical NPR structures were investigated. This led to the conclusion that the stiffness of the cylindrical NPR structure was enhanced by increasing the number of cells and reducing the effective layer height. Moreover, small numbers of layers resulted in a late transition of the load-displacement curve from low stiffness to high stiffness. In addition, the middle contraction regions were more apparent with larger numbers of cells, smaller numbers of layers and smaller effective layer heights. The results indicate that the structural parameters had significant effects on the load-displacement curves and deformed shapes of cylindrical NPR structures. This paper should facilitate further engineering applications of cylindrical NPR structures.
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
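The workflow of partitioning voxels into k clusters and then validating k independently can be sketched generically. In the sketch below, synthetic 2-D points stand in for the DCE/DWI feature vectors, and the farthest-first initialization and silhouette validation index are assumptions, not the paper's pipeline:

```python
import math, random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    # deterministic farthest-first initialization, then Lloyd's iterations
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(v) / len(members) for v in zip(*members))
    return labels

def silhouette(points, labels, k):
    # mean silhouette width (b - a) / max(a, b); higher = better separated
    total = 0.0
    for i, p in enumerate(points):
        own = [q for q, l in zip(points, labels) if l == labels[i]]
        a = sum(math.sqrt(dist2(p, q)) for q in own) / max(len(own) - 1, 1)
        b = min(sum(math.sqrt(dist2(p, q)) for q in grp) / len(grp)
                for grp in ([q for q, l in zip(points, labels) if l == j]
                            for j in range(k) if j != labels[i])
                if grp)
        total += (b - a) / max(a, b)
    return total / len(points)

# two well-separated synthetic "voxel" groups; validation should pick k = 2
random.seed(7)
def blob(cx, cy):
    return [(cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3)) for _ in range(40)]
pts = blob(0.0, 0.0) + blob(5.0, 5.0)
best_k = max((2, 3, 4), key=lambda k: silhouette(pts, kmeans(pts, k), k))
```

As in the study, the number of clusters is chosen by an internal validation index with no prior knowledge of the data.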
NASA Astrophysics Data System (ADS)
Dai, Xiaoqian; Tian, Jie; Chen, Zhe
2010-03-01
Parametric images can represent both the spatial distribution and the quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, the dynamic PET data are first pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot itself separate signal from noise. Second, volume-wise PCA is applied to the PET data; the signal is mostly represented by the first few principal components (PCs), with the noise left in the subsequent PCs. Noise-reduced data are then obtained from the first few PCs by applying the inverse PCA transform, and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original data set. Finally, the new data set is used to generate parametric images using the LLS estimation method. Compared with other noise-removal methods, the proposed method achieves high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
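The PCA denoising step can be sketched as a rank-1 truncation computed by power iteration. This is illustrative only: the pre-transformation that standardizes the noise variance is omitted, and the synthetic "volume" below (30 voxel TACs sharing one temporal shape) is an invented stand-in for dynamic PET data.

```python
import math, random

def first_pc(X, iters=200):
    """Leading right-singular vector of X (rows = voxel TACs) by power
    iteration on X^T X; a minimal stand-in for a full volume-wise PCA."""
    n = len(X[0])
    v = [1.0] * n
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(n)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def denoise_rank1(X):
    """'Inverse PCA' keeping one component: project every TAC onto the
    first principal component."""
    v = first_pc(X)
    scores = [sum(row[j] * v[j] for j in range(len(v))) for row in X]
    return [[s * vj for vj in v] for s in scores]

# synthetic volume: rank-1 clean signal plus additive Gaussian noise
random.seed(3)
curve = [1.0 - math.exp(-j / 5.0) for j in range(20)]
clean = [[(1.0 + i / 10.0) * c for c in curve] for i in range(30)]
noisy = [[x + random.gauss(0.0, 0.05) for x in row] for row in clean]
denoised = denoise_rank1(noisy)
```

Keeping only the leading component discards the noise energy in the remaining directions, which is why the subsequent LLS fit becomes less biased.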
Examining deterrence of adult sex crimes: A semi-parametric intervention time series approach
Park, Jin-Hong; Bandyopadhyay, Dipankar; Letourneau, Elizabeth
2013-01-01
Motivated by recent developments in dimension reduction (DR) techniques for time series data, the general deterrent effect of South Carolina (SC)’s sex offender registration and notification (SORN) policy for preventing sex crimes was examined. Using adult sex crime arrestee data from 1990 to 2005, the idea of the Central Mean Subspace (CMS) is extended to intervention time series analysis (CMS-ITS) to model the sequential intervention effects of 1995 (the year SC’s SORN policy was initially implemented) and 1999 (the year the policy was revised to include online notification) on the time series spectrum. The CMS-ITS model estimation was achieved via kernel smoothing techniques and compared to interrupted auto-regressive integrated moving average (ARIMA) models. Simulation studies and application to the real data underscore our model’s ability to achieve parsimony and to detect intervention effects not determined earlier via traditional ARIMA models. From a public health perspective, findings from this study draw attention to the potential general deterrent effects of SC’s SORN policy. These findings are considered in light of the overall body of research on sex crime arrestee registration and notification policies, which remain controversial. PMID:24795489
Broadband fiber-optical parametric amplification for ultrafast time-stretch imaging at 1.0 μm.
Wei, Xiaoming; Lau, Andy K S; Xu, Yiqing; Zhang, Chi; Mussot, Arnaud; Kudlinski, Alexandre; Tsia, Kevin K; Wong, Kenneth K Y
2014-10-15
We demonstrate a broadband all-fiber-optical parametric amplifier for ultrafast time-stretch imaging at 1.0 μm, featured by its compact design, alignment-free, high efficiency, and flexible gain spectrum through fiber nonlinearity- and dispersion-engineering: specifically on a dispersion-stabilized photonic-crystal fiber (PCF) to achieve a net gain over 20 THz (75 nm) and a highest gain of ~6000 (37.5 dB). Another unique feature of the parametric amplifier, over other optical amplifiers, is the coherent generation of a synchronized signal replica (called idler) that can be exploited to offer an extra 3-dB gain by optically superposing the signal and idler. It further enhances signal contrast and temporal stability. For proof-of-concept purpose, ultrahigh speed and diffraction-limited time-stretch microscopy is demonstrated with a single-shot line-scan rate of 13 MHz based on the dual-band (signal and idler) detection. Our scheme can be extended to other established bioimaging modalities, such as optical coherence tomography, near infrared fluorescence, and photoacoustic imaging, where weak signal detection at high speed is required. PMID:25361137
Ethanol production by enzymatic hydrolysis: parametric analysis of a base-case process
Isaacs, S.H.
1984-05-01
A base-case flowsheet for an enzymatic hydrolysis process is presented. Included is a parametric sensitivity analysis to identify key research issues and an assessment of this technology. The plant discussed is a large-scale facility, producing 50 million gallons of ethanol per year. The plant design is based on the process originally conceived by the US National Army Command and consists of these process steps: pretreatment; enzyme production; enzyme hydrolysis; fermentation; and distillation. The base-case design parameters are based on recent laboratory data from Lawrence Berkeley Laboratories and the University of California at Berkeley. The selling price of ethanol is used to compare variations in the base-case operating parameters, which include hydrolysis efficiencies, capital costs, enzyme production efficiencies, and enzyme recycle. 28 references, 38 figures, 8 tables.
Takeuchi, E.S.; Size, P.J.
1994-12-31
The Taguchi Method of Experimental Design was utilized to parametrically assess the effects of four variables in cell configuration on performance of spirally wound lithium oxyhalide D cells. This approach utilizes fractional factorial designs requiring a fraction of the number of experiments required of full factorial experiments. The Taguchi approach utilizes ANOVA analysis for calculating the percent contribution of each factor to battery performance as well as main effects of each factor. The four factors investigated in this study were the electrolyte type, the electrolyte concentration, the depolarizer type, and the mechanical cell design. The effects of these four factors on 1A constant current discharge, low temperature discharge, start-up, and shelf-life were evaluated. The factor having the most significant effect on cell performance was the electrolyte type.
NASA Astrophysics Data System (ADS)
Kumar, Deepak; Kumar, Vivek; Singh, V. P.
2009-07-01
In the present paper, the effects of cake thickness and time on the efficiency of brown stock washer of the paper mill are studied by using mathematical model of pulp washing for the species of sodium and lignin ions. The mechanism of the diffusion- dispersion washing of the bed of the pulp fibers is mathematically modeled by the basic material balance and adsorption isotherm is used to describe the equilibrium between the concentration of the solute in the liquor and concentration of the solute on the fibers. To study the parametric effect, numerical solutions of the axial domain of the system governed by partial differential equations (transport and isotherm equations) for different boundary conditions are obtained by the "pdepe" solver in MATLAB source code. The effects of both the parameters are shown by three dimensional graphical representation as well as concentration profiles.
Parametric analysis of the thermal effects on the divertor in tokamaks during plasma disruptions
Bruhn, M.L.
1988-04-01
Plasma disruptions are an ever present danger to the plasma-facing components in today's tokamak fusion reactors. This threat results from our lack of understanding and limited ability to control this complex phenomenon. In particular, severe energy deposition occurs on the divertor component of the double-null configured tokamak reactor during such disruptions. A hybrid computational model developed to estimate and graphically illustrate global thermal effects of disruptions on the divertor plates is described in detail. The quasi-two-dimensional computer code, TADDPAK (Thermal Analysis Divertor during Disruptions PAcKage), is used to conduct parametric analysis for the TIBER II Tokamak Engineering Test Reactor Design. The dependence of these thermal effects on divertor material choice, disruption pulse length, disruption pulse shape, and the characteristic thickness of the plasma scrape-off layer is investigated for this reactor design. Results and conclusions from this analysis are presented. Improvements to this model and issues that require further investigation are discussed. Cursory analysis for ITER (International Thermonuclear Experimental Reactor) is also presented in the appendix. 75 refs., 49 figs., 10 tabs.
Global analysis and parametric dependencies for potential unintended hydrogen-fuel releases
Harstad, Kenneth; Bellan, Josette
2006-01-01
Global, simplified analyses of gaseous-hydrogen releases from a high-pressure vessel and liquid-hydrogen pools are conducted for two purposes: (1) establishing order-of-magnitude values of characteristic times and (2) determining parametric dependencies of these characteristic times on the physical properties of the configuration and on the thermophysical properties of hydrogen. According to the ratio of the characteristic release time to the characteristic mixing time, two limiting configurations are identified: (1) a rich cloud exists when this ratio is much smaller than unity, and (2) a jet exists when this ratio is much larger than unity. In all cases, it is found that the characteristic release time is proportional to the total released mass and inversely proportional to a characteristic area. The approximate size, convection velocity, and circulation time of unconfined burning-cloud releases scale with the cloud mass at powers 1/3, 1/6, and 1/6, respectively, multiplied by an appropriately dimensional constant; the influence of cross flow can only be important if its velocity exceeds that of internal convection. It is found that the fireball lifetime is approximately the maximum of the release time and thrice the convection-associated characteristic time. Transition from deflagration to detonation can occur only if the size of unconfined clouds exceeds by a factor of O(10) that of a characteristic detonation cell, which ranges from 0.015 m under stoichiometric conditions to approximately 1 m under extreme rich/lean conditions. For confined vapor pockets, transition occurs only for pocket sizes larger than the cell size. In jets, the release time is inversely proportional to the initial vessel pressure and has a square root dependence on the vessel temperature. Jet velocities are a factor of 10 larger than convective velocities in fireballs and combustion is possible only in the subsonic, downstream region where entrainment may occur.
A Bayesian Semi-parametric Approach for the Differential Analysis of Sequence Counts Data.
Guindani, Michele; Sepúlveda, Nuno; Paulino, Carlos Daniel; Müller, Peter
2014-04-01
Data obtained using modern sequencing technologies are often summarized by recording the frequencies of observed sequences. Examples include the analysis of T cell counts in immunological research and studies of gene expression based on counts of RNA fragments. In both cases the items being counted are sequences, of proteins and base pairs, respectively. The resulting sequence-abundance distribution is usually characterized by overdispersion. We propose a Bayesian semi-parametric approach to implement inference for such data. Besides modeling the overdispersion, the approach takes also into account two related sources of bias that are usually associated with sequence counts data: some sequence types may not be recorded during the experiment and the total count may differ from one experiment to another. We illustrate our methodology with two data sets, one regarding the analysis of CD4+ T cell counts in healthy and diabetic mice and another data set concerning the comparison of mRNA fragments recorded in a Serial Analysis of Gene Expression (SAGE) experiment with gastrointestinal tissue of healthy and cancer patients. PMID:24833809
NASA Astrophysics Data System (ADS)
Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Kwak, Byung-Joon
2013-07-01
This study aimed to quantitatively analyze data from diffusion tensor imaging (DTI) using statistical parametric mapping (SPM) in patients with brain disorders and to assess its potential utility for analyzing brain function. DTI was obtained by performing 3.0-T magnetic resonance imaging for patients with Alzheimer's disease (AD) and vascular dementia (VD), and the data were analyzed using Matlab-based SPM software. The two-sample t-test was used for error analysis of the location of the activated pixels. We compared regions of white matter where the fractional anisotropy (FA) values were low and the apparent diffusion coefficients (ADCs) were increased. In the AD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right sub-lobar insula, and right occipital lingual gyrus whereas the ADCs were significantly increased in the right inferior frontal gyrus and right middle frontal gyrus. In the VD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right limbic cingulate gyrus, and right sub-lobar caudate tail whereas the ADCs were significantly increased in the left lateral globus pallidus and left medial globus pallidus. In conclusion by using DTI and SPM analysis, we were able to not only determine the structural state of the regions affected by brain disorders but also quantitatively analyze and assess brain function.
Infinitesimal-area 2D radiative analysis using parametric surface representation, through NURBS
Daun, K.J.; Hollands, K.G.T.
1999-07-01
The use of form factors in the treatment of radiant enclosures requires that the radiosity and surface properties be treated as uniform over finite areas. This restriction can be relaxed by applying an infinitesimal-area analysis, where the radiant exchange is taken to be between infinitesimal areas, rather than finite areas. This paper presents a generic infinitesimal-area formulation that can be applied to two-dimensional enclosure problems. (Previous infinitesimal-area analyses have largely been restricted to specific, one-dimensional problems.) Specifically, the paper shows how the analytical expression for the kernel of the integral equation can be obtained without human intervention, once the enclosure surface has been defined parametrically. This can be accomplished by using a computer algebra package or by using NURBS algorithms, which are the industry standard for the geometrical representations used in CAD-CAM codes. Once the kernel has been obtained by this formalism, the 2D integral equation can be set up and solved numerically. The result is a single general-purpose infinitesimal-area analysis code that can proceed from surface specification to solution. The authors have implemented this 2D code and tested it on 1D problems, whose solutions have been given in the literature, obtaining agreement commensurate with the accuracy of the published solutions.
NASA Astrophysics Data System (ADS)
Lee, Liming; Kou, Kit Ian; Zhang, Wentao; Liang, Jinling; Liu, Yang
2016-07-01
In this paper, we consider finite-time control problems for linear multi-agent systems subject to exogenous constant disturbances and impulses. Some sufficient conditions are obtained to ensure the finite-time boundedness of the multi-agent systems; these conditions can then be reduced to a feasibility problem involving linear matrix inequalities. Numerical examples are given to illustrate the results.
Non-parametric seismic hazard analysis in the presence of incomplete data
NASA Astrophysics Data System (ADS)
Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush
2016-07-01
The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distribution, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study aims to introduce an imputation procedure for completing earthquake catalog data that will allow the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate the earthquake magnitude distribution in Tehran, the capital city of Iran.
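A minimal sketch of the idea follows, assuming a Gutenberg-Richter (exponential) magnitude model and a single completeness magnitude; the catalog, b-value, and reporting fraction are all synthetic, and the study's actual imputation procedure may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "true" catalog: Gutenberg-Richter magnitudes above M4 (b ~ 1)
m_min, m_c = 4.0, 4.5                 # minimum and completeness magnitudes
beta = np.log(10) * 1.0               # G-R b-value of 1 in natural-log form
m_true = m_min + rng.exponential(1.0 / beta, size=2000)

# Incomplete catalog: only ~30% of events below m_c are reported
kept = (m_true >= m_c) | (rng.random(m_true.size) < 0.3)
m_obs = m_true[kept]

# Estimate beta from the complete part of the catalog (MLE on excesses)
n_above = int(np.sum(m_obs >= m_c))
beta_hat = 1.0 / np.mean(m_obs[m_obs >= m_c] - m_c)

# Expected number of events in [m_min, m_c) under the fitted G-R model
n_below_exp = n_above * (np.exp(beta_hat * (m_c - m_min)) - 1.0)
n_missing = max(0, int(round(n_below_exp)) - int(np.sum(m_obs < m_c)))

# Impute: draw from the fitted model and keep the incomplete range
# (a crude truncated draw; a production scheme would sample the
# truncated distribution directly)
draw = m_min + rng.exponential(1.0 / beta_hat, size=4 * n_missing)
m_imp = draw[draw < m_c][:n_missing]

catalog = np.concatenate([m_obs, m_imp])
kde = stats.gaussian_kde(catalog)     # non-parametric magnitude density
```

The completed catalog can then feed any non-parametric density estimator; here a Gaussian KDE stands in for whichever estimator the study uses.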
NASA Technical Reports Server (NTRS)
Mizukami, M.; Saunders, J. D.
1995-01-01
The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high-fidelity 2D simulation and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids and bleed models. Overall, the k-e turbulence model, and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity, are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, providing a higher-fidelity simulation of the inlet flowfield than inviscid methods in a reasonable turnaround time.
NASA Astrophysics Data System (ADS)
Moradian, Zabihallah; Einstein, Herbert H.; Ballivy, Gerard
2016-03-01
Determination of the cracking levels during crack propagation is one of the key challenges in the field of fracture mechanics of rocks. Acoustic emission (AE) is a technique that has been used to detect cracks as they occur across the specimen. Parametric analysis of AE signals and correlation of these parameters (e.g., hits and energy) with stress-strain plots of rocks allow us to properly identify cracking levels. The number of AE hits is related to the number of cracks, and the AE energy is related to the magnitude of the cracking event. For a full understanding of the fracture process in brittle rocks, prismatic specimens of granite containing pre-existing flaws were tested in uniaxial compression, and their cracking process was monitored with both AE and high-speed video imaging. In this paper, the characteristics of the AE parameters and the evolution of cracking sequences are analyzed for every cracking level. Based on micro- and macro-crack damage, a classification of cracking levels is introduced. This classification contains eight stages: (1) crack closure, (2) linear elastic deformation, (3) micro-crack initiation (white patch initiation), (4) micro-crack growth (stable crack growth), (5) micro-crack coalescence (macro-crack initiation), (6) macro-crack growth (unstable crack growth), (7) macro-crack coalescence and (8) failure.
A Parametric Cycle Analysis of a Separate-Flow Turbofan with Interstage Turbine Burner
NASA Technical Reports Server (NTRS)
Marek, C. J. (Technical Monitor); Liew, K. H.; Urip, E.; Yang, S. L.
2005-01-01
Today's modern aircraft are based on air-breathing jet propulsion systems, which use moving fluids as substances to transform the energy carried by the fluids into power. Throughout aero-vehicle evolution, improvements have been made to engine efficiency and pollutant reduction. This study focuses on a parametric cycle analysis of a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). The ITB considered in this paper is a relatively new concept in modern jet engine propulsion. The ITB serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., in the transition duct. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, linear relation between high- and low-pressure turbines, and high-pressure turbine inlet temperature, to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB.
NASA Astrophysics Data System (ADS)
Abot, Jandro L.; Kiyono, César Y.; Thomas, Gilles P.; Silva, Emílio C. N.
2015-07-01
Carbon nanotube (CNT) yarns are micron-size fibers that contain thousands of intertwined CNTs in their cross sections and exhibit piezoresistance characteristics that can be tapped for sensing purposes. Sensor yarns can be integrated into polymeric and composite materials to measure strain through resistance measurements without adding weight or altering the integrity of the host material. This paper includes the details of novel strain gauge sensor configurations comprised of CNT yarn, the numerical modeling of their piezoresistive response, and the parametric analysis scheme that determines the highest sensor sensitivity to mechanical loading. The effects of several sensor configuration parameters are discussed, including the inclination and separation of the CNT yarns within the sensor, the mechanical properties of the CNT yarn, the direction and magnitude of the applied mechanical load, and the dimensions and shape of the sensor. The sensor configurations that yield the highest sensitivity are presented and discussed in terms of the mechanical and electrical properties of the CNT yarn. It is shown that strain gauge sensors consisting of CNT yarn are sensitive enough to measure strain, and could exhibit even higher gauge factors than those of metallic foil strain gauges.
NASA Astrophysics Data System (ADS)
Sayadi, Taraneh; Schmid, Peter; Richecoeur, Franck; Durox, Daniel
2014-11-01
Thermo-acoustic systems belong to a class of dynamical systems that are governed by multiple parameters. Changing these parameters alters the response of the dynamical system and causes it to bifurcate. Due to their many applications and potential impact on a variety of combustion systems, there is great interest in devising control strategies to weaken or suppress thermo-acoustic instabilities. However, the system dynamics have to be available in reduced-order form to allow the design of such controllers and their operation in real-time. Since the dominant modes and their respective frequencies change with varying system parameters, the dynamical system needs to be analyzed separately for each set of fixed parameter values before the dynamics can be linked in parameter space. This two-step process is not only cumbersome, but also ambiguous when applied to systems operating close to a bifurcation point. Here we propose a parametrized decomposition algorithm which is capable of analyzing dynamical systems as they go through a bifurcation, extracting the dominant modes of the pre- and post-bifurcation regimes. The algorithm is applied to a thermo-acoustically oscillating flame and to pressure signals from experiments. A few selected modes are capable of reproducing the dynamics.
NASA Astrophysics Data System (ADS)
Ozden, Ender; Tari, Ilker
2016-02-01
A Polymer Electrolyte Membrane (PEM) fuel cell is numerically investigated both as fresh and as degraded with the help of observed degradation patterns reported in the literature. The fresh fuel cell model is validated and verified with the data from the literature. Modifying the model by varying the parameters affected by degradation, a degraded PEM fuel cell model is created. The degraded fuel cell is parametrically analyzed by using a commercial Computational Fluid Dynamics (CFD) software. The investigated parameters are the membrane equivalent weight, the Catalyst Layer (CL) porosity and viscous resistance, the Gas Diffusion Layer (GDL) porosity and viscous resistance, and the bipolar plate contact resistance. It is shown for the first time that PEM fuel cell overall degradation can be numerically estimated by combining experimental data from degraded individual components. By comparing the simulation results for the fresh and the degraded PEM fuel cells for two years of operation, it is concluded that the effect of overall degradation on cell potential is significant - estimated to be 17% around the operating point of the fuel cell at 0.95 V open circuit voltage and 70 °C operating temperature.
Marmarelis, Vasilis Z.; Shin, Dae C.; Zhang, Yaping; Kautzky-Willer, Alexandra; Pacini, Giovanni; D’Argenio, David Z.
2013-01-01
Background: Modeling studies of the insulin–glucose relationship have mainly utilized parametric models, most notably the minimal model (MM) of glucose disappearance. This article presents results from the comparative analysis of the parametric MM and a nonparametric Laguerre-based Volterra Model (LVM) applied to the analysis of insulin-modified (IM) intravenous glucose tolerance test (IVGTT) data from a clinical study of gestational diabetes mellitus (GDM). Methods: An IM IVGTT study was performed 8 to 10 weeks postpartum in 125 women who were diagnosed with GDM during their pregnancy [population at risk of developing diabetes (PRD)] and in 39 control women with normal pregnancies (control subjects). The measured plasma glucose and insulin from the IM IVGTT in each group were analyzed via a population analysis approach to estimate the insulin sensitivity parameter of the parametric MM. In the nonparametric LVM analysis, the glucose and insulin data were used to calculate the first-order kernel, from which a diagnostic scalar index representing the integrated effect of insulin on glucose was derived. Results: Both the parametric MM and the nonparametric LVM describe the glucose concentration data in each group with good fidelity, with an improved measured-versus-predicted r2 value for the LVM of 0.99 versus 0.97 for the MM analysis in the PRD. However, application of the respective diagnostic indices of the two methods does result in a different classification of 20% of the individuals in the PRD. Conclusions: It was found that the data-based nonparametric LVM revealed additional insights about the manner in which infused insulin affects blood glucose concentration. PMID:23911176
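The parametric MM referred to here is Bergman's minimal model. A simulation sketch follows, with illustrative (not study-estimated) parameter values and an assumed post-bolus insulin forcing:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bergman minimal model of glucose disappearance (illustrative parameter
# values, NOT those estimated in the study)
p1, p2, p3 = 0.03, 0.02, 1.0e-5   # 1/min, 1/min, 1/min per (uU/mL)
Gb, Ib = 90.0, 10.0               # basal glucose (mg/dL) and insulin (uU/mL)
G0 = 280.0                        # glucose just after the IV bolus

def insulin(t):
    """Assumed plasma-insulin forcing: a decaying post-bolus excursion."""
    return Ib + 100.0 * np.exp(-t / 20.0)

def rhs(t, y):
    G, X = y                      # G: glucose; X: remote insulin action
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

sol = solve_ivp(rhs, (0.0, 180.0), [G0, 0.0], dense_output=True, rtol=1e-8)
SI = p3 / p2                      # the MM insulin-sensitivity index
G_end = float(sol.sol(180.0)[0])  # glucose has relaxed back toward basal
```

The scalar SI = p3/p2 is the insulin sensitivity parameter that the population analysis estimates, and that the LVM's kernel-based index is compared against.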
Dismuke, C E; Sena, V
1999-05-01
The use of Diagnosis Related Groups (DRG) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to gradually phase in the use of the DRG system for budget allocation by using blended hospital-specific and national DRG case-mix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital specific costs was to decrease, while the percentage based on DRG case-mix was to increase. This was scheduled to continue until 1995 when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third party payers are currently paying based on DRGs, the adoption of DRG case-mix as a National Health Service budget setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget setting criterion as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992-1994, using both parametric and non-parametric frontier models. We find evidence
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach - parametric response mapping (PRM) - utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with the highest eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
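The threshold-selection step can be sketched on synthetic histograms. Everything below is hypothetical: the two-mode lung model, the numbers, and the component-selection rule (eigenvalues above their mean, standing in for the paper's Kaiser-style greater-than-one rule on standardized data).

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed input: one CT-density histogram per subject over HU bins
bins = np.arange(-1000, -600, 10)
centers = bins[:-1] + 5.0

def synth_hist(emph_frac):
    """Two-mode lung model: 'normal' voxels near -850 HU, 'emphysema' near -950."""
    x = np.concatenate([rng.normal(-850, 40, int(1e4 * (1 - emph_frac))),
                        rng.normal(-950, 20, int(1e4 * emph_frac))])
    h, _ = np.histogram(x, bins=bins)
    return h / h.sum()

H = np.array([synth_hist(f) for f in np.linspace(0.0, 0.4, 20)])

# PCA of the histograms via the covariance eigen-decomposition
eigval, eigvec = np.linalg.eigh(np.cov(H - H.mean(0), rowvar=False))
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]         # descending order

# Sum the leading components (here: eigenvalues above their mean)
pc_curve = eigvec[:, eigval > eigval.mean()].sum(axis=1)

# Extremes of the summed component curve give data-driven density thresholds
thr_lo, thr_hi = sorted([centers[np.argmin(pc_curve)],
                         centers[np.argmax(pc_curve)]])
```

The extremes of the summed component curve mark the density bins that vary most across subjects, which is what motivates using them as PRM thresholds.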
A parametric analysis of lithospheric imaging by Full-Waveform Inversion of teleseismic body-waves
NASA Astrophysics Data System (ADS)
Beller, Stephen; Monteiller, Vadim; Operto, Stéphane; Nolet, Guust; Virieux, Jean
2015-04-01
With the deployment of dense seismic arrays and the continuous growth of computing facilities, full-waveform inversion (FWI) of teleseismic data has become a method of choice for 3D high-resolution lithospheric imaging. FWI is a local optimization problem that seeks to estimate Earth's elastic properties by iteratively minimizing the misfit function between observed and modeled seismograms. Recent investigations have shown the feasibility of such local inversions by injecting a pre-computed global wavefield at the edges of the lithospheric target. In this study, we present all the methodological ingredients needed for the application of FWI to lithospheric data. The global wavefield, which is computed in an axisymmetric global earth with AxiSEM, is injected into the lithospheric target by the so-called total-field/scattered-field method. The inversion, which is implemented with an adjoint formalism, is performed following a multiscale approach, proceeding hierarchically from low to high frequencies. We further perform a parametric analysis in a realistic model representative of the Western Alps. This analysis mainly focuses on the FWI sensitivity to the source characteristics. One key issue is the estimation of the temporal source excitation, as there might be some trade-off between the source estimation and the subsurface update. We also investigate the imprint of the source distribution on the spatial resolution of the imaging, the FWI sensitivity to the accuracy of the starting model, and the effects of considering a complex topography. Seismic modeling in the FWI models allows us to assess which parts of the teleseismic wavefield significantly contribute to the imaging.
NASA Astrophysics Data System (ADS)
Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio
2015-01-01
The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to the radiation for a long exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure time, which results in a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomography reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e., with different values of SNR) and equipped with gold beads to help in the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameters for each coefficient. To ensure a high signal-to-noise ratio, we verified that we used the appropriate wavelet family at the appropriate level, choosing the "sym8" wavelet at level 3. For the bilateral filter, many tests were run to determine the proper filter parameters, namely the size of the filter, the range parameter and the
NASA Astrophysics Data System (ADS)
Nguyen, C.; Chandra, C. V.
2014-12-01
The separation of radar signatures depicting cloud and drizzle within a radar pulse volume is a fundamental problem whose solution is required to decouple the microphysical and dynamical processes introduced by turbulence. Such a solution would lead to the development of new meteorological products. In this presentation, a method to detect, separate and estimate multiple radar echoes from cloud and drizzle obtained from vertically pointing cloud Doppler spectra is described. In the case when only clouds are present, the Doppler spectrum is symmetrical and is well approximated by a Gaussian. To extract cloud echoes, a parametric maximum likelihood estimator in the time domain is employed using the recorded radar Doppler spectra data. To detect skewness in the radar spectrum, goodness-of-fit parameters are defined. It is shown that these new detection parameters exhibit low sensitivity to poor signal-to-noise ratios and large signal spectrum widths. The proposed method can consequently be applied to signals with shorter integration time; this significantly reduces the impact of small-scale dynamics present in the Doppler spectrum. Additionally, signals near the cloud top and cloud base are used as constraints to optimize the detection and estimation algorithm's performance. The applications of the technique include inference of the vertical air motion and the particle size distribution of the drizzle. The method will be tested on datasets that have been collected by the ARM cloud radars.
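The Gaussian-fit-plus-goodness-of-fit idea can be illustrated on a synthetic spectrum. The velocities, widths, and amplitudes below are invented, and the sketch fits the spectrum directly, whereas the paper's estimator works in the time domain:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
v = np.linspace(-5.0, 5.0, 256)              # Doppler velocity axis (m/s)

def gauss(v, p, mu, sig):
    return p * np.exp(-0.5 * ((v - mu) / sig) ** 2)

def fit_gof(spec):
    """Fit one Gaussian and return a normalised residual power as the
    goodness-of-fit parameter (small = symmetric, cloud-like spectrum)."""
    popt, _ = curve_fit(gauss, v, spec, p0=[spec.max(), 0.0, 1.0])
    resid = spec - gauss(v, *popt)
    return np.sum(resid ** 2) / np.sum(spec ** 2)

noise = 0.005 * rng.random(v.size)           # weak noise floor
cloud = gauss(v, 1.0, -0.3, 0.35)            # symmetric cloud echo
drizzle = gauss(v, 0.3, -1.8, 0.9)           # broader, faster-falling drizzle

gof_cloud = fit_gof(cloud + noise)           # near-perfect single-Gaussian fit
gof_mixed = fit_gof(cloud + drizzle + noise) # skewed spectrum flags drizzle
```

A large residual after the single-Gaussian fit plays the role of the skewness-detection parameter: it is small for a cloud-only spectrum and large once a drizzle mode distorts the symmetry.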
NASA Astrophysics Data System (ADS)
Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos
2006-10-01
We use the detrended fluctuation analysis (DFA) and the Grassberger-Procaccia (GP) analysis methods in order to study language characteristics. Although we construct our signals using only word lengths or word frequencies, thereby excluding a huge amount of linguistic information, the application of GP analysis indicates that linguistic signals may be considered the manifestation of a complex system of high dimensionality, different from random signals or systems of low dimensionality such as the Earth climate. The DFA method is additionally able to distinguish a natural-language signal from a computer-code signal. This last result may be useful in the field of cryptography.
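For reference, DFA itself is compact. The sketch below runs first-order DFA on white noise (expected scaling exponent α ≈ 0.5) rather than on the word-length signals used in the paper:

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: F(n) per window size n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        segs = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # local linear trend per window
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.asarray(F)

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)                # white noise: expect alpha ~ 0.5
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
```

Replacing `x` with a sequence of word lengths from a text versus from source code is what lets DFA separate the two, via their different α values.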
Non-parametric causal assessment in deep-time geological records
NASA Astrophysics Data System (ADS)
Agasøster Haaga, Kristian; Diego, David; Brendryen, Jo; Hannisdal, Bjarte
2016-04-01
The interplay between climate variables and the timing of their feedback mechanisms are typically investigated using fully coupled climate system models. However, as we delve deeper into the geological past, mechanistic process models become increasingly uncertain, making nonparametric approaches more attractive. Here we explore the use of two conceptually different methods for nonparametric causal assessment in palaeoenvironmental archives of the deep past: convergent cross mapping (CCM) and information transfer (IT). These methods have the potential to capture interactions in complex systems even when data are sparse and noisy, which typically characterises geological proxy records. We apply these methods to proxy time series that capture interlinked components of the Earth system at different temporal scales, and quantify both the interaction strengths and the feedback lags between the variables. Our examples include the linkage between the ecological prominence of common planktonic species and oceanographic changes over the last ~65 million years, and global interactions and teleconnections within the climate system during the last ~800,000 years.
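A bare-bones CCM sketch on a pair of unidirectionally coupled logistic maps (a standard toy system, not the proxy records of the study) looks like this; the embedding dimension, delay, and coupling strength are arbitrary choices:

```python
import numpy as np

def embed(x, E, tau):
    """Time-delay embedding of a scalar series into E dimensions."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(E)])

def ccm_skill(x, y, E=3, tau=1, lib=400):
    """Cross-map x from the shadow manifold of y; returns the correlation
    between x and its cross-mapped estimate (the CCM skill)."""
    My = embed(y, E, tau)[:lib]
    x_t = x[(E - 1) * tau:][:lib]
    preds = np.empty(lib)
    for i in range(lib):
        d = np.linalg.norm(My - My[i], axis=1)
        d[i] = np.inf                          # exclude the target point
        nn = np.argsort(d)[:E + 1]             # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn[0]], 1e-12))
        preds[i] = np.sum(w * x_t[nn]) / np.sum(w)
    return np.corrcoef(x_t, preds)[0, 1]

# Unidirectionally coupled logistic maps: x drives y, not vice versa
n = 500
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.2 * x[t])

# Because x drives y, y's shadow manifold carries information about x,
# and the cross-map skill converges upward as the library length grows
skill_small = ccm_skill(x, y, lib=100)
skill_large = ccm_skill(x, y, lib=450)
```

The convergence of skill with library length is the CCM signature of causation; on proxy records the same loop runs over the embedded palaeoclimate series.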
NASA Astrophysics Data System (ADS)
Sayadi, Taraneh; Schmid, Peter J.; Richecoeur, Franck; Durox, Daniel
2015-03-01
Dynamic mode decomposition (DMD) belongs to a class of data-driven decomposition techniques, which extracts spatial modes of a constant frequency from a given set of numerical or experimental data. Although the modal shapes and frequencies are a direct product of the decomposition technique, the determination of the respective modal amplitudes is non-unique. In this study, we introduce a new algorithm for defining these amplitudes, which is capable of capturing physical growth/decay rates of the modes within a transient signal, something that is otherwise not straightforward with the standard DMD algorithm. In addition, a parametric DMD algorithm is introduced for studying dynamical systems going through a bifurcation. The parametric DMD avoids repeated applications of the decomposition at fixed parameter values by including the bifurcation parameter in the decomposition process. The parametric DMD with amplitude correction is applied to a numerical and experimental data sequence taken from thermo-acoustically unstable systems. Using DMD with amplitude correction, we are able to identify the dominant modes of the transient regime and their respective growth/decay rates leading to the final limit-cycle. In addition, by applying parametrized DMD to images of an oscillating flame, we are able to identify the dominant modes of the bifurcation diagram.
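The baseline exact-DMD factorization that the amplitude-correction and parametrized variants build on can be sketched in a few lines; the two-wave test signal and the rank are invented for illustration, and the amplitude vector `b` here is the standard (non-unique) least-squares choice the paper improves upon:

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition of a snapshot sequence.
    X: (n_space, n_time) data matrix; r: truncation rank.
    Returns eigenvalues, modes, and least-squares amplitudes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / S        # projected operator
    lam, Wvec = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ Wvec  # exact DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # modal amplitudes
    return lam, Phi, b

# Synthetic data: two traveling waves with known frequencies and growth rates
dt = 0.05
xg = np.linspace(0, 10, 200)
tg = np.arange(0, 8, dt)
Xg, Tg = np.meshgrid(xg, tg, indexing='ij')
data = (np.exp(0.1 * Tg) * np.sin(2.3 * Xg - 5.0 * Tg)
        + np.exp(-0.2 * Tg) * np.cos(1.1 * Xg + 2.0 * Tg))

lam, Phi, b = dmd(data, r=4)
growth = np.log(np.abs(lam)) / dt            # continuous-time growth rates
freqs = np.angle(lam) / dt                   # angular frequencies
```

On this clean rank-4 signal the eigenvalues recover the planted growth rates (0.1, -0.2) and frequencies (5, 2) in conjugate pairs; transient and bifurcating data are where the paper's amplitude correction and parametrization come in.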
Time- and power-dependent operation of a parametric spin-wave amplifier
Brächer, T.; Heussner, F.; Pirro, P.; Fischer, T.; Geilen, M.; Heinz, B.; Lägel, B.; Serga, A. A.; Hillebrands, B.
2014-12-08
We present the experimental observation of the localized amplification of externally excited, propagating spin waves in a transversely in-plane magnetized Ni81Fe19 magnonic waveguide by means of parallel pumping. By employing microfocussed Brillouin light scattering spectroscopy, we analyze the dependency of the amplification on the applied pumping power and on the delay between the input spin-wave packet and the pumping pulse. We show that there are two different operation regimes: At large pumping powers, the spin-wave packet needs to enter the amplifier before the pumping is switched on in order to be amplified, while at low powers the spin-wave packet can arrive at any time during the pumping pulse.
Lucero-Acuña, Armando; Guzmán, Roberto
2015-10-15
A mathematical model of drug release that incorporates the simultaneous contributions of initial burst, nanoparticle degradation-relaxation and diffusion was developed and used to effectively describe the release of a kinase inhibitor and anticancer drug, PHT-427. The encapsulation of this drug into PLGA nanoparticles was performed by following the single emulsion-solvent evaporation technique, and the release was determined in phosphate buffer at pH 7.4 and 37 °C. The size of the nanoparticles was in the range of 162-254 nm. The experimental release profiles showed three well-defined phases: an initial fast drug release, followed by a slower nanoparticle degradation-relaxation release, and then a diffusion release phase. The effects of the most relevant controlled-release parameters, such as drug diffusivity, initial burst constant, nanoparticle degradation-relaxation constant, and the time to achieve a maximum rate of drug release, were evaluated by a parametric analysis. The theoretical release studies were corroborated experimentally by evaluating the cytotoxic effectiveness of the inhibitor AKT/PDK1-loaded nanoparticles on BxPC-3 pancreatic cancer cells in vitro. These studies show that the encapsulated AKT/PDK1 inhibitor is more accessible and thus more effective when compared with the drug alone, indicating its potential use in chemotherapeutic applications. PMID:26216413
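The three-phase structure can be sketched as a sum of terms. The functional forms below (first-order burst, logistic degradation-relaxation, leading term of the Fickian sphere series) and every parameter value are illustrative stand-ins, not the authors' fitted model:

```python
import numpy as np

def release_fraction(t, f_b=0.25, k_b=2.0, f_r=0.45, k_r=0.15, t_half=30.0,
                     f_d=0.30, k_d=0.01):
    """Cumulative fraction released = burst + degradation-relaxation + diffusion.
    All functional forms and parameter values are illustrative stand-ins."""
    burst = f_b * (1.0 - np.exp(-k_b * t))                 # first-order burst
    relax = f_r / (1.0 + np.exp(-k_r * (t - t_half)))      # sigmoidal relaxation
    # leading term of the Fickian series for a sphere (D/a^2 folded into k_d);
    # truncating the series leaves a small spurious offset at t = 0
    diff = f_d * (1.0 - (6.0 / np.pi ** 2) * np.exp(-np.pi ** 2 * k_d * t))
    return burst + relax + diff

t = np.linspace(0.0, 200.0, 400)             # time in hours (assumed units)
frac = release_fraction(t)                   # monotone, approaches 1.0
```

A parametric analysis like the paper's amounts to sweeping one rate constant at a time (e.g. `k_r` or `t_half`) and observing how the three phases shift in the release curve.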
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
2009-08-13
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques to parameter optimization of a parametric quantum chemistry method (CATIVIC) was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out in terms of error-function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
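The SA loop being compared can be sketched generically. The toy error function below stands in for the quantum-chemistry error function, and all schedule constants are arbitrary:

```python
import math
import random

def simulated_annealing(f, x0, step=0.2, t0=1.0, cooling=0.995, iters=8000,
                        seed=0):
    """Minimise f over a parameter vector: generic SA loop with Gaussian
    moves and geometric cooling (a sketch, not CATIVIC's optimizer)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    T = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling                           # geometric cooling schedule
    return best, fbest

# Toy stand-in for the error function: squared deviation from target parameters
target = [1.0, -2.0, 0.5]
err = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
p_opt, e_opt = simulated_annealing(err, [0.0, 0.0, 0.0])
```

Simplified-GSA variants mostly change the acceptance rule and cooling schedule in this loop, which is why they can trade accuracy characteristics against computer time.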
Analysis and Parametric Investigation of Active Open Cross Section Thin Wall Beams
NASA Astrophysics Data System (ADS)
Griffiths, James
The static behaviour of active Open Cross Section Thin Wall Beams (OCSTWB) with embedded Active/Macro Fibre Composites (AFCs/MFCs) has been investigated for the purpose of advancing the fundamental theory needed in the development of advanced smart structures. An efficient code that can analyze active OCSTWB using analytical equations has been studied. Various beam examples have been investigated in order to verify this recently developed analytical active OCSTWB analysis tool. The cross sectional stiffness constants and induced force, moments and bimoment predicted by this analytical code have been compared with those predicted by the 2-D finite element beam cross section analysis codes called the Variational Asymptotic Beam Sectional (VABS) analysis and the University of Michigan VABS (UM/VABS). Good agreement was observed between the results obtained from the analytical tool and VABS. The calculated cross sectional stiffness constants and induced force/moments, the constitutive relation and the six intrinsic static equilibrium equations for OCSTWB were all used together in a first-order accurate forward difference scheme in order to determine the average twist and deflections along the beam span. In order to further verify the analytical code, the static behaviour of a number of beam examples was investigated using 3-D Finite Element Analysis (FEA). For a particular cross section, the rigid body twist and displacements were minimized with the displacements of all the nodes in the 3-D FEA model that compose the cross section. This was done for a number of cross sections along the beam span in order to recover the global beam twist and displacement profiles from the 3-D FEA results. The global twist and deflections predicted by the analytical code agreed closely with those predicted by UM/VABS and 3-D FEA. The study was completed by a parametric investigation to determine the boundary conditions and the composite ply lay-ups of the active and passive plies that
Quantum analysis of the nondegenerate optical parametric oscillator with injected signal
Coutinho dos Santos, B.; Dechoum, K.; Khoury, A.Z.; Silva, L.F. da; Olsen, M.K.
2005-09-15
In this paper we study the nondegenerate optical parametric oscillator with injected signal, both analytically and numerically. We develop a perturbation approach which allows us to find approximate analytical solutions, starting from the full equations of motion in the positive-P representation. We demonstrate the regimes of validity of our approximations via comparison with the full stochastic results. We find that, with reasonably low levels of injected signal, the system allows for demonstrations of quantum entanglement and the Einstein-Podolsky-Rosen paradox. In contrast to the normal optical parametric oscillator operating below threshold, these features are demonstrated with relatively intense fields.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations: Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation: relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures: methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
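The offline/online split described in this abstract can be illustrated with a toy sketch. Everything below is invented for illustration (a 1-D diffusion operator, three sample parameters, and a deliberately trivial affine dependence A(mu) = mu * A0), not taken from the paper:

```python
import numpy as np

# Toy offline/online reduced-basis split (illustrative only; the 1-D
# diffusion operator, sample points, and trivial affine dependence
# A(mu) = mu * A0 are invented for this sketch).
n = 200
h = 1.0 / (n + 1)
A0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

# Offline: expensive full-order solves at a few parameter samples span W_N.
snapshots = [np.linalg.solve(mu * A0, f) for mu in (0.5, 1.0, 2.0)]
W, _ = np.linalg.qr(np.array(snapshots).T)   # orthonormal basis for W_N
A0_N = W.T @ A0 @ W                          # small, parameter-independent
f_N = W.T @ f

# Online: for a new parameter, assemble and solve only the N x N system.
mu = 1.37
u_N = W @ np.linalg.solve(mu * A0_N, f_N)

u_full = np.linalg.solve(mu * A0, f)         # reference full-order solve
print("max reduced-basis error:", float(np.abs(u_N - u_full).max()))
```

Because the parameter enters here as a single multiplicative factor, the snapshot space contains the exact solution and the reduced answer matches the full solve to roundoff; in the genuinely affine case A(mu) = sum of Theta_q(mu) A_q, one reduced matrix per term is precomputed offline in the same way.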
NASA Astrophysics Data System (ADS)
McKenna, C.; Berx, B.; Austin, W. E. N.
2016-01-01
The Faroe-Shetland Channel (FSC) is an important conduit for the poleward flow of Atlantic water towards the Nordic Seas and, as such, it plays an integral part in the Atlantic's thermohaline circulation. Mixing processes in the FSC are thought to result in an exchange of properties between the channel's inflow and outflow, with wider implications for this circulation; the nature of this mixing in the FSC is, however, uncertain. To constrain this uncertainty, we used a novel empirical method known as Parametric Optimum Multi-Parameter (POMP) analysis to objectively quantify the distribution of water masses in the channel in May 2013. This was achieved by using a combination of temperature and salinity measurements, as well as recently available nutrient and δ18O measurements. The outcomes of POMP analysis are in good agreement with established literature and demonstrate the benefits of representing all five water masses in the FSC. In particular, our results show the recirculation of Modified North Atlantic Water in the surface layers, and the pathways of Norwegian Sea Arctic Intermediate Water and Norwegian Sea Deep Water from north to south for the first time. In a final step, we apply the mixing fractions from POMP analysis to decompose the volume transport through the FSC by water mass. Despite a number of caveats, our study suggests that improved estimates of the volume transport of Atlantic inflow towards the Arctic and, thus, the associated poleward fluxes of salt and heat are possible. A new prospect to more accurately monitor the strength of the FSC branch of the thermohaline circulation emerges from this study.
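The mixing-fraction inversion at the heart of OMP-type methods like the POMP analysis above can be sketched as a small least-squares problem. The water masses, tracer values, and weight below are invented; a real POMP analysis additionally weights individual tracers and enforces non-negativity of the fractions:

```python
import numpy as np

# Hypothetical illustration of an OMP-style mixing inversion: each water
# mass has characteristic tracer values (e.g. temperature, salinity,
# nutrient); an observed sample is modelled as a linear mixture whose
# fractions sum to one. All names and numbers are invented.
source_params = np.array([
    [9.5, 35.4, 10.0],   # water mass A: T (degC), S, nutrient
    [2.0, 34.9, 13.0],   # water mass B
    [-0.5, 34.9, 14.5],  # water mass C
])

def mix_fractions(obs, sources, mass_weight=100.0):
    """Least-squares mixing fractions with a heavily weighted
    mass-conservation row (fractions sum to one)."""
    A = np.vstack([sources.T, mass_weight * np.ones(len(sources))])
    b = np.append(obs, mass_weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

true_f = np.array([0.6, 0.3, 0.1])
obs = source_params.T @ true_f        # synthetic observation
f = mix_fractions(obs, source_params)
print(np.round(f, 3))                 # recovers [0.6, 0.3, 0.1]
```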
[Non-Parametric Analysis of Radiation Risks of Mortality among Chernobyl Clean-Up Workers].
Gorsky, A I; Maksioutov, M A; Tumanov, K A; Shchukina, N V; Chekin, S Yu; Ivanov, V K
2016-01-01
Analysis of the relationship between dose and mortality from cancer and circulatory diseases in the cohort of Chernobyl clean-up workers, based on data from the National Radiation and Epidemiological Registry, was performed. Medical and dosimetry information on the clean-up workers, males who received radiation doses from April 26, 1986 to April 26, 1987, accumulated from 1992 to 2012, was used for the analysis. The total size of the cohort was 42929 people; 12731 deaths were registered in the cohort, among them 1893 deaths from solid cancers and 5230 deaths from circulatory diseases. The average age of the workers was 39 years in 1992 and the mean dose was 164 mGy. The dose-effect relationship was estimated with the use of non-parametric survival analysis with regard to competing risks of mortality. The risks were estimated in 6 dose groups of similar size (1-70, 70-130, 130-190, 190-210, 210-230 and 230-1000 mGy). The group "1-70 mGy" was used as control. The estimated dose-effect relationship for cancers and circulatory diseases is described approximately by a linear model; the coefficient of determination (the proportion of variability explained by the linear model) was 23-25% for cancers and 2-13% for circulatory diseases. The slope coefficient of the dose-effect relationship normalized to 1 Gy for the ratio of risks for cancers in the linear model was 0.47 (95% CI: -0.77, 1.71), and for circulatory diseases it was 0.22 (95% CI: -0.58, 1.02). The risk coefficient (slope coefficient of excess mortality at a dose of 1 Gy) for solid cancers was 1.94 (95% CI: -3.10, 7.00) × 10⁻² and for circulatory diseases it was 0.67 (95% CI: -9.61, 11.00) × 10⁻². 137 deaths from radiation-induced cancers and 47 deaths from circulatory diseases were registered during the follow-up period. PMID:27534064
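The grouped dose-response summary described above can be sketched as follows. The deaths and person-years are invented numbers, not registry data, and a real analysis would use survival methods with competing risks rather than crude rates:

```python
import numpy as np

# Hedged sketch of a grouped dose-response fit: crude rates per dose
# group, risk ratios relative to the lowest-dose (control) group, then a
# linear fit of risk ratio against mean dose in Gy. All numbers invented.
mean_dose_gy = np.array([0.035, 0.10, 0.16, 0.20, 0.22, 0.40])
deaths       = np.array([290, 300, 310, 305, 315, 340])
person_years = np.array([120_000] * 6)

rates = deaths / person_years
risk_ratio = rates / rates[0]                 # lowest-dose group as control
slope, intercept = np.polyfit(mean_dose_gy, risk_ratio, 1)
print("slope per Gy:", round(float(slope), 2))
```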
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…
Rasch analysis for the evaluation of rank of student response time in multiple choice examinations.
Thompson, James J; Yang, Tong; Chauvin, Sheila W
2013-01-01
The availability of computerized testing has broadened the scope of person assessment beyond the usual accuracy-ability domain to include response time analyses. Because there are contexts in which speed is important, e.g. medical practice, it is important to develop tools by which individuals can be evaluated for speed. In this paper, the ability of Rasch measurement to convert ordinal nonparametric rankings of speed to measures is examined and compared to similar measures derived from parametric analysis of response times (pace) and semi-parametric logarithmic time-scaling procedures. Assuming that similar spans of the measures were used, non-parametric methods of raw ranking or percentile-ranking of persons by questions gave statistically acceptable person estimates of speed virtually identical to the parametric or semi-parametric methods. Because no assumptions were made about the underlying time distributions with ranking, generality of conclusions was enhanced. The main drawbacks of the non-parametric ranking procedures were the lack of information on question duration and the overall assignment by the model of variance to the person by question interaction. PMID:24064578
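The non-parametric ranking step described above can be sketched in a few lines. The response times are invented, and mid-rank percentiles are just one common convention:

```python
import numpy as np

# Minimal sketch: convert raw response times (seconds, invented) into
# within-question percentile ranks, so speed scores are distribution-free.
times = np.array([
    [12.0, 30.0, 18.0],   # person 0's times on questions 0..2
    [25.0, 22.0, 40.0],   # person 1
    [8.0,  55.0, 15.0],   # person 2
])

def percentile_ranks(t):
    order = t.argsort(axis=0).argsort(axis=0)   # 0 = fastest per question
    return 100.0 * (order + 0.5) / t.shape[0]   # mid-rank percentiles

speed = 100.0 - percentile_ranks(times).mean(axis=1)  # higher = faster
print(np.round(speed, 1))
```

Because only ranks enter, no assumption about the shape of the underlying time distribution is needed, which is the generality advantage the abstract notes.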
Sava, H; Durand, L G; Cloutier, G
1999-05-01
To achieve an accurate estimation of the instantaneous turbulent velocity fluctuations downstream of prosthetic heart valves in vivo, the variability of the spectral method used to measure the mean frequency shift of the Doppler signal (i.e. the Doppler velocity) should be minimised. This paper investigates the performance of various short-time spectral parametric methods such as the short-time Fourier transform, autoregressive modelling based on two different approaches, autoregressive moving average modelling based on the Steiglitz-McBride method, and Prony's spectral method. A simulated Doppler signal was used to evaluate the performance of the above mentioned spectral methods and Gaussian noise was added to obtain a set of signals with various signal-to-noise ratios. Two different parameters were used to evaluate the performance of each method in terms of variability and accurate matching of the theoretical Doppler mean instantaneous frequency variation within the cardiac cycle. Results show that autoregressive modelling outperforms the other investigated spectral techniques for window lengths varying between 1 and 10 ms. Among the autoregressive algorithms implemented, it is shown that the maximum entropy method based on a block data processing technique gives the best results for a signal-to-noise ratio of 20 dB. However, at 10 and 0 dB, the Levinson-Durbin algorithm surpasses the performance of the maximum entropy method. It is expected that the intrinsic variance of the spectral methods can be an important source of error for the estimation of the turbulence intensity. The range of this error varies from 0.38% to 24% depending on the parameters of the spectral method and the signal-to-noise ratio. PMID:10505377
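As a concrete sketch of the autoregressive route compared above, the Levinson-Durbin recursion solves the Yule-Walker equations from estimated autocorrelations, and the AR spectrum follows from the model coefficients. The signal, model order, and grid below are invented for illustration:

```python
import numpy as np

# Levinson-Durbin solution of the Yule-Walker equations, then the AR
# power spectrum. Test signal: a 0.1 cycles/sample tone in light noise.
def levinson_durbin(r, order):
    """AR coefficients a (a[0] = 1) and noise variance from r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        lam = -np.dot(a[:k], r[k:0:-1]) / err     # reflection coefficient
        a[:k + 1] += lam * a[:k + 1][::-1]        # order-k update
        err *= (1.0 - lam * lam)
    return a, err

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.1 * np.arange(512)) + 0.1 * rng.standard_normal(512)
r = np.correlate(x, x, "full")[511:511 + 9] / 512  # biased autocorrelation
a, noise_var = levinson_durbin(r, 8)

freqs = np.linspace(0, 0.5, 256)                   # cycles/sample
denom = np.abs(np.exp(-2j * np.pi * np.outer(freqs, np.arange(9))) @ a) ** 2
psd = noise_var / denom
print("AR spectral peak at", float(freqs[psd.argmax()]))  # near 0.1
```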
Parametric number covariance in quantum chaotic spectra
NASA Astrophysics Data System (ADS)
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
NASA Astrophysics Data System (ADS)
Dang, Wei; Mao, Pengcheng; Weng, Yuxiang
2013-07-01
We report an improved setup of femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) with a 210 fs temporal response. The system employs a Cassegrain objective to collect and focus fluorescence photons, which eliminates interference from the coherent photons in the fluorescence amplification by temporally separating the coherent photons from the fluorescence photons. The gain factor of the Cassegrain objective-assisted FNOPAS is characterized as 1.24 × 10⁵ for Rhodamine 6G. Spectral corrections have been performed on the transient fluorescence spectra of Rhodamine 6G and Rhodamine 640 in ethanol by using an intrinsic calibration curve derived from the spectrum of superfluorescence, which is generated from the amplification of the vacuum quantum noise. The validity of the spectral correction is illustrated by comparisons of spectral shape and peak wavelength between the corrected transient fluorescence spectra of these two dyes acquired by FNOPAS and their corresponding standard reference spectra collected by a commercial streak camera. The transient fluorescence spectra of Rhodamine 6G were acquired in an optimized phase-match condition, which gives a deviation in peak wavelength between the retrieved spectrum and the reference spectrum of 1.0 nm, while those of Rhodamine 640 were collected in a non-optimized phase-match condition, leading to a deviation in the range of 1.0-3.0 nm. Our results indicate that the improved FNOPAS can be a reliable tool for the measurement of transient fluorescence spectra, given its high temporal resolution and faithfully corrected spectra.
Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony
2009-01-01
Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space and the complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated are automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours is used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
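A minimal greedy sketch of 2-factor (pairwise) combinatorial test generation of the kind described above. Parameter names and values are invented, and production covering-array tools use more sophisticated algorithms:

```python
import itertools

# Greedy pairwise (2-way) test generation: pick full parameter
# assignments until every pair of parameter values appears in at least
# one test. Parameter names and values are invented for illustration.
params = {
    "thrust": ["low", "mid", "high"],
    "mass":   ["light", "heavy"],
    "wind":   ["calm", "gusty"],
}

def pairwise_tests(params):
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in itertools.combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for combo in itertools.product(*params.values()):
            test = dict(zip(names, combo))
            gain = sum(1 for (a, va), (b, vb) in uncovered
                       if test[a] == va and test[b] == vb)
            if gain > best_gain:
                best, best_gain = test, gain
        uncovered -= {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                      if best[a] == va and best[b] == vb}
        tests.append(best)
    return tests

suite = pairwise_tests(params)
print(len(suite), "tests cover all pairs; exhaustive would need", 3 * 2 * 2)
```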
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
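The robust-regression idea above can be sketched with iteratively reweighted least squares using Huber weights, which downweights outlier observations instead of letting them drive the fit. The data, the simple MAD scale estimate, and the demo setup are illustrative; the paper's implementation details may differ:

```python
import numpy as np

# Huber IRLS sketch: ordinary least squares is pulled off target by a
# few gross outliers; reweighting with Huber weights is not. Data and
# outlier pattern are invented.
def huber_irls(X, y, c=1.345, iters=50):
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted LS step
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weights
    return beta

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(60)  # true model: 2 + 3x
y[:5] += 10.0                                       # five gross outliers
X = np.column_stack([np.ones_like(x), x])

ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print("OLS:", np.round(ols, 2), " robust:", np.round(rob, 2))
```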
Yu, Jintao; Liang, Yi; Thompson, Simon; Cull, Grant; Wang, Lin
2014-01-01
The aim of the study was to establish a parametric transfer function to describe the relationship between ocular perfusion pressure (OPP) and blood flow (BF) in the optic nerve head (ONH). A third-order parametric theoretical model was proposed to describe the ONH OPP-BF relationship within the lower OPP range of the autoregulation curve (< 80 mmHg) based on experimentally induced BF response to a rapid intraocular pressure (IOP) increase in 6 rhesus monkeys. The theoretical and actual data fitted well and suggest that this parametric third-order transfer function can effectively describe both the linear and nonlinear feature in dynamic and static autoregulation in the ONH within the OPP range studied. It shows that the BF autoregulation fully functions when the OPP was > 40 mmHg and becomes incomplete when the OPP was < 40 mmHg. This model may be used to help investigating the features of autoregulation in the ONH under different experimental conditions. PMID:24665355
Brace, Christopher L.
2011-01-01
Purpose: Design and validate an efficient dual-slot coaxial microwave ablation antenna that produces an approximately spherical heating pattern to match the shape of most abdominal and pulmonary tumor targets.Methods: A dual-slot antenna geometry was utilized for this study. Permutations of the antenna geometry using proximal and distal slot widths from 1 to 10 mm separated by 1–20 mm were analyzed using finite-element electromagnetic simulations. From this series, the most optimal antenna geometry was selected using a two-term sigmoidal objective function to minimize antenna reflection coefficient and maximize the diameter-to-length aspect ratio of heat generation. Sensitivities to variations in tissue properties and insertion depth were also evaluated in numerical models. The most optimal dual-slot geometry of the parametric analysis was then fabricated from semirigid coaxial cable. Antenna reflection coefficients at various insertion depths were recorded in ex vivo bovine livers and compared to numerical results. Ablation zones were then created by applying 50 W for 2–10 min in simulations and ex vivo livers. Mean zone diameter, length, aspect ratio, and reflection coefficients before and after heating were then compared to a conventional monopole antenna using ANOVA with post-hoc t-tests. Statistical significance was indicated for P < 0.05.Results: Antenna performance was highly sensitive to dual-slot geometry. The best-performing designs utilized a proximal slot width of 1 mm, distal slot width of 4 mm ± 1 mm and separation of 8 mm ± 1 mm. These designs were characterized by an active choking mechanism that focused heating to the distal tip of the antenna. A dual-band resonance was observed in the most optimal design, with a minimum reflection coefficient of −20.9 dB at 2.45 and 1.25 GHz. Total operating bandwidth was greater than 1 GHz, but the desired heating pattern was achieved only near 2.45 GHz. As a result, antenna performance was
NASA Technical Reports Server (NTRS)
Halt, D. W.; Harris, W. L.
1982-01-01
The results reported here are based on applying the method of parametric differentiation (MPD) to transform the nonlinear differential equation governing small-disturbance transonic flow to a linear equation. Implicit approximate factorization and monotone methods were used to accelerate convergence of the linear problem by an order of magnitude over successive line over-relaxation. The relative merits of using MPD are discussed in comparison to conventional small-disturbance applications. Several MPD analyses are performed on an array of airfoils. A design procedure utilizing MPD is discussed and demonstrated for two nonlifting cases.
Non-parametric trend analysis of water quality data of rivers in Kansas
NASA Astrophysics Data System (ADS)
Yu, Yun-Sheng; Zou, Shimin; Whittemore, Donald
1993-09-01
Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basin-wide trends are non-homogeneous.
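One of the standard non-parametric trend methods for water-quality series like these is the Mann-Kendall test. A minimal version (invented yearly chloride data, no tie or serial-correlation correction) looks like this:

```python
import math

# Minimal Mann-Kendall trend test: S counts concordant minus discordant
# pairs; Z standardizes S under the no-trend null. Data invented.
def mann_kendall(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0        # no-ties variance
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var_s)  # continuity correction
    return s, z

chloride = [112, 108, 105, 107, 101, 99, 96, 97, 93, 90]  # mg/L, yearly
s, z = mann_kendall(chloride)
print(s, round(z, 2))  # → -41 -3.58, a significant downward trend
```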
Parametric thermodynamic analysis of closed-cycle gas-laser operation in space
NASA Technical Reports Server (NTRS)
Burns, R. K.
1974-01-01
Cycle efficiency and radiator area required were calculated for thermally and electrically pumped lasers operating in closed cycles with a compressor and the required heat exchangers. A thermally pumped laser included within a Brayton cycle was also analyzed. Performance of all components, including the laser, was parametrically varied. For the thermally pumped laser the cycle efficiencies range below 10 percent and are very sensitive to the high-pressure losses associated with the supersonic diffuser required at the laser cavity exit. The efficiencies predicted for the electrically pumped laser cycles range slightly higher, but radiator area also tends to be larger.
Permutations and time series analysis.
Cánovas, Jose S; Guillamón, Antonio
2009-12-01
The main aim of this paper is to show how permutations can be useful in the study of time series analysis. In particular, we introduce a test for checking the independence of a time series which is based on the number of admissible permutations on it. The main improvement in our test is that we are able to give a theoretical distribution for independent time series. PMID:20059199
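The idea of counting admissible permutations can be sketched with ordinal patterns: slide a window of length m along the series and record the permutation that sorts each window. An i.i.d. series eventually visits all m! patterns, while a deterministic series exhibits "forbidden" patterns. The window length and series below are illustrative; the paper's test statistic and its distribution are more refined than this raw count:

```python
import random

# Count the distinct ordinal patterns (admissible permutations) of
# length m appearing in a series. Series choices are illustrative.
def ordinal_patterns(x, m):
    return {tuple(sorted(range(m), key=lambda k: x[i + k]))
            for i in range(len(x) - m + 1)}

random.seed(1)
iid = [random.random() for _ in range(2000)]    # independent series
logistic = [0.4]                                # deterministic chaos
for _ in range(2000):
    logistic.append(4 * logistic[-1] * (1 - logistic[-1]))

n_iid = len(ordinal_patterns(iid, 3))
n_det = len(ordinal_patterns(logistic, 3))
print(n_iid, n_det)  # the chaotic map never shows the decreasing pattern
```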
Parametric Analysis of Cyclic Phase Change and Energy Storage in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Hall, Carsie A., III; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.
1997-01-01
A parametric study on cyclic melting and freezing of an encapsulated phase change material (PCM), integrated into a solar heat receiver, has been performed. The cyclic nature of the present melt/freeze problem is relevant to latent heat thermal energy storage (LHTES) systems used to power solar Brayton engines in microgravity environments. Specifically, a physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) project was developed. Multi-conjugate effects such as the convective fluid flow of a low-Prandtl-number fluid, coupled with thermal conduction in the phase change material, containment tube and working fluid conduit, were accounted for in the model. A single-band thermal radiation model was also included to quantify reradiative energy exchange inside the receiver and losses through the aperture. The eutectic LiF-CaF2 was used as the phase change material (PCM) and a mixture of He/Xe was used as the working fluid coolant. A modified version of the computer code HOTTube was used to generate results in the two-phase regime. Results indicate that receiver gas exit temperatures are highly sensitive to parametric changes in receiver gas inlet temperature and receiver heat input.
Robust non-parametric one-sample tests for the analysis of recurrent events.
Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia
2010-12-30
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. PMID:21170908
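The standardized observed-versus-expected distance described above can be sketched as follows. The event counts, follow-up times, and reference rate are invented, and the paper's tests use more general weights and robust variance estimates than this Poisson-motivated version:

```python
import math

# Hedged sketch of a one-sample recurrent-events comparison: observed
# event total versus the total expected under a reference rate, with a
# Poisson-style standardization. All numbers are invented.
def one_sample_recurrent(events_per_subject, followup_years, ref_rate):
    observed = sum(events_per_subject)
    expected = ref_rate * sum(followup_years)
    z = (observed - expected) / math.sqrt(expected)
    return observed, expected, z

events = [3, 0, 2, 5, 1, 4, 2, 3]                  # recurrences per subject
years  = [2.0, 1.5, 2.0, 2.5, 1.0, 2.0, 1.5, 2.0]  # follow-up per subject
o, e, z = one_sample_recurrent(events, years, ref_rate=0.8)  # 0.8/yr ref
print(o, e, round(z, 2))  # → 20 11.6 2.47
```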
A parametric shell analysis of the shuttle 51-L SRB AFT field joint
NASA Technical Reports Server (NTRS)
Davis, Randall C.; Bowman, Lynn M.; Hughes, Robert M., IV; Jackson, Brian J.
1990-01-01
Following the Shuttle 51-L accident, an investigation was conducted to determine the cause of the failure. Investigators at the Langley Research Center focused attention on the structural behavior of the field joints with O-ring seals in the steel solid rocket booster (SRB) cases. The shell-of-revolution computer program BOSOR4 was used to model the aft field joint of the solid rocket booster case. The shell model consisted of the SRB wall and joint geometry present during the Shuttle 51-L flight. A parametric study of the joint was performed on the geometry, including joint clearances and contact between the joint components, and on the loads, both induced and applied. In addition, combinations of geometry and loads were evaluated. The analytical results from the parametric study showed that contact between the joint components was a primary contributor to allowing hot gases to blow by the O-rings. Based upon an understanding of the original joint behavior, various proposed joint modifications are shown and analyzed in order to provide additional insight and information. Finally, experimental results from a hydrostatic pressurization of a test rocket booster case to study joint motion are presented and verified analytically.
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
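The linear dynamical system view of personnel flow can be sketched as a simple transition-matrix iteration. The job functions and transition likelihoods below are invented:

```python
import numpy as np

# Staff counts per job function evolve as x_{t+1} = T x_t, where T holds
# invented job-function-transition likelihoods (columns are "from",
# rows are "to"; each column sums to one, so headcount is conserved).
T = np.array([
    [0.85, 0.10, 0.05],   # into/staying in cost analysis
    [0.10, 0.80, 0.10],   # engineering support
    [0.05, 0.10, 0.85],   # management
])

x = np.array([20.0, 15.0, 5.0])   # initial staffing profile
for _ in range(12):               # simulate 12 periods
    x = T @ x
print(np.round(x, 1), "total:", round(float(x.sum()), 1))  # total stays 40.0
```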
NASA Astrophysics Data System (ADS)
Cosmidis, J.; Heggy, E.; Clifford, S. M.
2007-12-01
Laboratory dielectric characterization of ice-dust mixtures is crucial for the quantitative analysis of radar sounding data, as in the case of the MARSIS and SHARAD experiments. Understanding the range of the dielectric properties of the Martian northern polar layered deposits (NPLD), as well as their geographical and vertical distribution, results in better topographical mapping of the basement material below the northern polar cap and helps constrain the ambiguities in the identification of layering and any potential subglacial melting. In order to achieve this task, we constructed first-order modeled maps of the surface dielectric properties of the NPLD. We first used the recent Mars Global Surveyor Thermal Emission Spectrometer (TES) thermal inertia observations to derive a map of the dust mass fraction in the ice at the top of the permanent cap. Then we used parametric laboratory measurements of the dielectric properties of Martian polar ice analogs at various temperatures, radar frequencies, and mass fractions and compositions of dust to obtain the parametric dielectric maps. Thermal inertia maps have been derived from recent TES observations of the surface temperatures of Mars taken over three Mars-years, from orbit 1583 to 24346. Laboratory dielectric characterization of ice-dust mixtures has been performed using TES dust calibration samples provided by the ARES group at NASA JSC. Our maps suggest that the surface dielectric properties of the northern polar cap range from 2.72 to 3.23 in the 2-20 MHz band for a dust inclusion typical of Martian basalt. Parametric maps of loss tangent and penetration depth for several dust types will be presented at the conference.
Eberhard, B.J.; Harbour, J.R.; Plodinec, M.J.
1994-06-01
As part of the DWPF Startup Test Program, a parametric study has been performed to determine a range of welder operating parameters which will produce acceptable final welds for canistered waste forms. The parametric window of acceptable welds defined by this study is 90,000 ± 15,000 lb of force, 248,000 ± 22,000 amps of current, and 95 ± 15 cycles (at 60 cps) for the time of application of the current.
Plodinec, M.J.
1998-11-20
After being filled with glass, DWPF canistered waste forms will be welded closed using an upset resistance welding process. This final closure weld must be leaktight, and must remain so during extended storage at SRS. As part of the DWPF Startup Test Program, a parametric study (DWPF-WP-24) has been performed to determine a range of welder operating parameters which will produce acceptable welds. The parametric window of acceptable welds defined by this study is 90,000 ± 15,000 lb of force, 248,000 ± 22,000 amps of current, and 95 ± 15 cycles for the time of application of the current.
Mao, Pengcheng; Wang, Zhuan; Dang, Wei; Weng, Yuxiang
2015-12-15
Superfluorescence appears as an intense background in femtosecond time-resolved fluorescence noncollinear optical parametric amplification spectroscopy, which severely interferes with the reliable acquisition of the time-resolved fluorescence spectra, especially for an optically dilute sample. Superfluorescence originates from the optical amplification of the vacuum quantum noise, which is inevitably concomitant with the amplified fluorescence photons during the optical parametric amplification process. Here, we report the development of a femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectrometer assisted with a 32-channel lock-in amplifier for efficient rejection of the superfluorescence background. With this spectrometer, the superfluorescence background signal can be significantly reduced to 1/300-1/100 when the seeding fluorescence is modulated. An integrated 32-bundle optical fiber is used as a linear array light receiver connected to 32 photodiodes in one-to-one mode, and the photodiodes are further coupled to a home-built 32-channel synchronous digital lock-in amplifier. As an implementation, time-resolved fluorescence spectra for rhodamine 6G dye in ethanol solution at an optically dilute concentration of 10⁻⁵ M excited at 510 nm with an excitation intensity of 70 nJ/pulse have been successfully recorded, and the detection limit at a pump intensity of 60 μJ/pulse was determined as about 13 photons/pulse. A concentration-dependent redshift starting at 30 ps after excitation has also been observed in the time-resolved fluorescence spectra of this dye, which can be attributed to the formation of the excimer at higher concentration, while the blueshift at earlier times, within 10 ps, is attributed to the solvation process.
Zhang, W.; Casademunt, J.; Vinals, J.
1993-12-01
A stochastic formulation is introduced to study the large amplitude and high-frequency components of residual accelerations found in a typical microgravity environment (or g-jitter). The linear response of a fluid surface to such residual accelerations is discussed in detail. In the underdamped limit, the analysis of the stability of a free fluid surface can be reduced to studying the equation of the parametric harmonic oscillator for each of the Fourier components of the surface displacement. A narrow-band noise is introduced to describe a realistic spectrum of accelerations that interpolates between white noise and monochromatic noise. Analytic results for the stability of the second moments of the stochastic parametric oscillator are presented in the limits of low-frequency oscillations and near the region of subharmonic parametric resonance. Based upon simple physical considerations, an explicit form of the stability boundary valid for arbitrary frequencies is proposed, which interpolates smoothly between the low-frequency and near-resonance limits with no adjustable parameter, and extrapolates to higher frequencies. A second-order numerical algorithm has also been implemented to simulate the parametric stochastic oscillator driven with narrow-band noise. The simulations are in excellent agreement with our theoretical predictions for a very wide range of noise parameters. The validity of previous approximate theories for the particular case of Ornstein-Uhlenbeck noise is also checked numerically. Finally, the results obtained are applied to typical microgravity conditions to determine the characteristic wavelength for instability of a fluid surface as a function of the intensity of residual acceleration and its spectral width.
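The reduction to a parametrically driven oscillator can be illustrated numerically. The sketch below uses a first-order Euler-Maruyama scheme with Ornstein-Uhlenbeck noise (the paper's algorithm is second-order; all parameter values here are illustrative assumptions):

```python
import numpy as np

def simulate_parametric_oscillator(gamma=0.1, omega0=1.0, noise_amp=0.5,
                                   tau=5.0, dt=1e-3, n_steps=50_000, seed=1):
    """Euler-Maruyama integration of x'' + gamma*x' + omega0^2*(1 + xi)*x = 0,
    where xi(t) is an Ornstein-Uhlenbeck process with correlation time tau
    (a narrow-band-like drive; a first-order sketch, unlike the paper's scheme)."""
    rng = np.random.default_rng(seed)
    x, v, xi = 1.0, 0.0, 0.0
    traj = np.empty(n_steps)
    for i in range(n_steps):
        a = -gamma * v - omega0**2 * (1.0 + xi) * x
        x += v * dt
        v += a * dt
        # OU update: mean reversion plus noise with stationary std = noise_amp
        xi += (-xi / tau) * dt + noise_amp * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
        traj[i] = x
    return traj

traj = simulate_parametric_oscillator()
print(np.mean(traj**2))  # second moment of the displacement over the run
```

Sweeping `noise_amp` and `tau` and monitoring the growth of the second moment is one crude way to locate the stability boundary discussed above.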
NASA Astrophysics Data System (ADS)
Allan, Alasdair
2014-06-01
FROG performs time series analysis and display. It provides a simple user interface for astronomers wanting to do time-domain astrophysics but still offers the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulation of time series. Among other things, the user can combine individual time series, detrend series (multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.
NASA Technical Reports Server (NTRS)
To, Wing H.
2005-01-01
Quantum optical experiments require all the components involved to be extremely stable relative to each other. This stability can be measured using an interferometric experiment. A pair of coherent photons produced by parametric down-conversion can be chosen to be orthogonally polarized initially. By rotating the polarization of one of the wave packets, they can be recombined at a beam splitter such that interference will occur. Theoretically, the interference creates four terms in the wave function: two in which both photons go to the same detector, and two in which the photons go to different detectors. However, the latter two terms cancel each other out, so under ideal conditions no photons arrive at the two detectors simultaneously. The stability of the test-bed can then be inferred from the dependence of the coincidence count on the rotation angle.
Open cycle OTEC thermal-hydraulic systems analysis and parametric studies
NASA Astrophysics Data System (ADS)
Parsons, B.; Bharathan, D.; Althof, J.
1984-06-01
An analytic thermohydraulic systems model of the power cycle and seawater supply systems for an open cycle ocean thermal energy conversion (OTEC) plant has been developed that allows ready examination of the effects of system and component operating points on plant size and parasitic power requirements. This paper presents the results of three parametric studies on the effects of system temperature distribution, plant gross electric capacity, and the allowable seawater velocity in the supply and discharge pipes. The paper also briefly discusses the assumptions and equations used in the model and the state-of-the-art component limitations. The model provides a useful tool for an OTEC plant designer to evaluate system trade-offs and define component interactions and performance.
A parametric study of supersonic laminar flow for swept wings using linear stability analysis
NASA Technical Reports Server (NTRS)
Cummings, Russell M.; Garcia, Joseph A.; Tu, Eugene L.
1995-01-01
A parametric study to predict the extent of laminar flow on the upper surface of a generic swept-back wing (NACA 64A010 airfoil section) at supersonic speeds was conducted. The results were obtained by using surface pressure predictions from an Euler/Navier-Stokes computational fluid dynamics code coupled with a boundary layer code, which predicts detailed boundary layer profiles, and finally with a linear stability code to determine the extent of laminar flow. The parameters addressed are Reynolds number, angle of attack, and leading-edge wing sweep. The results of this study show that an increase in angle of attack, for specific Reynolds numbers, can actually delay transition. Therefore, higher lift capability, caused by the increased angle of attack, as well as a reduction in viscous drag due to the delay in transition is possible for certain flight conditions.
Parametric modeling for quantitative analysis of pulmonary structure to function relationships
NASA Astrophysics Data System (ADS)
Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.
2005-04-01
While lung anatomy is well understood, pulmonary structure-to-function relationships such as the complex elastic deformation of the lung during respiration are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability of the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration that correlate high-resolution anatomy to underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifest as an alteration in the normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial methods which quantitatively illustrates detailed anatomic structure to pulmonary function relationships impossible for translational methods to provide.
Accelerating pulsar timing data analysis
NASA Astrophysics Data System (ADS)
van Haasteren, Rutger
2013-02-01
The analysis of pulsar timing data, especially in pulsar timing array (PTA) projects, has encountered practical difficulties: evaluating the likelihood and/or correlation-based statistics can become prohibitively computationally expensive for large data sets. In situations where a stochastic signal of interest has a power spectral density that dominates the noise in a limited bandwidth of the total frequency domain (e.g. the isotropic background of gravitational waves), a linear transformation exists that transforms the timing residuals to a basis in which virtually all the information about the stochastic signal of interest is contained in a small fraction of basis vectors. By only considering such a small subset of these 'generalized residuals', the dimensionality of the data analysis problem is greatly reduced, which can cause a large speedup in the evaluation of the likelihood: the ABC-method (Acceleration By Compression). The compression fidelity, calculable with crude estimates of the signal and noise, can be used to determine how far a data set can be compressed without significant loss of information. Both direct tests on the likelihood, and Bayesian analysis of mock data, show that the signal can be recovered as well as with an analysis of uncompressed data. In the analysis of International PTA Mock Data Challenge data sets, speedups of three orders of magnitude are demonstrated. For realistic PTA data sets the acceleration may become greater than six orders of magnitude due to the low signal-to-noise ratio.
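The compression step can be sketched as projecting the residuals onto the leading eigenvectors of a crude signal-covariance estimate, keeping just enough basis vectors to reach a target fidelity (the function and toy covariance below are illustrative, not the paper's implementation):

```python
import numpy as np

def compress_residuals(residuals, signal_cov, fidelity=0.99):
    """Project residuals onto the leading eigenvectors of a crude estimate of
    the signal covariance, keeping just enough to reach the target fidelity."""
    evals, evecs = np.linalg.eigh(signal_cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    frac = np.cumsum(evals) / np.sum(evals)
    k = int(np.searchsorted(frac, fidelity)) + 1       # smallest basis reaching fidelity
    return evecs[:, :k].T @ residuals, k

# Toy example: a long-correlation ("red") signal covariance on 200 epochs.
n = 200
t = np.arange(n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]) / 50.0)
resid = np.random.default_rng(2).multivariate_normal(np.zeros(n), cov)
compressed, k = compress_residuals(resid, cov)
print(k, n)  # far fewer generalized residuals than data points
```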
Hu, Leland S.; Ning, Shuluo; Eschbacher, Jennifer M.; Gaw, Nathan; Dueck, Amylou C.; Smith, Kris A.; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J.; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O’Neill, Brian P.; Elmquist, William; Baxter, Leslie C.; Gao, Fei; Frakes, David; Karis, John P.; Zwart, Christine; Swanson, Kristin R.; Sarkaria, Jann; Wu, Teresa
2015-01-01
Background: Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets the enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within the surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. Methods: We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region-of-interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. Results: We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Conclusion: Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets. PMID:26599106
2013-01-01
Background: Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. Results: We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as "pathwise". The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. Conclusions: As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the
NASA Technical Reports Server (NTRS)
Housner, J. M.; Stein, M.
1975-01-01
A computer program is presented which was developed for the combined compression and shear of stiffened, variable-thickness, orthotropic composite panels on discrete springs. Boundary conditions are general and include elastic boundary restraints. Buckling solutions are obtained by using a newly developed trigonometric finite difference procedure which improves the solution convergence rate over conventional finite difference methods. The classical general shear buckling results, which exist only for simply supported panels over a limited range of orthotropic properties, were extended to the complete range of these properties for simply supported panels and, in addition, to the complete range of orthotropic properties for clamped panels. The program was also applied to parametric studies which examine the effect of filament orientation upon the buckling of graphite-epoxy panels. These studies included an examination of the filament orientations which yield maximum shear or compressive buckling strength for panels having all four edges simply supported or clamped, over a wide range of aspect ratios. Panels with such orientations had higher buckling loads than comparable, equal-weight, thin-skinned aluminum panels. Also included among the parametric studies were examinations of combined axial compression and shear buckling and examinations of panels with rotational elastic edge restraints.
NASA Astrophysics Data System (ADS)
Hartwig, Jason; Adin Mann, Jay; Darr, Samuel R.
2014-09-01
This paper presents a parametric investigation of the factors which govern screen channel liquid acquisition device (LAD) bubble point pressure in a low-pressure propellant tank. The five test parameters varied were the screen mesh, liquid cryogen, liquid temperature and pressure, and type of pressurant gas. Bubble point data were collected using three fine-mesh 304 stainless steel screens in two different liquids (hydrogen and nitrogen), over a broad range of liquid temperatures and pressures in subcooled and saturated liquid states, using both a noncondensable (helium) and an autogenous (hydrogen or nitrogen) gas pressurization scheme. Bubble point pressure scales linearly with surface tension, but does not scale inversely with the fineness of the mesh. Bubble point pressure increases in proportion to the degree of subcooling. Higher bubble points are obtained using noncondensable pressurant gases than with the condensable vapor. The bubble point model is refined using a temperature-dependent pore diameter of the screen to account for screen shrinkage at reduced liquid temperatures and for relative differences in performance between the two pressurization schemes. The updated bubble point model can be used to accurately predict performance of LADs operating in future cryogenic propellant engines and cryogenic fuel depots.
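A minimal sketch of the classic bubble point relation, with a temperature-dependent effective pore diameter standing in for the refined model (the pore size, shrinkage coefficient, and surface tension value below are assumptions for illustration, not the paper's measured values):

```python
import math

def bubble_point_pressure(surface_tension, pore_diameter, contact_angle_deg=0.0):
    """Classic bubble point relation: dP = 4*sigma*cos(theta) / D_p, in Pa."""
    return 4.0 * surface_tension * math.cos(math.radians(contact_angle_deg)) / pore_diameter

def effective_pore_diameter(d_ref, temperature, t_ref, shrink_coeff):
    """Hypothetical linear shrinkage model: pores contract at reduced temperature."""
    return d_ref * (1.0 + shrink_coeff * (temperature - t_ref))

# Illustrative numbers: liquid hydrogen near 20 K (sigma ~ 1.9e-3 N/m),
# room-temperature effective pore diameter 10 um (assumed).
sigma = 1.9e-3
d = effective_pore_diameter(10e-6, 20.0, 293.0, 2e-5)
print(bubble_point_pressure(sigma, d))  # a few hundred Pa
```

The linear scaling with surface tension noted in the abstract is explicit in the formula; the shrinkage term is one simple way a temperature-dependent pore diameter could enter.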
NASA Technical Reports Server (NTRS)
Masunaga, Hirohiko; Kummerow, Christian D.
2005-01-01
A methodology to analyze precipitation profiles using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and precipitation radar (PR) is proposed. Rainfall profiles are retrieved from PR measurements, defined as the best-fit solution selected from precalculated profiles by cloud-resolving models (CRMs), under explicitly defined assumptions of drop size distribution (DSD) and ice hydrometeor models. The PR path-integrated attenuation (PIA), where available, is further used to adjust the DSD in a manner similar to the PR operational algorithm. Combined with the TMI-retrieved nonraining geophysical parameters, the three-dimensional structure of the geophysical parameters is obtained across the satellite-observed domains. Microwave brightness temperatures are then computed for comparison with TMI observations to examine whether the radar-retrieved rainfall is consistent in the radiometric measurement space. The inconsistency in microwave brightness temperatures is reduced by iterating the retrieval procedure with updated assumptions of the DSD and ice-density models. The proposed methodology is expected to refine the a priori rain profile database and error models for use by parametric passive microwave algorithms aimed at the Global Precipitation Measurement (GPM) mission, as well as future TRMM algorithms.
NASA Astrophysics Data System (ADS)
Silveri, M.; Zalys-Geller, E.; Hatridge, M.; Leghtas, Z.; Devoret, M. H.; Girvin, S. M.
2015-03-01
In the remote entanglement process, two distant stationary qubits are entangled with separate flying qubits, and the which-path information is erased from the flying qubits by interference effects. As a result, an observer cannot tell from which of the two sources a signal came, and the probabilistic measurement process generates perfect heralded entanglement between the two signal sources. Notably, the two stationary qubits are spatially separated and there is no direct interaction between them. We study two transmon qubits in superconducting cavities connected to a Josephson Parametric Converter (JPC). The qubit information is encoded in the traveling wave leaking out from each cavity. Remarkably, the quantum-limited phase-preserving amplification of two traveling waves provided by the JPC can work as a which-path information eraser. Using a stochastic master equation approach, we demonstrate the probabilistic production of heralded entangled states and show that unequal qubit-cavity pairs can be made indistinguishable by simple engineering of the driving fields. Additionally, we derive measurement rates and measurement optimization strategies, and discuss the effects of finite amplification gain, cavity losses, and qubit relaxation and dephasing. Work supported by IARPA, ARO and NSF.
Introduction to Time Series Analysis
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1986-01-01
The field of time series analysis is explored from its logical foundations to the most modern data analysis techniques. The presentation is developed, as far as possible, for continuous data, so that the inevitable use of discrete mathematics is postponed until the reader has gained some familiarity with the concepts. The monograph seeks to provide the reader with both the theoretical overview and the practical details necessary to correctly apply the full range of these powerful techniques. In addition, the last chapter introduces many specialized areas where research is currently in progress.
NASA Astrophysics Data System (ADS)
Branch, Allan C.
1998-01-01
Parametric mapping (PM) lies midway between older, proven artificial-landmark-based guidance systems and yet-to-be-realized vision-based guidance systems. It is a simple yet effective natural landmark recognition system offering freedom from the need for enhancements to the environment. Development of PM systems can be inexpensive and rapid, and they are starting to appear in commercial and industrial applications. Together with a description of the structural framework developed to generically describe robot mobility, this paper clearly illustrates the parts of any mobile robot navigation and guidance system and their interrelationships. Among other things, it introduces the importance of the richness of the reference map (not necessarily the sensor map), shows the benefits of dynamic path planners in alleviating the need for separate object avoidance, and demonstrates the independence of the PM system from the type of sensor input.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
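One simple non-parametric detector of the kind evaluated in this study flags a segment as tremor when spectral power in a tremor band dominates the segment's spectrum (the band limits and threshold below are hypothetical, not the study's tuned values):

```python
import numpy as np

def is_tremor_segment(segment, fs, band=(4.0, 12.0), ratio_threshold=0.5):
    """Flag a segment as tremor when the fraction of (mean-removed) spectral
    power inside the tremor band exceeds ratio_threshold. Non-parametric sketch."""
    seg = segment - np.mean(segment)
    psd = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(seg.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = psd.sum()
    return bool(total > 0 and psd[in_band].sum() / total > ratio_threshold)

# Synthetic 4 s accelerometer segments at 100 Hz: a 6 Hz tremor vs rest.
fs = 100.0
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(3)
tremor = np.sin(2 * np.pi * 6.0 * t) + 0.2 * rng.standard_normal(t.size)
rest = 0.2 * rng.standard_normal(t.size)
print(is_tremor_segment(tremor, fs), is_tremor_segment(rest, fs))  # True False
```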
NASA Astrophysics Data System (ADS)
Lemerle, P.; Höppner, O.; Rebelle, J.
2011-10-01
This paper describes the examination of the vehicle dynamics and stability of four-wheeled forklift trucks (FLTs) in cornering situations. Cornering at excessive speed is one major reason for fatal accidents with forklifts caused by lateral tipover. In order to increase the lateral stability of this kind of working machinery, the influence of certain important design properties has been studied using an appropriate vehicle simulation model and a driving simulator. The simulation model is based on a multi-body system approach and includes submodels for the propulsion system and the tyres. The driving behaviour of the operator has not been modelled. Instead, a driving simulator has been built and a real human driver was employed to ensure adequate and realistic model input. As there have not been any suitable standardised test manoeuvres available for FLTs, a new driving test has been developed to assess the lateral stability. This test resembles the well-known J turn/Fishhook turn, but includes a more dynamic counter-steering action. Furthermore, the dimensions of the test track are defined. Therefore, the test is better adapted to the driving dynamics of forklifts and reflects the real driver behaviour more closely. Finally, a parametric study has been performed, examining the influence of certain important technical properties of the truck such as the maximum speed, the position of the centre of gravity, rear axle design features and tyre properties. The results of this study may lead to a better understanding of the vehicle dynamics of forklifts and facilitate goal-oriented design improvements.
NASA Astrophysics Data System (ADS)
Li, Jiang-Fan; Fang, Jia-Yuan; Xiao, Fu-Liang; Liu, Xin-Hai; Wang, Cheng-Zhi
2009-03-01
By properly selecting the time-dependent unitary transformation for the linear combination of the number operators, we construct a time-dependent invariant and derive the corresponding auxiliary equations for the degenerate and non-degenerate coupled parametric down-conversion system with driving term. By means of this invariant and the Lewis-Riesenfeld quantum invariant theory, we obtain closed formulae of the quantum state and the evolution operator of the system. We show that the time evolution of the quantum system directly leads to production of various generalized one- and two-mode combination squeezed states, and the squeezed effect is independent of the driving term of the Hamiltonian. In some special cases, the current solution can reduce to the results of the previous works.
Jansen, Jacobus FA; Lu, Yonggang; Gupta, Gaorav; Lee, Nancy Y; Stambuk, Hilda E; Mazaheri, Yousef; Deasy, Joseph O; Shukla-Dave, Amita
2016-01-01
AIM: To investigate the merits of texture analysis on parametric maps derived from pharmacokinetic modeling with dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as imaging biomarkers for the prediction of treatment response in patients with head and neck squamous cell carcinoma (HNSCC). METHODS: In this retrospective study, 19 HNSCC patients underwent pre- and intra-treatment DCE-MRI scans at a 1.5T MRI scanner. All patients had chemo-radiation treatment. Pharmacokinetic modeling was performed on the acquired DCE-MRI images, generating maps of volume transfer rate (Ktrans) and volume fraction of the extravascular extracellular space (ve). Image texture analysis was then employed on maps of Ktrans and ve, generating two texture measures: Energy (E) and homogeneity. RESULTS: No significant changes were found for the mean and standard deviation for Ktrans and ve between pre- and intra-treatment (P > 0.09). Texture analysis revealed that the imaging biomarker E of ve was significantly higher in intra-treatment scans, relative to pretreatment scans (P < 0.04). CONCLUSION: Chemo-radiation treatment in HNSCC significantly reduces the heterogeneity of tumors. PMID:26834947
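Energy and homogeneity are standard gray-level co-occurrence matrix (GLCM) statistics. A minimal sketch on a quantized parametric map (the quantization depth and single-offset GLCM are simplifying assumptions, not the study's exact pipeline):

```python
import numpy as np

def glcm_features(image, levels=8):
    """Horizontal-neighbor gray-level co-occurrence matrix and the two texture
    measures used: Energy = sum(p^2), Homogeneity = sum(p / (1 + |i - j|))."""
    span = np.ptp(image)
    q = np.floor((image - image.min()) / (span + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1.0
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    energy = np.sum(p**2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return energy, homogeneity

uniform = 5.0 * np.ones((16, 16))       # perfectly homogeneous 'parametric map'
noisy = np.random.default_rng(4).random((16, 16))
e_u, h_u = glcm_features(uniform)
e_n, h_n = glcm_features(noisy)
print(e_u, e_n)  # homogeneous map: Energy = 1.0; noisy map: much lower
```

A treatment-induced reduction in tumor heterogeneity, as reported above, would show up as an increase in Energy computed this way.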
Parametric Analysis of a Turbine Trip Event in a BWR Using a 3D Nodal Code
Gorzel, A.
2006-07-01
Two essential thermal-hydraulic safety criteria concerning the reactor core are that, even during operational transients, there is no fuel melting and impermissible cladding temperatures are avoided. A common concept for boiling water reactors is to establish a minimum critical power ratio (MCPR) for steady-state operation. For this MCPR it is shown that only a very small number of fuel rods suffers a short-term dryout during the transient. It is known from experience that the limiting transient for the determination of the MCPR is the turbine trip with blocked bypass system. This fast transient was simulated for a German BWR using the three-dimensional reactor analysis transient code SIMULATE-3K. The transient behaviour of the hot channels was used as input for the dryout calculation with the transient thermal-hydraulics code FRANCESCA. In this way the maximum reduction of the CPR during the transient could be calculated. The fast increase in reactor power due to the pressure increase and to an increased core inlet flow is limited mainly by the Doppler effect, but automatically triggered operational measures can also contribute to the mitigation of the turbine trip. One very important measure is the short-term fast reduction of the recirculation pump speed, which is initiated, e.g., by a pressure increase in front of the turbine. The large impacts of the starting time and of the rate of the pump speed reduction on the power progression, and hence on the deterioration of the CPR, are presented. Another important procedure to limit the effects of the transient is the fast shutdown of the reactor, which is triggered when the reactor power reaches the limit value. It is shown that the SCRAM is not fast enough to reduce the first power maximum, but is able to prevent the appearance of a second, much smaller, maximum that would occur around one second after the first one in the absence of a SCRAM. (author)
Hydrodynamic analysis of time series
NASA Astrophysics Data System (ADS)
Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.
2003-04-01
It was proved that balance equations for systems with corpuscular structure can be derived if a kinematic description by piece-wise analytic functions is available [1]. For example, the hydrodynamic equations for one-dimensional systems of inelastic particles, derived in [2], were used to prove the inconsistency of the Fourier law of heat with the microscopic structure of the system. The hydrodynamic description is also possible for single-particle systems. In this case, averages of physical quantities associated with the particle, taken over a space-time window and generalizing the usual ``moving averages'' performed over time intervals only, were shown to be almost everywhere continuous space-time functions. Moreover, they obey balance partial differential equations (the continuity equation for the 'concentration', the Navier-Stokes equation, and so on) [3]. Time series can be interpreted as trajectories in the space of the recorded parameter. Their hydrodynamic interpretation is expected to enable deterministic predictions whenever closure relations can be obtained for the balance equations. For the time being, a first result is the estimation of the probability density for the occurrence of a given parameter value, by the normalized concentration field from the hydrodynamic description. The method is illustrated by hydrodynamic analysis of three types of time series: white noise, stock prices from financial markets and groundwater levels recorded at the Krauthausen experimental field of Forschungszentrum Jülich (Germany). [1] C. Vamoş, A. Georgescu, N. Suciu, I. Turcu, Physica A 227, 81-92, 1996. [2] C. Vamoş, N. Suciu, A. Georgescu, Phys. Rev. E 55, 5, 6277-6280, 1997. [3] C. Vamoş, N. Suciu, W. Blaj, Physica A, 287, 461-467, 2000.
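The density estimate mentioned above can be illustrated with a toy computation: a minimal sketch in which a plain histogram normalization stands in for the full space-time hydrodynamic averaging, recovering a probability density from the time spent at each parameter value.

```python
import numpy as np

def concentration_density(series, bins=20):
    """Estimate the occurrence density of parameter values by
    normalizing the time spent in each value bin -- a crude analogue
    of the normalized concentration field (illustrative sketch only)."""
    counts, edges = np.histogram(series, bins=bins)
    widths = np.diff(edges)
    density = counts / (counts.sum() * widths)  # integrates to 1
    return density, edges

# white-noise example: the estimated density approximates a Gaussian
rng = np.random.default_rng(0)
dens, edges = concentration_density(rng.normal(size=10000), bins=30)
```

The same routine applies unchanged to the other series types mentioned (stock prices, groundwater levels), since only the recorded parameter values enter the estimate.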
Saunders, Marnie M; Schwentker, Edwards P; Kay, David B; Bennett, Gordon; Jacobs, Christopher R; Verstraete, Mary C; Njus, Glen O
2003-02-01
In this study, we developed an approach for prosthetic foot design incorporating motion analysis, mechanical testing and computer analysis. Using computer modeling and finite element analysis, a three-dimensional (3D), numerical foot model of the solid ankle cushioned heel (SACH) foot was constructed and analyzed based upon loading conditions obtained from the gait analysis of an amputee and validated experimentally using mechanical testing. The model was then used to address effects of viscoelastic heel performance numerically. This is just one example of the type of parametric analysis and design enabled by this approach. More importantly, by incorporating the unique gait characteristics of the amputee, these parametric analyses may lead to prosthetic feet more appropriately representing a particular user's needs, comfort and activity level. PMID:12623440
Real-time analysis keratometer
NASA Technical Reports Server (NTRS)
Adachi, Iwao P. (Inventor); Adachi, Yoshifumi (Inventor); Frazer, Robert E. (Inventor)
1987-01-01
A computer-assisted keratometer in which a fiducial line pattern reticle, illuminated by CW or pulsed laser light, is projected on a corneal surface through lenses, a prismoidal beamsplitter, a quarterwave plate, and objective optics. The reticle surface is curved as a conjugate of an ideal corneal curvature. The fiducial image reflected from the cornea undergoes a polarization shift through the quarterwave plate and beamsplitter, whereby the projected and reflected beams are separated and directed orthogonally. The reflected-beam fiducial pattern forms a moire pattern with a replica of the first reticle. This moire pattern contains transverse aberration due to differences in curvature between the cornea and the ideal corneal curvature. The moire pattern is analyzed in real time by a computer, which displays either the CW moire pattern or a pulsed-mode analysis of the transverse aberration of the cornea under observation, in real time. With the eye focused on a plurality of fixation points in succession, a survey of the entire corneal topography is made, and a contour map or three-dimensional plot of the cornea can be produced as a computer readout in addition to corneal radius and refractive power analysis.
Timing analysis by model checking
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
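The conservative clock abstraction described above can be sketched as an explicit-state search. The toy task model, its transition function, and the saturation cap below are hypothetical stand-ins for illustration, not the tool or models discussed in the paper.

```python
from collections import deque

def reachable_violations(init, step, deadline, clock_cap):
    """Explicit-state search over (task_state, clock) pairs.
    The clock is saturated at clock_cap, giving a conservative finite
    abstraction: if no reachable state has clock > deadline with the
    task unfinished, the deadline also holds in the unbounded model."""
    seen, frontier, bad = {init}, deque([init]), []
    while frontier:
        state, clock = frontier.popleft()
        if clock > deadline and state != "done":
            bad.append((state, clock))
            continue
        for nxt, dt in step(state):
            succ = (nxt, min(clock + dt, clock_cap))
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return bad

# toy task: ready -(1)-> run -(2)-> done, checked against deadline 5
def step(s):
    return {"ready": [("run", 1)], "run": [("done", 2)], "done": []}[s]

violations = reachable_violations(("ready", 0), step, deadline=5, clock_cap=10)
# the task finishes at clock 3, so no violation is reachable
```

Real model checkers add symbolic state representations and the optimizations mentioned in the abstract; the enumeration loop above is only the core idea.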
Christopoulou, Maria; Karabetsos, Efthymios
2015-04-01
From 2008 through 2013, more than 6,000 in situ frequency-selective audits in the proximity of base stations were conducted throughout Greece by the Greek Atomic Energy Commission (EEAE) in order to verify compliance with exposure limits. EEAE is the competent national authority for protection of the general public against artificially produced non-ionizing radiation. This paper presents the first post-processing and multi-parametric statistical analysis, by year, of in situ measurement data corresponding to 4,705 audits in the whole country, compared against general public exposure levels according to Greek legislation. The aim is to derive nationwide conclusions for the characterization of general public exposure to radiofrequency electromagnetic fields during the last 6 years. The presentation of results includes electric field exposure ratios referring to broadband and frequency-selective measurements at the highest-exposure measurement point. Statistical analysis is applied to assist the data presentation and evaluation, based on selected criteria and classification parameters, including: (i) year (2008-2013); (ii) environment (urban/suburban/rural); (iii) frequency bands of selected common telecommunication services (e.g., TV, FM, GSM, DCS, UMTS); and (iv) number of service providers installed at the same site. In general, measurement results revealed that the vast majority of exposure values were below the reference levels for general public exposure, as defined by Greek legislation. Data are constantly updated with the latest measurements, including emerging wireless technologies. PMID:25726724
Askin, Amanda Christine; Barter, Garrett; West, Todd H.; Manley, Dawn Kataoka
2015-02-14
Here, we present a parametric analysis of factors that can influence advanced fuel and technology deployments in U.S. Class 7–8 trucks through 2050. The analysis focuses on the competition between traditional diesel trucks, natural gas vehicles (NGVs), and ultra-efficient powertrains. Underlying the study is a vehicle choice and stock model of the U.S. heavy-duty vehicle market, segmented by vehicle class, body type, powertrain, fleet size, and operational type. We find that conventional diesel trucks will dominate the market through 2050, but NGVs could achieve significant market penetration depending on key technological and economic uncertainties. Compressed natural gas trucks conducting urban trips in fleets that can support private infrastructure are economically viable now and will continue to gain market share. Ultra-efficient diesel trucks, exemplified by the U.S. Department of Energy's SuperTruck program, are the preferred alternative in the long-haul segment, but could compete with liquefied natural gas (LNG) trucks if the fuel price differential between LNG and diesel increases. However, the greatest impact in reducing petroleum consumption and pollutant emissions comes from investing in efficiency technologies that benefit all powertrains, especially the conventional diesels that comprise the majority of the stock, instead of incentivizing specific alternatives.
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N. K. G.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre- and post-radiotherapy (RT) functional images. Methods: Arterial blood flow (ABF) maps were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification poses challenges for PRM analysis in the liver, where registration accuracy tends to be lower. A quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
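The PRM voxel classification and the misclassification metric can be sketched as follows; the 10% relative-change threshold is an assumption for illustration, not the study's criterion.

```python
import numpy as np

def parametric_response_map(pre, post, threshold=0.1):
    """Classify each voxel as increasing (+1), decreasing (-1) or
    unchanged (0) by comparing pre- and post-treatment maps against a
    relative-change threshold (sketch; threshold value is assumed)."""
    rel = (post - pre) / pre
    prm = np.zeros(pre.shape, dtype=int)
    prm[rel > threshold] = 1
    prm[rel < -threshold] = -1
    return prm

def misclassification_rate(prm_truth, prm_test):
    """Fraction of voxels whose class differs from the ground truth."""
    return float(np.mean(prm_truth != prm_test))

pre = np.array([100.0, 100.0, 100.0, 100.0])
post = np.array([130.0, 70.0, 102.0, 100.0])
prm = parametric_response_map(pre, post)   # [+1, -1, 0, 0]
```

In the study, the "test" PRM comes from the motion-deformed maps, so the rate above directly quantifies RE-induced misclassification.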
NASA Technical Reports Server (NTRS)
Guerreiro, Nelson M.; Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Lewis, Timothy A.
2016-01-01
A loss-of-separation (LOS) is said to occur when two aircraft are spatially too close to one another. A LOS is the fundamental unsafe event to be avoided in air traffic management, and conflict detection (CD) is the function that attempts to predict these LOS events. In general, the effectiveness of conflict detection relates to the overall safety and performance of an air traffic management concept. An abstract, parametric analysis was conducted to investigate the impact of surveillance quality, level of intent information, and quality of intent information on conflict detection performance. The data collected in this analysis can be used to estimate the conflict detection performance under alternative future scenarios or alternative allocations of the conflict detection function, based on the quality of the surveillance and intent information under those conditions. Alternatively, these data could also be used to estimate the surveillance and intent information quality required to achieve some desired CD performance as part of the design of a new separation assurance system.
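The CD function can be illustrated with a standard closest-point-of-approach probe under straight-line intent; this is a generic sketch of the concept, not the analysis tool used in the study, and the separation minimum and units below are illustrative.

```python
import numpy as np

def predict_loss_of_separation(p1, v1, p2, v2, sep, horizon):
    """Linear-extrapolation conflict probe: find the time of closest
    point of approach (CPA) for two aircraft flying constant velocities
    and flag a predicted LOS if the miss distance falls below the
    separation minimum within the look-ahead horizon. Real CD functions
    add richer intent and surveillance-uncertainty models."""
    dp = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    dv = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)
    vv = float(dv @ dv)
    t_cpa = 0.0 if vv == 0 else float(np.clip(-(dp @ dv) / vv, 0.0, horizon))
    miss = float(np.linalg.norm(dp + dv * t_cpa))
    return miss < sep, t_cpa, miss

# head-on geometry: 10 nmi apart, closing at 8 nmi/min
conflict, t, d = predict_loss_of_separation(
    (0.0, 0.0), (4.0, 0.0), (10.0, 0.0), (-4.0, 0.0), sep=5.0, horizon=10.0)
```

Degrading the position or velocity inputs before calling the probe is one simple way to mimic the surveillance-quality sweeps described above.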
A multiscale approach to InSAR time series analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y. N.; Dicaprio, C.; Rickerby, A.
2008-12-01
We describe a new technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale analysis of InSAR Time Series), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. This approach also permits a consistent treatment of all data independent of the presence of localized holes in any given interferogram. In essence, MInTS allows one to consider all data at the same time (as opposed to one pixel at a time), thereby taking advantage of both spatial and temporal characteristics of the deformation field. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows the temporal parametrization to include a set of general functions (e.g., splines) in order to account for unexpected processes. We allow for various forms of model regularization, using a cross-validation approach to select penalty parameters. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate our results by comparing with ground-based observations.
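The temporal parametrization described (steps, transients, seasonal terms) can be sketched as a design matrix fitted by least squares; the logarithmic decay constant, the annual period, and the parameter values below are illustrative assumptions, not MInTS itself.

```python
import numpy as np

def temporal_design(t, t_eq, period=1.0, tau=0.1):
    """Design matrix with a constant offset, secular rate, co-seismic
    step, logarithmic post-seismic transient, and seasonal terms --
    one possible temporal parametrization in the spirit of the text
    (functional forms and decay time tau are illustrative choices)."""
    step = (t >= t_eq).astype(float)
    log_tr = step * np.log1p(np.maximum(t - t_eq, 0.0) / tau)
    return np.column_stack([
        np.ones_like(t), t, step, log_tr,
        np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)])

t = np.linspace(0.0, 5.0, 200)           # years
true = np.array([1.0, 0.5, 2.0, 0.8, 0.3, -0.2])
G = temporal_design(t, t_eq=2.5)
d = G @ true                             # noise-free synthetic series
m, *_ = np.linalg.lstsq(G, d, rcond=None)
```

In the full method this fit happens per wavelet coefficient rather than per pixel, with regularization chosen by cross-validation; the plain least-squares solve here is only the temporal core.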
Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.
2012-01-01
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408
Ruiz-Sanchez, Eduardo
2015-12-01
The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to date divergences in Otatea, placing the origin of the speciation events between the Late Miocene and the Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified the two populations of O. acuminata from Chiapas and Hidalgo as two separate evolutionary lineages, and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata. PMID:26265258
NASA Astrophysics Data System (ADS)
Nguyen, Frédéric; Hermans, Thomas
2015-04-01
Inversion of time-lapse resistivity data allows one to obtain 'snapshots' of changes occurring in monitored systems for applications such as aquifer storage, geothermal heat exchange, site remediation or tracer tests. Based on these snapshots, one can infer qualitative information on the location and morphology of changes occurring in the subsurface, but also quantitative estimates of the degree of change in a property such as temperature or total dissolved solids content. Analysis of these changes can provide direct insight into flow and transport and associated processes and controlling parameters. However, the reliability of the analysis depends on survey geometry, measurement schemes, data error, and regularization. Survey design parameters may be optimized prior to the monitoring survey. Regularization, on the other hand, may be chosen depending on information collected during the monitoring. Common approaches smooth model changes both in space and time, but it is often necessary to recover a sharp temporal anomaly, for example in fractured aquifers. Here we propose to use an alternative regularization approach for time-lapse surveys, based on minimum gradient support (MGS) (Zhdanov, 2002), which focuses the changes in the tomogram snapshots. MGS limits the occurrence of changes in electrical resistivity but also restricts the variation of these changes inside the different zones. A difficulty commonly encountered by practitioners with this type of regularization is the choice of an additional parameter, the so-called β, required to define the MGS functional. To the best of our knowledge, there is no commonly accepted or standard methodology to optimize the MGS parameter β. The inversion algorithm used in this study is CRTomo (Kemna 2000). It uses a Gauss-Newton scheme to iteratively minimize an objective function which consists of a data misfit functional and a model constraint functional. A univariate line search is performed
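The role of β in the MGS functional can be sketched in one dimension: gradients much larger than β each contribute roughly one unit of "support", gradients much smaller contribute roughly zero, so the functional counts where change occurs rather than how much. This is a sketch after the cited Zhdanov formulation, not CRTomo's implementation.

```python
import numpy as np

def mgs_functional(model_change, beta):
    """Minimum gradient support measure of a 1-D model-change profile:
    sum of g^2 / (g^2 + beta^2) over the gradients g. Small beta makes
    the functional approach a count of nonzero gradients, favouring
    sharp, focused change images."""
    g = np.diff(np.asarray(model_change, dtype=float))
    return float(np.sum(g**2 / (g**2 + beta**2)))

blocky = [0, 0, 0, 5, 5, 5, 0, 0]   # one compact, sharp anomaly
smooth = [0, 1, 2, 3, 4, 3, 2, 1]   # the same change spread out
# with a small beta, the blocky profile has much smaller MGS support,
# so MGS regularization prefers it over the smooth one
```

Smoothness regularization would rank the two profiles the other way around, which is exactly why MGS suits the sharp temporal anomalies mentioned in the text.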
Parametric study of turbine blade platform friction damping using the lumped parameter analysis
NASA Technical Reports Server (NTRS)
Dominic, R. J.
1984-01-01
The hardware configuration used in the present study of turbine blade platform friction damping, by means of the lumped parameter analysis, is the first turbine stage of the Space Shuttle Main Engine's High Pressure Fuel Turbopump. The analysis procedure solves the nonlinear equations of motion for a turbine blade that is acted on by a platform friction damper, using an iterative matrix method. Attention is given to the effects on blade deflection response of variations in friction coefficient, the normal force on the friction surface interface, blade hysteretic damping, the blade-to-blade phase angle of the harmonic forcing function, and the amplitude of the forcing function.
Arahira, Shin; Murai, Hitoshi; Sasaki, Hironori
2016-08-22
In this paper we report the generation of wavelength-division-multiplexed, time-bin entangled photon pairs by using cascaded optical second-order nonlinearities (sum-frequency generation and subsequent spontaneous parametric downconversion) in a periodically poled LiNbO_{3} device. Visibilities of approximately 94% were clearly observed in two-photon interference experiments for all the wavelength-multiplexed channels under investigation (five pairs), with insensitivity to the polarization states of the photon pairs. We also evaluated the performance for quantum-key-distribution (QKD) applications by using four single-photon detectors, which enables a proper evaluation of QKD performance. The results showed long-term stability over 70 hours, maintaining a quantum error rate of approximately 3% and a sifted key rate of 110 bit/s. PMID:27557236
Deng, Xingqiao; Potula, S; Grewal, H; Solanki, K N; Tschopp, M A; Horstemeyer, M F
2013-06-01
In this study, we investigated and assessed the dependence of dummy head injury mitigation on the side curtain airbag and occupant distance under a side impact of a Dodge Neon. Full-scale finite element vehicle simulations of a Dodge Neon with a side curtain airbag were performed to simulate the side impact. Owing to the wide range of parameters, an optimal matrix of finite element calculations was generated using the design of experiments (DOE) method; the DOE method was used to independently screen the finite element results and yield the desired parametric influences as outputs. Analysis of variance (ANOVA) techniques were also used to analyze the finite element results. The results clearly show that the moving deformable barrier (MDB) strike velocity was the strongest influence parameter on both the head injury criterion (HIC36) and the peak head acceleration, followed by the initial airbag inlet temperature. Interestingly, the initial airbag inlet temperature was only a ~30% smaller influence than the MDB velocity; likewise, the trigger time was a ~54% smaller influence than the MDB velocity when considering the peak head accelerations. Considering the wide range in MDB velocities used in this study, the results present an opportunity for design optimization using the different parameters to help mitigate occupant injury. As such, the initial airbag inlet temperature, the trigger time, and the airbag pressure should be incorporated into the vehicular design process when optimizing for the head injury criterion. PMID:23567214
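The screening logic, ranking parameter influences by their share of the response variability, can be sketched with a percent-contribution ANOVA on a toy balanced design. The factors and response values below are hypothetical, not the crash-simulation data.

```python
import numpy as np

def main_effect_contributions(levels, response):
    """Percent contribution of each factor, from the sums of squares
    of its level means relative to the total sum of squares -- a
    screening-style ANOVA decomposition (balanced design assumed)."""
    levels = np.asarray(levels)
    y = np.asarray(response, dtype=float)
    total_ss = np.sum((y - y.mean())**2)
    out = []
    for j in range(levels.shape[1]):
        col = levels[:, j]
        ss = sum(np.sum(col == lv) * (y[col == lv].mean() - y.mean())**2
                 for lv in np.unique(col))
        out.append(100.0 * ss / total_ss)
    return out

# 2 hypothetical factors at 2 levels, full factorial; factor 0 dominates
levels = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = [10.0, 11.0, 20.0, 21.0]
contrib = main_effect_contributions(levels, y)
```

Reading the contributions side by side is the numerical counterpart of the "~30% smaller influence" comparisons quoted in the abstract.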
Moran, John L; Solomon, Patricia J
2007-06-01
In Part I, we reviewed graphical display and data summary, followed by a consideration of linear regression models. Generalised linear models, structured in terms of an exponential response distribution and link function, are now introduced, subsuming logistic and Poisson regression. Time-to-event ("survival") analysis is developed from basic principles of hazard rate, and survival, cumulative distribution and density functions. Semi-parametric (Cox) and parametric (accelerated failure time) regression models are contrasted. Time-series analysis is explicated in terms of trend, seasonal, and other cyclical and irregular components, and further illustrated by development of a classical Box-Jenkins ARMA (autoregressive moving average) model for monthly ICU-patient hospital mortality rates recorded over 11 years. Multilevel (random-effects) models and principles of meta-analysis are outlined, and the review concludes with a brief consideration of important statistical aspects of clinical trials: sample size determination, interim analysis and "early stopping". PMID:17536991
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; Sutton, Michael A.
2016-01-25
Dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e. the delayed equilibrium conditions can be quantified.
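The inertia-stress superposition described above can be sketched in one dimension: integrate the measured density times acceleration along the specimen axis and add it to the end stress. The uniform fields and SI units below are hypothetical inputs, not the measured data.

```python
import numpy as np

def fullfield_stress(sigma_end, rho, accel, dx):
    """Axial stress along the specimen from the measured end stress
    plus the integrated inertia term sigma(x) = sigma_end + int(rho*a)dx,
    evaluated with a cumulative trapezoidal rule (1-D sketch of the
    non-parametric analysis described in the text)."""
    integrand = rho * accel
    inertia = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dx)))
    return sigma_end + inertia

rho = np.full(5, 100.0)      # kg/m^3, hypothetical uniform density field
accel = np.full(5, 1.0e4)    # m/s^2, hypothetical uniform acceleration
sigma = fullfield_stress(2.0e5, rho, accel, dx=0.01)
# stress grows linearly away from the measured end for uniform fields
```

With the measured non-uniform density and acceleration fields in place of the constants, the same quadrature yields the full-field stress distribution.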
Papapanagiotou, Vasileios; Diou, Christos; Langlet, Billy; Ioakimidis, Ioannis; Delopoulos, Anastasios
2015-08-01
Monitoring and modification of eating behaviour through continuous meal weight measurements has been successfully applied in clinical practice to treat obesity and eating disorders. For this purpose, the Mandometer, a plate scale, along with video recordings of subjects during the course of single meals, has been used to assist clinicians in measuring relevant food intake parameters. In this work, we present a novel algorithm for automatically constructing a subject's food intake curve using only the Mandometer weight measurements. This eliminates the need for direct clinical observation or video recordings, thus significantly reducing the manual effort required for analysis. The proposed algorithm aims at identifying specific meal related events (e.g. bites, food additions, artifacts), by applying an adaptive pre-processing stage using Delta coefficients, followed by event detection based on a parametric Probabilistic Context-Free Grammar on the derivative of the recorded sequence. Experimental results on a dataset of 114 meals from individuals suffering from obesity or eating disorders, as well as from individuals with normal BMI, demonstrate the effectiveness of the proposed approach. PMID:26738112
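The Delta-coefficient pre-processing stage can be sketched with the standard regression formula from speech processing; the window size N=2, the edge handling, and the toy weight trace are assumptions, not the paper's exact configuration.

```python
import numpy as np

def delta_coefficients(x, N=2):
    """Regression-based delta coefficients: a smoothed local slope,
    d_t = sum_n n*(x[t+n] - x[t-n]) / (2 * sum_n n^2), for n = 1..N.
    Edges are handled by repeating the end samples (an assumption)."""
    x = np.asarray(x, dtype=float)
    pad = np.concatenate((np.full(N, x[0]), x, np.full(N, x[-1])))
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    return np.array([
        sum(n * (pad[t + N + n] - pad[t + N - n]) for n in range(1, N + 1)) / denom
        for t in range(len(x))])

# a sudden plate-weight drop (a 'bite') appears as a negative delta spike
weights = np.array([300.0, 300.0, 300.0, 285.0, 285.0, 285.0])  # grams
d = delta_coefficients(weights)
```

Event detection then operates on such smoothed derivatives, with the grammar distinguishing bites from food additions and artifacts.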
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to recommend when each method would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied; this procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
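The alpha-cut propagation that HDMR accelerates can be sketched by brute force for a cheap two-variable function; the triangular fuzzy inputs and the sqrt(k/m) response below are hypothetical placeholders for the wing model, and the grid search stands in for the HDMR surrogate.

```python
import numpy as np

def alpha_cut_triangular(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def fuzzy_response(f, fuzzy_params, alpha, n_grid=11):
    """Output interval of f at level alpha, by grid search over the
    alpha-cut box of triangular fuzzy inputs (brute force in place of
    a metamodel; tractable only for a few cheap variables)."""
    cuts = [alpha_cut_triangular(lo, md, hi, alpha)
            for (lo, md, hi) in fuzzy_params]
    grids = np.meshgrid(*[np.linspace(lo, hi, n_grid) for lo, hi in cuts])
    vals = f(*grids)
    return float(vals.min()), float(vals.max())

# natural frequency ~ sqrt(k/m) with fuzzy stiffness k and mass m
f = lambda k, m: np.sqrt(k / m)
params = [(90.0, 100.0, 110.0), (0.9, 1.0, 1.1)]
lo1, hi1 = fuzzy_response(f, params, alpha=1.0)  # collapses to the modes
lo0, hi0 = fuzzy_response(f, params, alpha=0.0)  # widest interval
```

Sweeping alpha from 1 to 0 reconstructs the fuzzy membership function of the output; HDMR's contribution is making each such evaluation cheap in high dimensions.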
Parametric studies of penetration events : a design and analysis of experiments approach.
Chiesa, Michael L.; Marin, Esteban B.; Booker, Paul M.
2005-02-01
A numerical screening study of the interaction between a penetrator and a geological target with a preformed hole has been carried out to identify the main parameters affecting the penetration event. The planning of the numerical experiment was based on the orthogonal array OA(18,7,3,2), which allows 18 simulation runs with 7 parameters at 3 levels each. The strength-2 property of the array also allows two-factor interaction studies. The seven parameters chosen for this study are: penetrator offset, hole diameter, hole taper, vertical and horizontal velocity of the penetrator, angle of attack of the penetrator, and target material. The analysis of the simulation results has been based on main effects plots and analysis of variance (ANOVA), and it has been performed using three metrics: the maximum values of the penetration depth, penetrator deceleration, and plastic strain in the penetrator case. This screening study shows that target material has a major influence on penetration depth and penetrator deceleration, while penetrator offset has the strongest effect on the maximum plastic strain.
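The main-effects side of such a screening analysis can be sketched as level means over a balanced array: for each factor, average the response at each level and compare the spread. The factor levels and depth values below are hypothetical, not the reported simulations.

```python
import numpy as np

def level_means(factor_levels, response):
    """Main-effect (level-mean) table for one factor of a balanced
    orthogonal-array study: the mean response at each level. A large
    spread between level means flags an influential parameter."""
    factor_levels = np.asarray(factor_levels)
    y = np.asarray(response, dtype=float)
    return {int(lv): float(y[factor_levels == lv].mean())
            for lv in np.unique(factor_levels)}

# hypothetical 3-level factor over 6 balanced runs (each level twice)
target_material = [0, 0, 1, 1, 2, 2]
depth = [1.2, 1.4, 2.8, 3.0, 0.6, 0.8]   # hypothetical penetration depths
means = level_means(target_material, depth)
effect_range = max(means.values()) - min(means.values())
```

Plotting the level means per factor reproduces the main effects plots named in the abstract; ANOVA then attaches significance to the observed spreads.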
NASA Astrophysics Data System (ADS)
Otanicar, Todd; Chowdhury, Ihtesham; Phelan, Patrick E.; Prasher, Ravi
2010-12-01
The analysis of the combined efficiencies in a coupled photovoltaic (PV)/thermal concentrating solar collector is presented based on a coupled electrical/thermal model. The calculations take into account the drop in efficiency that accompanies the operation of PV cells at elevated temperatures along with a detailed analysis of the thermal system including losses. An iterative numerical scheme is described that involves a coupled electrothermal simulation of the solar energy conversion process. In the proposed configuration, losses in the PV cell due to reduced efficiencies at elevated temperatures and the incident solar energy below the PV bandgap are both harnessed as heat. This thermal energy is then used to drive a thermodynamic power cycle. The simulations show that it is possible to optimize the overall efficiency of the system by variation in key factors such as the solar concentration factor, the band gap of the PV material, and the system thermal design configuration, leading to a maximum combined efficiency of ~32.3% for solar concentrations between 10 and 50 and a band gap around 1.5-2.0 eV.
Parametric Analysis of NO2 Gas Sensor Based on Carbon Nanotubes
NASA Astrophysics Data System (ADS)
Naje, Asama N.; Ibraheem, Russul R.; Ibrahim, Fuad T.
2016-06-01
Two types of carbon nanotubes [single walled carbon nanotubes (SWCNTs) and multi walled carbon nanotubes (MWCNTs)] are deposited on porous silicon by the drop casting technique. Upon exposure to a test gas mixing ratio of 3% NO2, the sensitivity response results show that the SWCNTs' sensitivity reaches 79.8%, whereas the MWCNTs' reaches 59.6%. The study shows that the sensitivity response of the films increases with an increase in the operating temperature up to 200°C for MWCNTs and 150°C for SWCNTs. The response and recovery times are about 19 s and 54 s, respectively, at 200°C for MWCNTs, and 20 s and 56 s at 150°C for SWCNTs.
Parametric Analysis of PWR Spent Fuel Depletion Parameters for Long-Term-Disposal Criticality Safety
DeHart, M.D.
1999-08-01
Utilization of burnup credit in criticality safety analysis for long-term disposal of spent nuclear fuel allows improved design efficiency and reduced cost due to the large mass of fissile material that will be present in the repository. Burnup-credit calculations are based on depletion calculations that provide a conservative estimate of spent fuel contents (in terms of criticality potential), followed by criticality calculations to assess the value of the effective neutron multiplication factor (k_eff) for a spent fuel cask or a fuel configuration under a variety of probabilistically derived events. In order to ensure that the depletion calculation is conservative, it is necessary to both qualify and quantify the assumptions that can be made in depletion models.
NASA Astrophysics Data System (ADS)
Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang
2012-06-01
Shot peening is a widely used surface treatment method by generating compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and dent profile are important factors to evaluate the effectiveness of shot peening process. In this paper, the influence of dimensionless parameters on maximum compressive residual stress and maximum depth of the dent were investigated. Firstly, dimensionless relations of processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by dimensional analysis method. Secondly, the influence of each dimensionless parameter on dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, comparison was made and good agreement was found between the simulation results and the empirical formula, which shows that a useful approach is provided in this paper for analyzing the influence of each individual parameter.
Parametric analysis of synthetic aperture radar data for characterization of deciduous forest stands
NASA Technical Reports Server (NTRS)
Wu, Shih-Tseng
1987-01-01
The SAR sensor parameters that affect the estimation of deciduous forest stand characteristics were examined using data sets for the Gulf Coastal Plain region, acquired by the NASA/JPL multipolarization airborne SAR. In the regression analysis, the mean digital-number values of the three polarization data are used as the independent variables to estimate the average tree height (HT), basal area (BA), and total-tree biomass (TBM). The following results were obtained: (1) in the case of simple regression and using 28 plots, vertical-vertical (VV) polarization yielded the largest correlation coefficients (r) in estimating HT, BA, and TBM; (2) in the case of multiple regression, the horizontal-horizontal (HH) and VV polarization combination yielded the largest r value in estimating HT, while the VH and HH polarization combination yielded the largest r values in estimating BA and TBM. With the addition of a third polarization, the increase in r values is insignificant.
Midgley, S. L. W.; Olsen, M. K.; Bradley, A. S.; Pfister, O.
2010-11-15
We examine the feasibility of generating continuous-variable multipartite entanglement in an intracavity concurrent downconversion scheme that has been proposed for the generation of cluster states by Menicucci et al. [Phys. Rev. Lett. 101, 130501 (2008)]. By calculating optimized versions of the van Loock-Furusawa correlations we demonstrate genuine quadripartite entanglement and investigate the degree of entanglement present. Above the oscillation threshold the basic cluster state geometry under consideration suffers from phase diffusion. We alleviate this problem by incorporating a small injected signal into our analysis. Finally, we investigate squeezed joint operators. While the squeezed joint operators approach zero in the undepleted regime, we find that this is not the case when we consider the full interaction Hamiltonian and the presence of a cavity. In fact, we find that the decay of these operators is minimal in a cavity, and even depletion alone inhibits cluster state formation.
Li, J.; O`Brien, T.K.
1997-12-31
A shear deformation theory including residual thermal and moisture effects is developed for the analysis of either symmetric or asymmetric laminates with midplane edge delamination under torsional loading. The theory is based on an assumed displacement field that includes shear deformation. The governing equations and boundary conditions are obtained from the principle of virtual work. The analysis of the [90/({+-}45){sub n}/({+-}45){sub n}/90]{sub s} edge crack torsion (ECT) Mode 3 test layup indicates that thee are no hygrothermal effects on the Mode 3 strain energy release rate because the laminate, and both sublaminates above and below the delamination, are symmetric layups. A further parametric study reveals that some other layups can have negligible hygrothermal effects even when the sublaminates above and below the delamination are not symmetric about their own midplanes. However, these layups may suffer from distortion after the curing process. Another interesting set of layups investigated is a class of antisymmetric laminates with [{+-}({theta}/{theta} {minus} 90){sub 2}/{theta}]{sub n} layups. It is observed that when n takes on even numbers (2 and 4), both hygrothermal and Mode 1 effects can be neglected. From this point of view, these layups provide a way to determine the Mode 3 toughness between two dissimilar layers. However, when n takes on odd numbers (1 and 3), both hygrothermal and Mode 1 effects may be strong in these layups. In particular, when {theta} equals 45{degree}, the layups are free from both hygrothermal and Mode 1 effects irrespective of n.
NASA Technical Reports Server (NTRS)
Turner, R. E.
1977-01-01
For 36 hours during April 1975, an atmospheric variability experiment was conducted. This research effort supported an observational program in which rawinsonde data, radar data, and satellite data were collected from a network of 42 stations east of the Rocky Mountains at intervals of 3 hours. This program presents data with a high degree of time resolution over a spatially and temporally extensive network. Reduction of the experiment data is intended primarily as a documentation of the checking and processing of the data and should be useful to prospective users. Various flow diagrams of the data processing procedures are described, and a complete summary of the formulas used in the data processing is provided. A wind computation scheme designed to extract as much detailed wind information as possible from the unique experiment data set is discussed. The accuracy of the thermodynamic and wind data was estimated, and errors in the thermodynamic and wind data are given.
Shiozawa, Kazue; Watanabe, Manabu; Ikehara, Takashi; Matsukiyo, Yasushi; Kogame, Michio; Shinohara, Mie; Kikuchi, Yoshinori; Shinohara, Masao; Igarashi, Yoshinori; Sumino, Yasukiyo
2016-02-01
We aimed to determine the usefulness of arrival time parametric imaging (AtPI) using contrast-enhanced ultrasonography (CEUS) with Sonazoid in the evaluation of early response to sorafenib for hepatocellular carcinoma (HCC). Thirteen advanced HCC patients with a low α-fetoprotein (AFP) level (≤35 ng/mL) who received sorafenib for at least 4 weeks were enrolled in this study. CEUS was performed before and after treatment (2 weeks), and the images of the target lesion in the arterial phase were analyzed by AtPI. In the color mapping images obtained by AtPI, the mean arrival time of the contrast agent in the target lesion from the starting point (mean time: MT) was calculated. In each patient, the difference between MT before and MT 2 weeks after treatment was compared. Patients were assigned to the MT (+) group if the difference was 0 sec or greater (blood flow velocity of the lesion was reduced) and to the MT (-) group if it was less than 0 sec (blood flow velocity of the lesion was increased). Overall survival was compared between the 2 groups. In the MT (+) group (7 patients) and MT (-) group (6 patients), the median survival times were 307 and 208 days, respectively, a statistically significant difference. We suggest AtPI is useful for evaluating early response to sorafenib in advanced HCC patients with a low AFP level. PMID:27067685
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, N.; Dicaprio, C. J.
2010-12-01
We present a technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale InSAR Time Series analysis), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. As opposed to single-pixel InSAR time series techniques, MInTS takes advantage of both spatial and temporal characteristics of the deformation field. We use a weighting scheme which accounts for the presence of localized holes due to decorrelation or unwrapping errors in any given interferogram. We represent time-dependent deformation using a dictionary of general basis functions, capable of detecting both steady and transient processes. The estimation is regularized using a model resolution based smoothing so as to be able to capture rapid deformation where there are temporally dense radar acquisitions and to avoid oscillations during time periods devoid of acquisitions. MInTS also has the flexibility to explicitly parametrize known time-dependent processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). We use cross validation to choose the regularization penalty parameter in the inversion for the time-dependent deformation field. We demonstrate MInTS using a set of 63 ERS-1/2 and 29 Envisat interferograms for Long Valley Caldera.
NASA Astrophysics Data System (ADS)
Lee, L. A.; Carslaw, K. S.; Pringle, K. J.
2012-04-01
Global aerosol contributions to radiative forcing (and hence climate change) are persistently subject to large uncertainty in successive Intergovernmental Panel on Climate Change (IPCC) reports (Schimel et al., 1996; Penner et al., 2001; Forster et al., 2007). As such, more complex global aerosol models are being developed to simulate aerosol microphysics in the atmosphere. The uncertainty in global aerosol model estimates is currently estimated by measuring the diversity amongst different models (Textor et al., 2006, 2007; Meehl et al., 2007). The uncertainty at the process level due to the need to parameterise in such models is not yet understood, and it is difficult to know whether the added model complexity comes at a cost of high model uncertainty. In this work the model uncertainty and its sources due to the uncertain parameters are quantified using variance-based sensitivity analysis. Due to the complexity of a global aerosol model, we use Gaussian process emulation with a sufficient experimental design to make such a sensitivity analysis possible. The global aerosol model used here is GLOMAP (Mann et al., 2010), and we quantify the sensitivity of numerous model outputs to 27 expertly elicited uncertain model parameters describing emissions and processes such as growth and removal of aerosol. Using the R package DiceKriging (Roustant et al., 2010) along with the package sensitivity (Pujol, 2008), it has been possible to produce monthly global maps of model sensitivity to the uncertain parameters over the year 2008. Global model outputs estimated by the emulator are shown to be consistent with previously published estimates (Spracklen et al., 2010; Mann et al., 2010), but now we have an associated measure of parameter uncertainty and its sources. It can be seen that globally some parameters have no effect on the model predictions and any further effort in their development may be unnecessary, although a structural error in the model might also be identified. The
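A minimal Python sketch of the emulator-based variance decomposition, assuming a toy two-parameter model in place of GLOMAP and scikit-learn's Gaussian process in place of the R packages named above; the first-order Sobol index of each parameter is estimated purely from emulator predictions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical "expensive model": in the study this would be a full
# GLOMAP aerosol simulation; here a cheap analytic stand-in.
def model(x):
    return 5.0 * x[:, 0] + 0.5 * x[:, 1]

# Train the emulator on a small design of model runs.
X = rng.uniform(size=(60, 2))
gp = GaussianProcessRegressor(kernel=RBF(0.5),
                              normalize_y=True).fit(X, model(X))

def first_order_index(gp, i, n_outer=30, n_inner=200):
    """Crude Monte Carlo estimate of the first-order Sobol index S_i =
    Var_{x_i}( E[Y | x_i] ) / Var(Y), evaluated on the emulator."""
    cond_means = []
    for xi in np.linspace(0, 1, n_outer):
        Z = rng.uniform(size=(n_inner, 2))
        Z[:, i] = xi                     # fix parameter i, vary the rest
        cond_means.append(gp.predict(Z).mean())
    total = gp.predict(rng.uniform(size=(2000, 2))).var()
    return np.var(cond_means) / total

s1 = [first_order_index(gp, i) for i in range(2)]
```

Parameters with a vanishing index, like the second one here, are exactly the ones the abstract suggests need no further development effort (unless a structural error is suspected).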
Rayleigh-type parametric chemical oscillation
Ghosh, Shyamolina; Ray, Deb Shankar
2015-09-28
We consider a nonlinear chemical dynamical system of two phase space variables in a stable steady state. When the system is driven by a time-dependent sinusoidal forcing of a suitable scaling parameter at a frequency twice the output frequency and the strength of perturbation exceeds a threshold, the system undergoes sustained Rayleigh-type periodic oscillation, well known for parametric oscillation in pipe organs and distinct from the usual forced quasiperiodic oscillation of a damped nonlinear system, where the system is oscillatory even in the absence of any external forcing. Our theoretical analysis of the parametric chemical oscillation is corroborated by full numerical simulation of two well-known models of chemical dynamics, the chlorite-iodine-malonic acid and iodine-clock reactions.
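The threshold behavior described above can be illustrated with a linearized, parametrically driven damped oscillator (a Mathieu-type equation). This is a hypothetical stand-in for the chemical models, not the chlorite-iodine-malonic acid or iodine-clock system itself: driving the stiffness at twice the natural frequency destabilizes the steady state only when the modulation depth exceeds a threshold set by the damping.

```python
import numpy as np
from scipy.integrate import solve_ivp

def parametric_osc(eps, gamma=0.05, w0=1.0, t_end=100.0):
    """Damped oscillator whose stiffness is modulated at twice the
    natural frequency:  x'' + 2*gamma*x' + w0^2*(1 + eps*sin(2*w0*t))*x = 0.
    Returns the peak amplitude over the final 10 time units."""
    def rhs(t, y):
        x, v = y
        return [v, -2 * gamma * v
                   - w0**2 * (1 + eps * np.sin(2 * w0 * t)) * x]
    sol = solve_ivp(rhs, (0, t_end), [1e-3, 0.0], max_step=0.05)
    tail = sol.t > t_end - 10
    return np.abs(sol.y[0, tail]).max()

# Linear theory puts the instability threshold near eps_th = 4*gamma/w0 = 0.2.
below = parametric_osc(eps=0.05)   # sub-threshold: perturbation decays
above = parametric_osc(eps=0.6)    # super-threshold: oscillation grows
```

In the linearized sketch the super-threshold amplitude grows without bound; in the chemical systems the nonlinearity saturates this growth into the sustained periodic oscillation described above.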
Parametric Analysis of Life Support Systems for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly S.; Bagdigian, Bob M.
2010-01-01
Having adopted a flexible path approach to space exploration, the National Aeronautics and Space Administration is in a process of evaluating future targets for space exploration. In order to maintain the welfare of a crew during future missions, a suite of life support technology is responsible for oxygen and water generation, carbon dioxide control, the removal of trace concentrations of organic contaminants, processing and recovery of water, and the storage and reclamation of solid waste. For each particular life support subsystem, a variety of competing technologies either exist or are under aggressive development efforts. Each individual technology has strengths and weaknesses with regard to launch mass, power and cooling requirements, volume of hardware and consumables, and crew time requirements for operation. However, from a system level perspective, the favorability of each life support architecture is better assessed when the sub-system technologies are analyzed in aggregate. In order to evaluate each specific life support system architecture, the measure of equivalent system mass (ESM) was employed to benchmark system favorability. Moreover, the results discussed herein will be from the context of loop-closure with respect to the air, water, and waste sub-systems. Specifically, closure relates to the amount of consumables mass that crosses the boundary of the vehicle over the lifetime of a mission. As will be demonstrated in this manuscript, the optimal level of loop closure is heavily dependent upon mission requirements such as duration and the level of extra-vehicular activity (EVA) performed. Sub-system level trades were also considered as a function of mission duration to assess when increased loop closure is practical. Although many additional factors will likely merit consideration in designing life support systems for future missions, the ESM results described herein provide a context for future architecture design decisions toward a flexible path program.
Parametric Analysis of Life Support Systems for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly S.; Bagdigian, Bob M.
2011-01-01
The National Aeronautics and Space Administration is in a process of evaluating future targets for space exploration. In order to maintain the welfare of a crew during future missions, a suite of life support technology is responsible for oxygen and water generation, carbon dioxide control, the removal of trace concentrations of organic contaminants, processing and recovery of water, and the storage and reclamation of solid waste. For each particular life support subsystem, a variety of competing technologies either exist or are under aggressive development efforts. Each individual technology has strengths and weaknesses with regard to launch mass, power and cooling requirements, volume of hardware and consumables, and crew time requirements for operation. However, from a system level perspective, the favorability of each life support architecture is better assessed when the sub-system technologies are analyzed in aggregate. In order to evaluate each specific life support system architecture, the measure of equivalent system mass (ESM) was employed to benchmark system favorability. Moreover, the results discussed herein will be from the context of loop-closure with respect to the air, water, and waste sub-systems. Specifically, closure relates to the amount of consumables mass that crosses the boundary of the vehicle over the lifetime of a mission. As will be demonstrated in this manuscript, the optimal level of loop closure is heavily dependent upon mission requirements such as duration and the level of extra-vehicular activity (EVA) performed. Sub-system level trades were also considered as a function of mission duration to assess when increased loop closure is practical. Although many additional factors will likely merit consideration in designing life support systems for future missions, the ESM results described herein provide a context for future architecture design decisions toward a flexible path program.
A parametric Bayesian combination of local and regional information in flood frequency analysis
NASA Astrophysics Data System (ADS)
Seidou, O.; Ouarda, T. B. M. J.; Barbet, M.; Bruneau, P.; Bobée, B.
2006-11-01
Because of their impact on hydraulic structure design as well as on floodplain management, flood quantiles must be estimated with the highest precision given available information. If the site of interest has been monitored for a sufficiently long period (more than 30-40 years), at-site frequency analysis can be used to estimate flood quantiles with a fair precision. Otherwise, regional estimation may be used to mitigate the lack of data, but local information is then ignored. A commonly used approach to combine at-site and regional information is linear empirical Bayes estimation: under the assumption that both local and regional flood quantile estimators have a normal distribution, the empirical Bayesian estimator of the true quantile is the weighted average of both estimations. The weighting factor for each estimator is inversely proportional to its variance. We propose in this paper an alternative Bayesian method for combining local and regional information which provides the full probability density of quantiles and parameters. The application of the method is made with the generalized extreme value (GEV) distribution, but it can be extended to other types of extreme value distributions. In this method the prior distributions are obtained using a regional log linear regression model, and then local observations are used within a Markov chain Monte Carlo algorithm to infer the posterior distributions of parameters and quantiles. Unlike the empirical Bayesian approach, the proposed method works even with a single local observation. It also relaxes the hypothesis of normality of the local quantiles probability distribution. The performance of the proposed methodology is compared to that of local, regional, and empirical Bayes estimators on three generated regional data sets with different statistical characteristics. The results show that (1) when the regional log linear model is unbiased, the proposed method gives better estimations of the GEV quantiles and
Parametric Hazard Function Estimation.
1999-09-13
Version 00 Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
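A sketch of the Weibull branch of such an inference, using SciPy's parameterization rather than Phaze's. The failure times are simulated, and an increasing failure rate corresponds to a fitted shape parameter above 1.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Simulated failure times from a Weibull with shape 2. The hazard
# h(t) = (c/s) * (t/s)**(c-1) is increasing in t whenever the shape
# parameter c exceeds 1 (an "increasing failure rate" situation).
times = weibull_min.rvs(2.0, scale=100.0, size=500, random_state=rng)

# Maximum likelihood fit with the location parameter pinned at zero.
shape, loc, scale = weibull_min.fit(times, floc=0)

def hazard(t, c, s):
    """Weibull hazard (failure rate) function."""
    return (c / s) * (t / s) ** (c - 1)
```

A likelihood-ratio or confidence-region check of whether `shape` differs significantly from 1 is then the natural test of the increasing-failure-rate hypothesis mentioned above.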
Multipass optical parametric amplifier
Jeys, T.H.
1996-08-01
A compact, low-threshold, multipass optical parametric amplifier has been developed for the conversion of short-pulse (360-ps) 1064-nm Nd:YAG laser radiation into eye-safe 1572-nm radiation for laser ranging and radar applications. The amplifier had a pump threshold as low as 45 μJ, and at three to four times this threshold the amplifier converted 30% of the input 1064-nm radiation into 1572-nm output radiation. © 1996 Optical Society of America.
Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.
2010-11-15
We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~10^3 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sersic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ≈ 0.13 mag for the brightest galaxies, rising to ≈ 0.3 mag for galaxies at the faint end of our sample (B_T ≈ 16). The distribution of axial ratios of low-mass ('dwarf') galaxies bears a strong resemblance to the one observed for the higher-mass ('giant') galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.
Fractal and natural time analysis of geoelectrical time series
NASA Astrophysics Data System (ADS)
Ramirez Rojas, A.; Moreno-Torres, L. R.; Cervantes, F.
2013-05-01
In this work we show the analysis of geoelectric time series linked with two earthquakes of M=6.6 and M=7.4. These time series were monitored at the South Pacific Mexican coast, which is the most important active seismic subduction zone in México. The geoelectric time series were analyzed by using two complementary methods: a fractal analysis, by means of detrended fluctuation analysis (DFA) in conventional time, and the power spectrum defined in the natural time domain (NTD). In conventional time we found long-range correlations prior to the EQ occurrences, and simultaneously in NTD the behavior of the power spectrum suggests the possible existence of seismic electric signals (SES) similar to those previously reported in equivalent time series monitored in Greece prior to earthquakes of relevant magnitude.
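A minimal DFA implementation illustrating the fractal-analysis step; the input here is synthetic white noise (expected scaling exponent alpha ≈ 0.5), not the geoelectric records, in which alpha rising above 0.5 would signal the long-range correlations reported before the earthquakes.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns F(n) for each window size n.

    The scaling exponent alpha is the slope of log F(n) vs log n;
    alpha ~ 0.5 for uncorrelated noise, alpha > 0.5 for long-range
    correlated signals.
    """
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        ms = []
        for k in range(n_seg):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(0)
scales = np.array([16, 32, 64, 128, 256])
Fn = dfa(rng.standard_normal(2**14), scales)
alpha = np.polyfit(np.log(scales), np.log(Fn), 1)[0]
```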
Time Analysis for Probabilistic Workflows
Czejdo, Bogdan; Ferragut, Erik M
2012-01-01
There are many theoretical and practical results in the area of workflow modeling, especially when more formal workflows are used. In this paper we focus on probabilistic workflows. We show algorithms for time computations in probabilistic workflows. With activity times modeled more precisely, we can improve work cooperation and its analysis, including simulation and visualization.
Analysis of time series from stochastic processes
Gradisek; Siegert; Friedrich; Grabec
2000-09-01
Analysis of time series from stochastic processes governed by a Langevin equation is discussed. Several applications for the analysis are proposed based on estimates of drift and diffusion coefficients of the Fokker-Planck equation. The coefficients are estimated directly from a time series. The applications are illustrated by examples employing various synthetic time series and experimental time series from metal cutting. PMID:11088809
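The drift and diffusion estimation described above can be sketched directly: the Kramers-Moyal coefficients of the Fokker-Planck equation are conditional moments of the increments. The example below uses a synthetic Ornstein-Uhlenbeck process as a hypothetical stand-in for the metal-cutting series.

```python
import numpy as np

def estimate_drift_diffusion(x, dt, bins=40):
    """Estimate the drift D1 and diffusion D2 of a Langevin process
    from a time series via conditional moments of the increments:
        D1(x) ~ <dx | x> / dt,    D2(x) ~ <dx^2 | x> / (2 dt).
    Bins with too few samples are left as NaN."""
    dx = np.diff(x)
    centers = np.linspace(x.min(), x.max(), bins)
    idx = np.digitize(x[:-1], centers)
    D1 = np.full(bins, np.nan)
    D2 = np.full(bins, np.nan)
    for b in range(bins):
        sel = idx == b
        if sel.sum() > 500:
            D1[b] = dx[sel].mean() / dt
            D2[b] = (dx[sel] ** 2).mean() / (2 * dt)
    return centers, D1, D2

# Synthetic Ornstein-Uhlenbeck series: dx = -x dt + sqrt(2) dW,
# so the true drift is D1(x) = -x and the diffusion is D2(x) = 1.
rng = np.random.default_rng(0)
dt, n = 1e-3, 500_000
x = np.empty(n); x[0] = 0.0
noise = rng.standard_normal(n - 1) * np.sqrt(2 * dt)
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + noise[i]

centers, D1, D2 = estimate_drift_diffusion(x, dt)
```

Recovering D1(x) ≈ -x (slope -1) and D2(x) ≈ 1 from the series alone is the essence of the approach; for experimental data the same estimator is applied without assuming a functional form.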
Hayasaka, Satoru; Du, An-Tao; Duarte, Audrey; Kornak, John; Jahng, Geon-Ho; Weiner, Michael W.; Schuff, Norbert
2007-01-01
We developed a new flexible approach for a co-analysis of multimodal brain imaging data using a non-parametric framework. In this approach, results from separate analyses on different modalities are combined using a combining function and assessed with a permutation test. This approach identifies several cross-modality relationships, such as concordance and dissociation, without explicitly modeling the correlation between modalities. We applied our approach to structural and perfusion MRI data from an Alzheimer’s disease (AD) study. Our approach identified areas of concordance, where both gray matter (GM) density and perfusion decreased together, and areas of dissociation, where GM density and perfusion did not decrease together. In conclusion, these results demonstrate the utility of this new non-parametric method to quantitatively assess the relationships between multiple modalities. PMID:16412666
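A toy sketch of the combining-function-plus-permutation idea, with a simple sum of group differences as the combiner. The data and combiner are hypothetical, not the study's gray matter density and perfusion maps; the key point is that the null distribution of the combined statistic is obtained by permuting group labels jointly across modalities, so no cross-modality correlation model is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_permutation_test(a, b, n_perm=2000):
    """Combine two modality-wise group differences with a single
    combining function and calibrate it with a permutation test."""
    def stat(xa, xb, labels):
        d = lambda x: x[labels].mean() - x[~labels].mean()
        return d(xa) + d(xb)             # concordance-sensitive combiner
    n = len(a)
    labels = np.zeros(n, dtype=bool)
    labels[: n // 2] = True              # first half = patient group
    observed = stat(a, b, labels)
    # Permute group labels jointly across both modalities.
    null = [stat(a, b, rng.permutation(labels)) for _ in range(n_perm)]
    return observed, np.mean(np.abs(null) >= abs(observed))

# Hypothetical data: the first 20 subjects show lower values in both
# "modalities" (a concordant decrease), the remaining 20 are controls.
a = rng.standard_normal(40); a[:20] -= 1.5
b = rng.standard_normal(40); b[:20] -= 1.5
obs, pval = combine_permutation_test(a, b)
```

Swapping the combiner (e.g., to a difference of the two standardized statistics) targets dissociation instead of concordance, within the same permutation framework.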
NASA Astrophysics Data System (ADS)
Pageot, Damien; Operto, Stéphane; Vallée, Martin; Brossier, Romain; Virieux, Jean
2013-06-01
The development of dense networks of broad-band seismographs makes teleseismic data amenable to full-waveform inversion (FWI) methods for high-resolution lithospheric imaging. Compared to scattered-field migration, FWI seeks to involve the full seismic wavefield in the inversion. We present a parametric analysis of 2-D frequency-domain FWI in the framework of lithospheric imaging from teleseismic data to identify the main factors that impact on the quality of the reconstructed compressional (P)-wave and shear (S)-wave speed models. Compared to controlled-source seismology, the main adaptation of FWI to the teleseismic configuration consists of the implementation, with a scattered-field formulation, of plane-wave sources that impinge on the base of the lithospheric target located below the receiver network at an arbitrary incidence angle. Seismic modelling is performed with a hp-adaptive discontinuous Galerkin method on unstructured triangular mesh. A quasi-Newton inversion algorithm provides an approximate accounting for the Hessian operator, which contributes to reduce the footprint of the coarse acquisition geometry in the imaging. A versatile algorithm to compute the gradient of the misfit function with the adjoint-state method allows for abstraction between the forward-problem operators and the meshes that are used during seismic modelling and inversion, respectively. An approximate correction for obliquity is derived for future application to real teleseismic data under the two-dimensional approximation. Comparisons between the characteristic scales involved in exploration geophysics and in teleseismic seismology suggest that the resolution gain provided by full waveform technologies should be of the same order of magnitude for both applications. We first show the importance of the surface-reflected wavefield to dramatically improve the resolving power of FWI by combining tomography-like and migration-like imaging through the incorporation of the forward-scattered and the
Parametric Time-Dependent Navier-Stokes Computations for a YAV-8B Harrier in Ground Effect
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Pandya, Shishir; Ahmad, Jasim; Murman, Scott; Kwak, Dochan (Technical Monitor)
2002-01-01
The Harrier Jump Jet has the distinction of being the only powered-lift aircraft in the free world to achieve operational status and to have flown in combat. This V/STOL aircraft can take off and land vertically or utilize very short runways by directing its four exhaust nozzles towards the ground. Transition to forward flight is achieved by rotating these nozzles into a horizontal position. Powered-lift vehicles have certain advantages over conventional strike fighters. Their V/STOL capabilities allow for safer carrier operations, smaller carrier size, and quick reaction time for troop support. Moreover, they are not dependent on vulnerable land-based runways. The AV-8A Harrier first entered service in the British Royal Air Force (RAF) during 1969, and the U.S. Marine Corps (USMC) in 1971. The AV-8B was a redesign to achieve improved payload capacity, range, and accuracy. This modified design first entered service with the USMC and RAF in 1985. The success and unique capabilities of the Harrier have prompted the design of a powered-lift version of the Joint Strike Fighter (JSF). The flowfield for the Harrier near the ground during low-speed or hover flight operations is very complex and time-dependent. A sketch of this flowfield is shown. Warm air from the fan is exhausted from the front nozzles, while a hot air/fuel mixture from the engine is exhausted from the rear nozzles. These jets strike the ground and move out radially, forming a ground jet-flow. The ambient freestream, due to low-speed forward flight or a headwind during hover, opposes the jet-flow. This interaction causes the flow to separate and form a ground vortex. The multiple jets also interact with each other near the ground and form an upwash or jet fountain, which strikes the underside of the fuselage. If the aircraft is sufficiently close to the ground, the inlet can ingest ground debris and hot gases from the fountain and ground vortex. This Hot Gas Ingestion (HGI) can cause a sudden loss of
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant
2011-03-01
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE): a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR), thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on twelve 3-Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space, which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for Ca
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1984-01-01
The important algorithmic issues necessary to achieve a real-time flutter monitoring system are addressed, namely: guidelines for choosing appropriate model forms, reduction of the parameter-convergence transient, handling of multiple modes, the effect of overparameterization, and estimate-accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter-parameter Cramer-Rao estimate-error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation, and flight-test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.
Time-Dependent Reliability Analysis
1999-10-27
FRANTIC-3 was developed to evaluate system unreliability using time-dependent techniques. The code provides two major options: to evaluate standby system unavailability or, in addition to the unavailability, to calculate the total system failure probability by including both the unavailability of the system on demand and the probability that it will operate for an arbitrary time period following the demand. The FRANTIC-3 time-dependent reliability models provide a large selection of repair and testing policies applicable to standby or continuously operating systems consisting of periodically tested, monitored, and non-repairable (non-testable) components. Time-dependent and test-frequency-dependent failures, as well as demand-stress-related failure, test-caused degradation and wear-out, test-associated human errors, test deficiencies, test override, unscheduled and scheduled maintenance, component renewal and replacement policies, and test strategies can be prescribed. The conditional system unavailabilities associated with the downtimes of user-specified failed components are also evaluated. Optionally, the code can perform a sensitivity study of system unavailability or total failure probability with respect to the failure characteristics of the standby components.
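The unavailability arithmetic behind such codes can be illustrated for the simplest case the abstract covers: a periodically tested standby component with a constant failure rate. The rate, test interval, and perfect-repair-at-test assumption below are illustrative placeholders, not FRANTIC-3's actual models:

```python
import math

def standby_unavailability(lam, test_interval, t):
    """Instantaneous unavailability of a periodically tested standby
    component with constant failure rate `lam`, assuming each test detects
    and perfectly repairs any latent failure (renewal at each test)."""
    time_since_test = t % test_interval
    return 1.0 - math.exp(-lam * time_since_test)

def average_unavailability(lam, test_interval, n=10_000):
    """Time-averaged unavailability over one test interval (midpoint rule)."""
    dt = test_interval / n
    total = sum(standby_unavailability(lam, test_interval, (i + 0.5) * dt)
                for i in range(n))
    return total * dt / test_interval

lam = 1e-4   # assumed failure rate, per hour
T = 720.0    # assumed test interval, hours (monthly testing)
q_avg = average_unavailability(lam, T)
# For lam*T << 1 this approaches the familiar rule of thumb lam*T/2
```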
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y.; Dicaprio, C. J.
2009-12-01
We describe progress in the development of MInTS (Multiscale analysis of InSAR Time Series), an approach to constructing self-consistent time-dependent deformation observations from repeated satellite-based InSAR images of a given region. MInTS relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. In essence, MInTS allows one to consider all data at the same time, as opposed to one pixel at a time, thereby taking advantage of both spatial and temporal characteristics of the deformation field. This approach also permits a consistent treatment of all data independent of the presence of localized holes due to unwrapping issues in any given interferogram. Specifically, the presence of holes is accounted for through a weighting scheme that accounts for the extent of actual data versus the area of holes associated with any given wavelet. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows the temporal parametrization to include a set of general functions in order to account for unexpected processes. We allow for various forms of model regularization, using a cross-validation approach to select penalty parameters. We also experiment with the use of sparsity-inducing regularization as a way to select from a large dictionary of time functions. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate
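The temporal parametrization described above (secular rate, seasonal oscillations, co-seismic steps) reduces, for each pixel or wavelet coefficient, to a linear least-squares problem. A minimal synthetic sketch, with made-up parameter values and none of the wavelet, weighting, or regularization machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 60)   # acquisition times in years (synthetic)
t_eq = 2.3                      # assumed co-seismic step time

# Design matrix: offset, secular rate, annual sine/cosine, Heaviside step
G = np.column_stack([
    np.ones_like(t),
    t,
    np.sin(2 * np.pi * t),
    np.cos(2 * np.pi * t),
    (t >= t_eq).astype(float),
])

# Illustrative "true" model: offset, rate, seasonal terms, step (mm units)
m_true = np.array([1.0, 3.0, 0.5, -0.2, 4.0])
d = G @ m_true + 0.3 * rng.standard_normal(t.size)   # noisy displacements

# MInTS would solve a weighted, regularized version per wavelet coefficient;
# here a plain least-squares fit illustrates the temporal model
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
```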
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-01
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image
Time to the Doctorate: Multilevel Discrete-Time Hazard Analysis
ERIC Educational Resources Information Center
Wao, Hesborn O.
2010-01-01
Secondary data on 1,028 graduate students nested within 24 programs and admitted into either a Ph. D. or Ed. D. program between 1990 and 2006 at an American public university were used to illustrate the benefits of employing multilevel discrete-time hazard analysis in understanding the timing of doctorate completion in Education and the factors…
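Discrete-time hazard analysis models, for each year, the probability that a student completes the doctorate given that they have not yet done so. A single-level sketch on synthetic data (the study's multilevel version would add program-level effects and covariates; the hazard values below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
true_hazard = np.array([0.05, 0.10, 0.20, 0.30, 0.40])  # assumed per-year completion probability

# Simulate discrete time-to-doctorate; 0 = still enrolled after year 5 (censored)
years = np.zeros(n, dtype=int)
for i in range(n):
    for yr, h in enumerate(true_hazard, start=1):
        if rng.random() < h:
            years[i] = yr
            break

# Empirical discrete-time hazard: completions in year j / students still at risk
# (censored students survive all 5 years, so they stay in every risk set)
hazard_hat = []
at_risk = n
for yr in range(1, 6):
    events = int(np.sum(years == yr))
    hazard_hat.append(events / at_risk)
    at_risk -= events
```

A multilevel analysis would replace the empirical proportions with a logistic regression on the person-period data set, with random intercepts per program.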
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and also to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed by considering a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew considerable attention from the FFA research community, the basic limitation was that the analyses were performed with the implementation of only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, a nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining a best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is
Mossahebi, Sina; Zhu, Simeng; Chen, Howard; Shmuylovich, Leonid; Ghosh, Erina; Kovács, Sándor J
2014-01-01
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provides a new window for quantitative diastolic function assessment. Echocardiography is the agreed-upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns is extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load-varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored
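The PDF forward model is the damped harmonic oscillator the abstract describes: with a unit inertial term, the E-wave velocity contour is the velocity of a spring released from displacement xo with stiffness k and damping c. A sketch of the underdamped solution with illustrative parameter values (not fitted clinical data):

```python
import numpy as np

def pdf_e_wave(t, xo, c, k):
    """Transmitral velocity predicted by the PDF formalism in the
    underdamped regime: v(t) from x'' + c x' + k x = 0 with x(0) = xo,
    v(0) = 0 (unit inertial term assumed)."""
    w = np.sqrt(k - c**2 / 4.0)   # damped angular frequency; requires k > c^2/4
    return xo * (k / w) * np.exp(-c * t / 2.0) * np.sin(w * t)

t = np.linspace(0.0, 0.25, 250)   # seconds, a typical E-wave duration
xo, c, k = 10.0, 18.0, 250.0      # assumed illustrative PDF parameters
v = pdf_e_wave(t, xo, c, k)
t_peak = t[np.argmax(v)]          # acceleration time of the E-wave
```

Fitting routines invert this: given a digitized E-wave contour, they solve for the (xo, c, k) triple that minimizes the misfit to v(t).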
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Siow, Y. K.; Marek, C. J.
2005-01-01
Today's modern aircraft are based on air-breathing jet propulsion systems, which use moving fluids to transform the energy they carry into power. Throughout aero-vehicle evolution, improvements have been made to engine efficiency and pollutant reduction. The major advantages associated with the addition of an Interstage Turbine Burner (ITB) are an increase in thermal efficiency and a reduction in NOx emission. A lower temperature peak in the main combustor results in lower thermal NOx emission and a lower amount of cooling air required. This study focuses on a parametric (on-design) cycle analysis of a dual-spool, separate-flow turbofan engine with an ITB. The ITB considered in this paper is a relatively new concept in modern jet engine propulsion. The ITB serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., in the transition duct. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, and high-pressure turbine inlet temperature, to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB. A Visual Basic program, Microsoft Excel macro code, and Microsoft Excel neuron code are used with Microsoft Excel software to plot engine performance versus engine design parameters. This program computes and plots the data sequentially without forcing users to open other types of plotting programs. A user's manual on how to use the program is also included in this report. Furthermore, this stand-alone program is written in conjunction with an off-design program which is an extension of this study. The computed result of a selected design
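The on-design cycle analysis described above maps design parameters to performance parameters. A heavily simplified, textbook-style ideal-cycle sketch (no ITB, all component efficiencies unity, illustrative inputs) shows the shape of such a calculation; it is not the report's actual program:

```python
import math

def ideal_turbofan_on_design(M0, T0, pi_c, pi_f, alpha, Tt4,
                             gamma=1.4, cp=1004.5, hPR=42.8e6, R=287.0):
    """Ideal on-design separate-flow turbofan cycle. Inputs: flight Mach
    number M0, ambient temperature T0 [K], compressor and fan pressure
    ratios pi_c, pi_f, bypass ratio alpha, turbine inlet temperature Tt4 [K].
    Returns specific thrust F/m0 [N*s/kg] and TSFC [kg/(N*s)]."""
    a0 = math.sqrt(gamma * R * T0)                  # ambient speed of sound
    tau_r = 1 + (gamma - 1) / 2 * M0**2             # ram temperature ratio
    tau_c = pi_c ** ((gamma - 1) / gamma)
    tau_f = pi_f ** ((gamma - 1) / gamma)
    tau_lam = Tt4 / T0
    # Power balance: one turbine drives both compressor and fan (ideal)
    tau_t = 1 - tau_r / tau_lam * (tau_c - 1 + alpha * (tau_f - 1))
    # Exit velocities (perfectly expanded nozzles, p9 = p19 = p0)
    v9_a0 = math.sqrt(2 / (gamma - 1) * (tau_r * tau_c * tau_t - 1)
                      * tau_lam / (tau_r * tau_c))
    v19_a0 = math.sqrt(2 / (gamma - 1) * (tau_r * tau_f - 1))
    spec_thrust = a0 / (1 + alpha) * ((v9_a0 - M0) + alpha * (v19_a0 - M0))
    f = cp * T0 * (tau_lam - tau_r * tau_c) / hPR   # core fuel-air ratio
    tsfc = f / ((1 + alpha) * spec_thrust)
    return spec_thrust, tsfc

# Assumed cruise-like design point
F_m0, S = ideal_turbofan_on_design(M0=0.8, T0=216.7, pi_c=30.0,
                                   pi_f=1.6, alpha=8.0, Tt4=1670.0)
```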
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
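The CERs summarized above combine a power law in aperture with a technology-maturation discount (the 50%-per-17-years rate is the abstract's). A sketch with placeholder coefficients, since the paper's fitted values are not reproduced here:

```python
def telescope_cost(aperture_m, year, a=1.0, b=1.7, ref_year=2010):
    """Illustrative single-variable CER: cost = a * D^b, discounted for
    technology maturation at 50% per 17 years. `a` (scale), `b` (diameter
    exponent), and ref_year are placeholders, not fitted coefficients.
    An exponent b < 2 reproduces the finding that larger telescopes cost
    less per square meter of collecting aperture."""
    return a * aperture_m ** b * 0.5 ** ((year - ref_year) / 17.0)

# A 4 m telescope costs more than a 2 m one, but less per unit area,
# and the same design costs half as much 17 years later.
c2 = telescope_cost(2.0, 2010)
c4 = telescope_cost(4.0, 2010)
c4_later = telescope_cost(4.0, 2027)
```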
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.
1993-01-01
Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have poor communication-to-computation ratios. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + lambda x (shape). In a 2 (or 3) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively, surface) of that subregion. Shape is a measure of communication overhead, and the parameter lambda permits us to trade off load imbalance against communication overhead. When lambda is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-d. Load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and lambda the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log(sup 3)p) time on a p processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
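A single PBD cut in 2-D can be sketched as follows. Here load is the point count and shape a bounding-box half-perimeter, simple stand-ins for the paper's definitions; with lambda = 0 the cut reduces to plain (median) binary dissection:

```python
import numpy as np

def best_vertical_cut(points, lam):
    """One step of Parametric Binary Dissection in 2-D: choose the vertical
    cut minimizing, over the two halves, max(load + lam * shape), where
    load = number of points and shape = bounding-box half-perimeter
    (a proxy for communication cost)."""
    xs = np.sort(points[:, 0])
    best_cut, best_cost = None, float("inf")
    for i in range(1, len(xs)):
        cut = 0.5 * (xs[i - 1] + xs[i])
        cost = 0.0
        for half in (points[points[:, 0] < cut], points[points[:, 0] >= cut]):
            load = len(half)
            shape = np.ptp(half[:, 0]) + np.ptp(half[:, 1])  # half-perimeter
            cost = max(cost, load + lam * shape)
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost

rng = np.random.default_rng(2)
pts = rng.random((200, 2))
cut0, cost0 = best_vertical_cut(pts, lam=0.0)    # plain binary dissection
cut1, cost1 = best_vertical_cut(pts, lam=50.0)   # shape-aware cut
```

Recursing on each half to depth d yields 2^d subregions; the full algorithm achieves this in O(max(n log n, de)) time rather than by this O(n^2) scan.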
Javed, Sajid; Marsay, Leanne; Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham; Kempsell, Karen E.
2016-01-01
A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model in Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge, and labelled, purified RNAs were hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an ‘early’ FOS-associated response, to a ‘late’ predominantly type I interferon-driven response, with coincident reduction of expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype, e.g. CD163. The animals in the study were of different lineages, and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons
Peterson, James T.
1999-12-01
Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical response data. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes four analytical techniques: generalized logit modeling; binary classification tree; extended K-nearest neighbor classification; and modular neural network.
Ensemble vs. time averages in financial time series analysis
NASA Astrophysics Data System (ADS)
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding-interval technique, which assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time-averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time-averaging techniques. We find that ensemble-averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-averaging approaches will yield new insight into the study of financial markets’ dynamics.
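The paper's central point can be reproduced on a toy model: a process whose increment variance depends on time of day. Averaging across days at fixed intraday time recovers the variance profile, while a single time average over the concatenated series only sees the mean level (synthetic parameters throughout):

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_ticks = 500, 100
tod = np.arange(n_ticks)
# Assumed intraday volatility profile with a peak near tick 20
sigma = 1.0 + 2.0 * np.exp(-((tod - 20) ** 2) / 50.0)

# Each trading day is an independent realization of the same
# non-stationary (variable-diffusion) increment process
increments = rng.standard_normal((n_days, n_ticks)) * sigma

# Ensemble average: mean-square increment at fixed time-of-day across days
ens_var = np.mean(increments ** 2, axis=0)

# Sliding-interval (time) average over one concatenated series:
# the intraday structure is washed out into a single number
series = increments.ravel()
time_var = np.mean(series ** 2)
```

`ens_var` tracks `sigma**2`, including its intraday peak, while `time_var` sits near the mean of `sigma**2` and carries no time-of-day information.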
NASA Astrophysics Data System (ADS)
Bodnar, J. L.; Brahim, S.; Grossel, P.; Detalle, V.
2012-11-01
The aim of this article is to explore the thermal-diffusivity measurement possibilities, under low energy constraints, offered by a rear-face random photothermal analysis. A theoretical study first demonstrates the method's feasibility. It then shows that the random method allows a good estimation of thermal diffusivity with a low temperature rise in the studied sample. This constitutes an advantage for the thermophysical characterization of fragile materials (artworks, biological samples). An experimental study of the thermophysical characterization of a glass sample validates the possibilities of the random photothermal method for thermal-diffusivity measurements.
A flexible time recording and time correlation analysis system
NASA Astrophysics Data System (ADS)
Shenhav, Nathan J.; Leiferman, Gabriel; Segal, Yitzhak; Notea, Amos
1983-02-01
A system was developed to digitize and record the time intervals between detection event pulses fed to its input channels from a detection device. The accumulated data are transferred continuously in real time to a disc through a PDP 11/34 minicomputer. Even though the system was designed for a specific purpose, namely the comparative study of passive neutron nondestructive assay methods, its features characterize it as a general-purpose time-series recorder. The time correlation analysis is performed by software after completion of the data accumulation. The digitizing clock period is selectable, and any value larger than a minimum of 100 ns may be chosen. Bursts of up to 128 events with a frequency up to 10 MHz may be recorded. With the present recorder-minicomputer combination, the maximal average recording frequency is 40 kHz.
Statistical issues in the analysis of adverse events in time-to-event data.
Allignol, Arthur; Beyersmann, Jan; Schmoor, Claudia
2016-07-01
The aim of this work is to shed some light on common issues in the statistical analysis of adverse events (AEs) in clinical trials, when the main outcome is a time-to-event endpoint. To begin, we show that AEs are always subject to competing risks. That is, the occurrence of a certain AE may be precluded by occurrence of the main time-to-event outcome or by occurrence of another (fatal) AE. This has raised concerns about 'informative' censoring. We show that, in general, neither simple proportions nor Kaplan-Meier estimates of AE occurrence should be used; common survival techniques for hazards that censor the competing event remain valid, but they are incomplete analyses. They must be complemented by an analogous analysis of the competing event for inference on the cumulative AE probability. The commonly used incidence rate (or incidence density) is a valid estimator of the AE hazard, assuming it to be time constant. An estimator of the cumulative AE probability can be derived if the incidence rate of the AE is combined with an estimator of the competing hazard. We discuss less restrictive analyses using non-parametric and semi-parametric approaches. We first consider time-to-first-AE analyses and then briefly discuss how they can be extended to the analysis of recurrent AEs. We give a practical presentation, with illustration of the methods by a simple example. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26929180
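In the simplest setting the abstract describes, a constant AE hazard alpha and a constant competing hazard beta, the cumulative AE probability has the closed form P(t) = alpha/(alpha+beta) * (1 - exp(-(alpha+beta)t)). The sketch below uses hypothetical rates and follow-up horizon to contrast this competing-risks estimator with the naive one that ignores the competing event:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, tau = 0.10, 0.05, 12.0   # hypothetical AE hazard, competing hazard, horizon
n = 200_000

t_ae = rng.exponential(1 / alpha, n)    # latent time to the AE
t_comp = rng.exponential(1 / beta, n)   # latent time to the competing event
t_obs = np.minimum(np.minimum(t_ae, t_comp), tau)
is_ae = (t_ae < t_comp) & (t_ae < tau)
is_comp = (t_comp <= t_ae) & (t_comp < tau)

# incidence rates: events divided by total person-time at risk
alpha_hat = is_ae.sum() / t_obs.sum()
beta_hat = is_comp.sum() / t_obs.sum()

# cumulative AE probability combining both rates (competing-risks formula)
p_ae = alpha_hat / (alpha_hat + beta_hat) * (1 - np.exp(-(alpha_hat + beta_hat) * tau))

# naive estimate that ignores the competing event
p_naive = 1 - np.exp(-alpha_hat * tau)
print(p_ae, p_naive)
```

The competing-risks estimate agrees with the empirical fraction of subjects who experience the AE, while the naive estimate overstates it, which is the point of the cautionary discussion above.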
Nonlinear Analysis of Surface EMG Time Series
NASA Astrophysics Data System (ADS)
Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-04-01
Applications of nonlinear analysis of surface electromyography time series of patients with and without low back pain are presented. Limitations of the standard methods based on the power spectrum are discussed.
Compton suppression through rise-time analysis.
Selvi, S; Celiktas, C
2007-11-01
We studied Compton suppression for 60Co and 137Cs radioisotopes using a signal-selection criterion based on contrasting the fall time of the signals composing the photopeak with those composing the Compton continuum. The fall-time criterion is applied through pulse-shape analysis, observing the change in the fall times of the gamma-ray pulses. This change is determined by measuring the changes in the rise times related to the fall time of the scintillator and the timing signals related to the fall time of the input signals. We showed that Compton continuum suppression is best achieved via precise timing adjustment of an analog rise-time analyzer connected to a NaI(Tl) scintillation spectrometer. PMID:17703943
A parametric approach for the estimation of the instantaneous speed of rotating machinery
NASA Astrophysics Data System (ADS)
Rodopoulos, Konstantinos; Yiakopoulos, Christos; Antoniadis, Ioannis
2014-02-01
A parametric method is proposed for the estimation of the instantaneous speed of rotating machines. The method belongs to the class of eigenvalue-based parametric signal-processing methods. The major advantage of parametric methods over frequency-domain or time-frequency-domain based methods is their increased resolution and their reduced computational cost. Moreover, an advantage of eigenvalue-based methods over other parametric methods is their robustness to noise. Sensitivity analysis for the key parameters of the proposed method is performed, including the sampling frequency, the signal length and the robustness to noise. The effectiveness of the method is demonstrated on vibration measurements from a test rig during start-up and run-down, as well as during variations of the speed of a motorcycle engine. Compared to the Hilbert Transform and to the Discrete Energy Separation Algorithm (DESA), the proposed approach exhibits better behavior, while remaining computationally simple and able to be implemented analytically, even online.
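The abstract does not reproduce the paper's algorithm. As a minimal stand-in for the eigenvalue-based family it references, the sketch below applies Pisarenko harmonic decomposition, a classical eigenvalue method, to estimate the frequency of a single noisy sinusoid from the eigenvector of the smallest eigenvalue of a small autocorrelation matrix; signal parameters are hypothetical:

```python
import numpy as np

def pisarenko_freq(x):
    """Estimate the normalized frequency of a single real sinusoid in
    noise from the eigenvector of the smallest eigenvalue of the 3x3
    autocorrelation matrix (Pisarenko harmonic decomposition, p = 1)."""
    # biased autocorrelation estimates at lags 0, 1, 2
    r = [np.dot(x[: len(x) - k], x[k:]) / len(x) for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    vals, vecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    a = vecs[:, 0]                     # noise-subspace eigenvector
    roots = np.roots(a)                # zeros at exp(+/- i*omega) on the unit circle
    return np.abs(np.angle(roots[0])) / (2 * np.pi)

rng = np.random.default_rng(2)
n = np.arange(4096)
f0 = 0.12                              # true frequency in cycles/sample
x = np.cos(2 * np.pi * f0 * n) + 0.05 * rng.standard_normal(n.size)
f_est = pisarenko_freq(x)
print(f_est)
```

Applied to a sliding window of a vibration signal, such an estimate yields an instantaneous-frequency track, which is the basic idea the proposed method refines.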
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data are disclosed. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Fractal analysis of time varying data
Vo-Dinh, Tuan; Sadana, Ajit
2002-01-01
Characteristics of time varying data, such as an electrical signal, are analyzed by converting the data from a temporal domain into a spatial domain pattern. Fractal analysis is performed on the spatial domain pattern, thereby producing a fractal dimension D_F. The fractal dimension indicates the regularity of the time varying data.
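A standard concrete instance of this idea is box counting: cover the spatial pattern with boxes of decreasing size and regress the log of the occupied-box count on the log of the inverse box size. The sketch below is illustrative only, not the patented method; a straight line segment should come out with a dimension close to 1:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set by
    regressing log(occupied boxes) on log(1 / box size)."""
    counts = []
    for s in scales:
        # index of the box of side s containing each point
        boxes = np.unique(np.floor(points / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# a straight line segment in the plane should have dimension close to 1
t = np.linspace(0.0, 1.0, 20_000)
line = np.column_stack([t, 0.5 * t])
d = box_counting_dimension(line, scales=[0.1, 0.05, 0.02, 0.01, 0.005])
print(d)
```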
Analysis of soybean flowering-time genes
Technology Transfer Automated Retrieval System (TEKTRAN)
Control of soybean flowering time is important for geographic adaptation, and maximizing yield. RT-PCR analysis was performed using primers synthesized for a number of putative flowering-time genes based on homology of soybean EST and genomic sequences to Arabidopsis genes. RNA for cDNA synthesis ...
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes nonlinearity of a medium and is known as a super-directivity loudspeaker. The parametric loudspeaker is one of the prominent applications of nonlinear ultrasonics. So far, its applications have been limited to monaural reproduction sound systems for public address in museums, stations, streets, etc. In this paper, we discuss characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon; the inner and outer diameters were 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones and pink noise. Three young males listened to test signals 10 times in each listening condition. Subjective test results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers. Localization was similar to that with the ordinary dynamic loudspeakers, except for the case of sinusoidal waves with ITD. It was determined that the parametric loudspeaker could eliminate the conflict between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppressed the crosstalk components.
NASA Astrophysics Data System (ADS)
Rawles, Christopher; Thurber, Clifford
2015-08-01
We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst-identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is done through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data sets, respectively.
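The score function described above is easy to prototype. In the sketch below, similarity is taken as an inverse squared Euclidean distance between scaled absolute amplitudes (an illustrative choice; the published definition differs in detail), and the event waveforms, window length, and reference picks are all synthetic assumptions:

```python
import numpy as np

def scaled_abs(w):
    """Feature vector: scaled absolute amplitudes of a window."""
    w = np.abs(w)
    return w / (np.linalg.norm(w) + 1e-12)

def nn_pick(signal, positives, negatives, wlen):
    """Return the centre of the window maximizing the ratio of summed
    similarity to positive (onset) examples over summed similarity to
    negative (non-onset) examples."""
    pos = [scaled_abs(p) for p in positives]
    neg = [scaled_abs(q) for q in negatives]

    def sim(f, refs):
        # inverse squared Euclidean distance as a similarity measure
        return sum(1.0 / (1e-6 + np.sum((f - r) ** 2)) for r in refs)

    best_score, best_pos = -np.inf, 0
    for i in range(len(signal) - wlen + 1):
        f = scaled_abs(signal[i : i + wlen])
        score = sim(f, pos) / sim(f, neg)
        if score > best_score:
            best_score, best_pos = score, i + wlen // 2
    return best_pos

def make_event(onset, seed, n=1000):
    """Hypothetical single-event record: noise, then a damped arrival."""
    rng = np.random.default_rng(seed)
    s = 0.1 * rng.standard_normal(n)
    k = np.arange(n - onset)
    s[onset:] += np.sin(0.5 * k) * np.exp(-0.01 * k)
    return s

ref, sig = make_event(500, seed=4), make_event(600, seed=5)
wlen = 40
positives = [ref[500 - wlen // 2 : 500 + wlen // 2]]        # window containing the onset
negatives = [ref[100 : 100 + wlen], ref[900 : 900 + wlen]]  # noise-only windows
pick = nn_pick(sig, positives, negatives, wlen)
print(pick)
```

The pick lands at the centre of the window that best matches the onset-shaped reference, here within a few samples of the true arrival at sample 600.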
Experience with parametric binary dissection
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Parametric Binary Dissection (PBD) is a new algorithm that can be used for partitioning graphs embedded in 2- or 3-dimensional space. It partitions explicitly on the basis of nodes + (lambda)x(edges cut), where lambda is the ratio of the time to communicate over an edge to the time to compute at a node. The new algorithm is faster than the original binary dissection algorithm and attempts to obtain better partitions than the older algorithm, which only takes nodes into account. The performance of parametric dissection was compared with plain binary dissection on 3 large unstructured 3-d meshes obtained from computational fluid dynamics and on 2 random graphs. It was shown that the new algorithm can usually yield partitions that are substantially superior, but that its performance is heavily dependent on the input data.
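The cost function nodes + lambda x (edges cut) can be made concrete with a toy splitter. The following sketch evaluates every axis-aligned split of points embedded on a line and keeps the one minimizing the maximum cost of the two halves; it illustrates the objective only, not Bokhari's actual recursive algorithm:

```python
def partition_cost(part, edges, lam):
    """Cost of one side of a dissection: nodes + lambda * (edges cut)."""
    a = set(part)
    cut = sum(1 for u, v in edges if (u in a) != (v in a))
    return len(a) + lam * cut

def best_1d_split(coords, edges, lam):
    """Try every axis-aligned split of nodes on a line and keep the one
    minimizing the maximum cost of the two halves."""
    order = sorted(range(len(coords)), key=lambda i: coords[i])
    best_cost, best_left = float("inf"), None
    for k in range(1, len(order)):
        left, right = order[:k], order[k:]
        cost = max(partition_cost(left, edges, lam),
                   partition_cost(right, edges, lam))
        if cost < best_cost:
            best_cost, best_left = cost, set(left)
    return best_cost, best_left

# a path of 8 nodes at positions 0..7: every split cuts exactly one edge,
# so the balanced split minimizes the maximum cost
coords = list(range(8))
edges = [(i, i + 1) for i in range(7)]
cost, left = best_1d_split(coords, edges, lam=2.0)
print(cost, sorted(left))
```

With lambda = 2 the best split is the balanced one, cost 4 nodes + 2 x 1 cut edge = 6 per side; a larger lambda would push the splitter to favor cuts through sparse regions over balance.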
Multifractal Analysis of Sunspot Number Time Series
NASA Astrophysics Data System (ADS)
Kasde, Satish Kumar; Gwal, Ashok Kumar; Sondhiya, Deepak Kumar
2016-07-01
Multifractal analysis based approaches have recently been developed as an alternative framework to study the complex dynamical fluctuations in sunspot number data, including solar cycles 20 to 23 and the ascending phase of the current solar cycle 24. To reveal the multifractal nature, the time series data of monthly sunspot numbers are analyzed by singularity spectrum and multiresolution wavelet analysis. Generally, the multifractality in sunspot numbers generates turbulence with the typical characteristics of the anomalous processes governing the magnetosphere and interior of the Sun. Our analysis shows that the singularity spectrum of the sunspot data has a well-defined Gaussian shape, which establishes that the monthly sunspot number has a multifractal character. The multifractal analysis is able to provide a local and adaptive description of the cyclic components of the sunspot number time series, which are non-stationary and the result of nonlinear processes. Keywords: sunspot numbers, magnetic field, multifractal analysis, wavelet transform techniques.
Entropic Analysis of Electromyography Time Series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Sung, Paul
2005-03-01
We are in the process of assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. Surface EMG is used to assess patients with low back pain. In a typical EMG measurement, the voltage is measured every millisecond. We observed back muscle fatiguing during one minute, which results in a time series with 60,000 entries. We characterize the complexity of the time series by computing the time dependence of the Shannon entropy. The analysis of time series from different relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back muscle activities is much larger for healthy individuals than for individuals with LBP. In general the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
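The entropy-versus-time computation lends itself to a short sketch: estimate an amplitude histogram in successive windows and take the Shannon entropy of each. Window length and bin count below are arbitrary choices, not the study's settings; the point is that a highly variable signal carries several bits per window while a flat one carries none:

```python
import numpy as np

def windowed_entropy(x, wlen, nbins=32):
    """Shannon entropy (bits) of the amplitude histogram in successive
    non-overlapping windows of a signal."""
    ents = []
    for i in range(0, len(x) - wlen + 1, wlen):
        hist, _ = np.histogram(x[i : i + wlen], bins=nbins)
        p = hist[hist > 0] / hist.sum()
        ents.append(-np.sum(p * np.log2(p)))
    return np.array(ents)

rng = np.random.default_rng(6)
noisy = rng.standard_normal(60_000)     # highly variable signal, 60,000 samples
flat = np.zeros(60_000)                 # no variability at all
e_noisy = windowed_entropy(noisy, 1000).mean()
e_flat = windowed_entropy(flat, 1000).mean()
print(e_noisy, e_flat)
```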
Delay Differential Analysis of Time Series
Lainscsek, Claudia; Sejnowski, Terrence J.
2015-01-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution and increased frequency and phase coupling information, and it allows a straightforward implementation of higher-order spectra across time.
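The starting point of the method, the delay embedding, can be sketched in a few lines; the delays below are arbitrary, and a nonuniform embedding simply uses unevenly spaced ones:

```python
import numpy as np

def delay_embed(x, delays):
    """Delay embedding: row t holds [x(t - d) for d in delays]; a
    nonuniform embedding simply uses unevenly spaced delays."""
    d_max = max(delays)
    return np.column_stack([x[d_max - d : len(x) - d] for d in delays])

t = np.linspace(0.0, 40.0 * np.pi, 8000)
x = np.sin(t)
emb = delay_embed(x, delays=[0, 25])   # 2-D embedding of a sinusoid: an ellipse
print(emb.shape)
```

For a sinusoid, the two delayed coordinates trace out an ellipse, a two-dimensional object reconstructed from a single scalar time series.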
NASA Astrophysics Data System (ADS)
Tramutoli, Valerio; Coviello, Irina; Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Lisi, Mariano; Makris, John P.; Paciello, Rossana; Pergola, Nicola; Satriano, Valeria; vallianatos, filippos
2015-04-01
Real-time integration of multi-parametric observations is expected to significantly contribute to the development of operational systems for time-Dependent Assessment of Seismic Hazard (t-DASH) and earthquake short term (from days to weeks) forecast. However, a very preliminary step in this direction is the identification of those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation of major earthquakes. In this paper one of these parameters (the Earth's emitted radiation in the Thermal Infra-Red spectral region) is considered for its possible correlation with M≥4 earthquakes that occurred in Greece between 2004 and 2013. The RST (Robust Satellite Technique) data analysis approach and RETIRA (Robust Estimator of TIR Anomalies) index were used to preliminarily define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. Taking into account physical models proposed for justifying the existence of a correlation among TIR anomalies and earthquakes occurrence, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability - CSEP - Project) have been defined to drive the correlation analysis process. The analysis shows that more than 93% of all identified SSTAs occur in the pre-fixed space-time window around (M≥4) earthquakes time and location of occurrence with a false positive rate smaller than 7%. Achieved results, and particularly the very low rate of false positives registered over such a long testing period, seem already sufficient (at least) to qualify TIR anomalies (identified by RST approach and RETIRA index) among the parameters to be considered in the framework of a multi-parametric approach to time-Dependent Assessment of Seismic Hazard (t-DASH).
Sensitivity to censored-at-random assumption in the analysis of time-to-event endpoints.
Lipkovich, Ilya; Ratitch, Bohdana; O'Kelly, Michael
2016-05-01
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on piecewise exponential imputation model of survival has some advantages over other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26997353
Multiscale InSAR Time Series (MInTS) analysis of surface deformation
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Musé, P.; Simons, M.; Lin, Y. N.; Agram, P. S.; Dicaprio, C. J.
2012-02-01
We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.
Multiscale InSAR Time Series (MInTS) analysis of surface deformation
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, Y. N.; Agram, P. S.; DiCaprio, C. J.
2011-12-01
We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, such that coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least-squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.
Analysis of STIS time-tag data
NASA Technical Reports Server (NTRS)
Lindler, Don J.; Gull, Theodore R.; Kraemer, Steven B.; Hulbert, Stephen J.
1997-01-01
Very high time resolution data can be obtained from the Space Telescope Imaging Spectrograph (STIS) Multi-Anode Microchannel Array (MAMA) detectors using the time-tag observing mode. In this mode, the photon events are not accumulated onboard the spacecraft. Instead, each event is recorded internally and transmitted to the ground as an X and Y location with an event time. Event times are recorded in units of 125 microseconds. Analysis of STIS Crab Pulsar data demonstrates that a time resolution approaching 125 microseconds can be achieved. Furthermore, the time-tag observing mode has been demonstrated to be a very powerful diagnostic tool and can be used to increase the resolution of both imaging and spectral data.
Frequency domain optical parametric amplification
Schmidt, Bruno E.; Thiré, Nicolas; Boivin, Maxime; Laramée, Antoine; Poitras, François; Lebrun, Guy; Ozaki, Tsuneyuki; Ibrahim, Heide; Légaré, François
2014-01-01
Today’s ultrafast lasers operate at the physical limits of optical materials to reach extreme performances. Amplification of single-cycle laser pulses with their corresponding octave-spanning spectra still remains a formidable challenge since the universal dilemma of gain narrowing sets limits for both real level pumped amplifiers as well as parametric amplifiers. We demonstrate that employing parametric amplification in the frequency domain rather than in time domain opens up new design opportunities for ultrafast laser science, with the potential to generate single-cycle multi-terawatt pulses. Fundamental restrictions arising from phase mismatch and damage threshold of nonlinear laser crystals are not only circumvented but also exploited to produce a synergy between increased seed spectrum and increased pump energy. This concept was successfully demonstrated by generating carrier envelope phase stable, 1.43 mJ two-cycle pulses at 1.8 μm wavelength. PMID:24805968
Quantum Cylindrical Waves and Parametrized Field Theory
NASA Astrophysics Data System (ADS)
Varadarajan, Madhavan
In this article, we review some illustrative results in the study of two related toy models for quantum gravity, namely cylindrical waves (which are cylindrically symmetric gravitational fields) and parametrized field theory (which is just free scalar field theory on a flat space-time in generally covariant disguise). In the former, we focus on the phenomenon of unexpected large quantum gravity effects in regions of weak classical gravitational fields and on an analysis of causality in a quantum geometry. In the latter, we focus on Dirac quantization, argue that this is related to the unitary implementability of free scalar field evolution along curved foliations of the flat space-time and review the relevant results for unitary implementability.
Real time analysis of voiced sounds
NASA Technical Reports Server (NTRS)
Hong, J. P. (Inventor)
1976-01-01
A power spectrum analysis of the harmonic content of a voiced sound signal is conducted in real time by phase-lock-loop tracking of the fundamental frequency f0 of the signal and successive harmonics h1 through hn of the fundamental frequency. The analysis also includes measuring the quadrature power and phase of each frequency tracked, differentiating the power measurements of the harmonics in adjacent pairs, and analyzing successive differentials to determine peak power points in the power spectrum for display or use in analysis of voiced sound, such as for voice recognition.
Productivity improvement through cycle time analysis
NASA Astrophysics Data System (ADS)
Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio
1996-09-01
A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (former AT&T) in Madrid, Spain. It is based on a comparison of the contribution of each process step in each technology with a target generated by a cycle time model. These targeted cycle times are obtained using capacity data of the machines processing those steps, queuing theory, and theory of constraints (TOC) principles (buffers to protect the bottleneck and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE) like analysis is done on the machine groups with major differences between their target cycle times and real values. Comparisons between the current values of the parameters that determine their capacity (process times, availability, idles, reworks, etc.) and the engineering standards are done to detect the cause of exceeding their contribution to the cycle time. Several friendly graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interrelation problems among machines, procedures and direct labor. Performer is designed for a detailed and daily analysis of an isolated machine. The extensive use of this tool by the whole labor force has demonstrated impressive results in the elimination of multiple small inefficiencies, with direct positive implications for OEE. As for ASAP, it shows the lots in process/queue for different machines at the same time. ASAP is a powerful tool to analyze the product flow management and the assigned capacity for interdependent operations like cleaning and oxidation/diffusion.
Parametric spatiotemporal oscillation in reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Ghosh, Shyamolina; Ray, Deb Shankar
2016-03-01
We consider a reaction-diffusion system in a homogeneous stable steady state. On perturbation by a time-dependent sinusoidal forcing of a suitable scaling parameter the system exhibits parametric spatiotemporal instability beyond a critical threshold frequency. We have formulated a general scheme to calculate the threshold condition for oscillation and the range of unstable spatial modes lying within a V-shaped region reminiscent of Arnold's tongue. Full numerical simulations show that depending on the specificity of nonlinearity of the models, the instability may result in time-periodic stationary patterns in the form of standing clusters or spatially localized breathing patterns with characteristic wavelengths. Our theoretical analysis of the parametric oscillation in reaction-diffusion system is corroborated by full numerical simulation of two well-known chemical dynamical models: chlorite-iodine-malonic acid and Briggs-Rauscher reactions.
Parametric spatiotemporal oscillation in reaction-diffusion systems.
Ghosh, Shyamolina; Ray, Deb Shankar
2016-03-01
We consider a reaction-diffusion system in a homogeneous stable steady state. On perturbation by a time-dependent sinusoidal forcing of a suitable scaling parameter the system exhibits parametric spatiotemporal instability beyond a critical threshold frequency. We have formulated a general scheme to calculate the threshold condition for oscillation and the range of unstable spatial modes lying within a V-shaped region reminiscent of Arnold's tongue. Full numerical simulations show that depending on the specificity of nonlinearity of the models, the instability may result in time-periodic stationary patterns in the form of standing clusters or spatially localized breathing patterns with characteristic wavelengths. Our theoretical analysis of the parametric oscillation in reaction-diffusion system is corroborated by full numerical simulation of two well-known chemical dynamical models: chlorite-iodine-malonic acid and Briggs-Rauscher reactions. PMID:27078346
Analysis of real-time vibration data
Safak, E.
2005-01-01
In recent years, a few structures have been instrumented to provide continuous vibration data in real time, recording not only large-amplitude motions generated by extreme loads, but also small-amplitude motions generated by ambient loads. The main objective in continuous recording is to track any changes in structural characteristics, and to detect damage after an extreme event, such as an earthquake or explosion. The Fourier-based spectral analysis methods have been the primary tool to analyze vibration data from structures. In general, such methods do not work well for real-time data, because real-time data are mainly composed of ambient vibrations with very low amplitudes and signal-to-noise ratios. The long duration, linearity, and the stationarity of ambient data, however, allow us to utilize statistical signal processing tools, which can compensate for the adverse effects of low amplitudes and high noise. The analysis of real-time data requires tools and techniques that can be applied in real-time; i.e., data are processed and analyzed while being acquired. This paper presents some of the basic tools and techniques for processing and analyzing real-time vibration data. The topics discussed include utilization of running time windows, tracking mean and mean-square values, filtering, system identification, and damage detection.
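Running statistics of the kind mentioned, here an exponentially weighted mean and mean-square updated one sample at a time, can be sketched as follows (the signal and the forgetting factor are arbitrary illustrative choices):

```python
import numpy as np

def running_stats(x, lam=0.999):
    """Exponentially weighted running mean and mean-square, updated one
    sample at a time, as suited to data arriving in real time."""
    m = ms = 0.0
    means, mean_sqs = [], []
    for v in x:
        m = lam * m + (1 - lam) * v
        ms = lam * ms + (1 - lam) * v * v
        means.append(m)
        mean_sqs.append(ms)
    return np.array(means), np.array(mean_sqs)

rng = np.random.default_rng(7)
x = 2.0 + 0.5 * rng.standard_normal(50_000)   # stationary ambient-like signal
m, ms = running_stats(x)
print(m[-1], ms[-1] - m[-1] ** 2)             # running mean and variance estimates
```

A sustained drift of these running quantities away from their long-term baseline is the kind of signature a continuous monitoring system would flag for further inspection.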
Topological analysis of chaotic time series
NASA Astrophysics Data System (ADS)
Gilmore, Robert
1997-10-01
Topological methods have recently been developed for the classification, analysis, and synthesis of chaotic time series. These methods can be applied to time series with a Lyapunov dimension less than three. The procedure determines the stretching and squeezing mechanisms which operate to create a strange attractor and organize all the unstable periodic orbits in the attractor in a unique way. Strange attractors are identified by a set of integers. These are topological invariants for a two-dimensional branched manifold, which is the infinite dissipation limit of the strange attractor. It is remarkable that this topological information can be extracted from chaotic time series. The data required for this analysis need not be extensive or exceptionally clean. The topological invariants: (1) are subject to validation/invalidation tests; (2) describe how to model the data; and (3) do not change as control parameters change. Topological analysis is the first step in a doubly discrete classification scheme for strange attractors. The second discrete classification involves specification of a 'basis set' of periodic orbits whose presence forces the existence of all other periodic orbits in the strange attractor. The basis set of orbits does change as control parameters change. Quantitative models developed to describe time series data are tested by the methods of entrainment. This analysis procedure has been applied to a number of data sets. Several analyses are described.
Nonlinear time-series analysis revisited
NASA Astrophysics Data System (ADS)
Bradley, Elizabeth; Kantz, Holger
2015-09-01
In 1980 and 1981, two pioneering papers laid the foundation for what became known as nonlinear time-series analysis: the analysis of observed data—typically univariate—via dynamical systems theory. Based on the concept of state-space reconstruction, this set of methods allows us to compute characteristic quantities such as Lyapunov exponents and fractal dimensions, to predict the future course of the time series, and even to reconstruct the equations of motion in some cases. In practice, however, there are a number of issues that restrict the power of this approach: whether the signal accurately and thoroughly samples the dynamics, for instance, and whether it contains noise. Moreover, the numerical algorithms that we use to instantiate these ideas are not perfect; they involve approximations, scale parameters, and finite-precision arithmetic, among other things. Even so, nonlinear time-series analysis has been used to great advantage on thousands of real and synthetic data sets from a wide variety of systems ranging from roulette wheels to lasers to the human heart. Even in cases where the data do not meet the mathematical or algorithmic requirements to assure full topological conjugacy, the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems.
Effective Analysis of Reaction Time Data
ERIC Educational Resources Information Center
Whelan, Robert
2008-01-01
Most analyses of reaction time (RT) data are conducted by using the statistical techniques with which psychologists are most familiar, such as analysis of variance on the sample mean. Unfortunately, these methods are usually inappropriate for RT data, because they have little power to detect genuine differences in RT between conditions. In…
Nonlinear Time Series Analysis via Neural Networks
NASA Astrophysics Data System (ADS)
Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin
This article deals with time series analysis based on neural networks, aimed at effective pattern recognition in the forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history, and to adapt our trading system's behaviour based on them.
NASA Astrophysics Data System (ADS)
Mouzourides, P.; Kyprianou, A.; Neophytou, M. K.-A.
2013-12-01
Urban morphology characterization is crucial for the parametrization of boundary-layer development over urban areas. One complexity in such a characterization is the three-dimensional variation of the urban canopies and textures, which are customarily reduced to and represented by one-dimensional varying parametrizations such as the aerodynamic roughness length and zero-plane displacement. The scope of the paper is to provide novel means for a scale-adaptive spatially-varying parametrization of the boundary layer by addressing this 3-D variation. Specifically, the 3-D variation of urban geometries often poses questions in the multi-scale modelling of air pollution dispersion and other climate or weather-related modelling applications that have not been addressed yet, such as: (a) how we represent urban attributes (parameters) appropriately for the multi-scale nature and multi-resolution basis of weather numerical models, (b) how we quantify the uniqueness of an urban database in the context of modelling urban effects in large-scale weather numerical models, and (c) how we derive the impact and influence of a particular building in pre-specified sub-domain areas of the urban database. We illustrate how multi-resolution analysis (MRA) addresses and answers the aforementioned questions by taking as an example the Central Business District of Oklahoma City. The selection of MRA is motivated by its capacity for multi-scale sampling; in the MRA the "urban" signal depicting a city is decomposed into an approximation, a representation at a higher scale, and a detail, the part removed at lower scales to yield the approximation. Different levels of approximation were deduced for the building height and planar packing density. A spatially-varying characterization with a scale-adaptive capacity is obtained for the boundary-layer parameters (aerodynamic roughness length and zero-plane displacement) using the MRA-deduced results for the building height and the planar packing
Wang, Jianming; Ke, Chunlei; Yu, Zhinuan; Fu, Lei; Dornseif, Bruce
2016-05-01
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, are also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression could change the course of the overall survival. Progression-free survival, related both to OS and TTP, will be handled separately, as it can be derived from OS and TTP. The authors seek to develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26689725
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Jin, Rui-Bo; Sasaki, Masahide
2015-04-01
In spontaneous parametric down-conversion (SPDC) based quantum information processing (QIP) experiments, there is a tradeoff between the coincidence count rates (i.e. the pumping power of the SPDC), which limit the rate of the protocol, and the visibility of the quantum interference, which limits the quality of the protocol. This tradeoff is mainly caused by the multi-photon pair emissions from the SPDCs. In theory, the problem is how to model the experiments without truncating these multi-photon emissions while including practical imperfections. In this paper, we establish a method to theoretically simulate SPDC-based QIP which fully incorporates the effect of multi-photon emissions and various practical imperfections. The key ingredient in our method is the application of the characteristic function formalism which has been used in continuous-variable QIP. We apply our method to three examples: the Hong-Ou-Mandel interference and Einstein-Podolsky-Rosen interference experiments, and the concatenated entanglement swapping protocol. For the first two examples, we show that our theoretical results quantitatively agree with the recent experimental results. We also provide closed expressions for these interference visibilities with the full multi-photon components and various imperfections. For the last example, we provide the general theoretical form of the concatenated entanglement swapping protocol in our method and show the numerical results up to five concatenations. Our method requires only a small computational resource (a few minutes on a commercially available computer), which was not possible with the previous theoretical approach. Our method will have applications in a wide range of SPDC-based QIP protocols with high accuracy and a reasonable computational resource.
NASA Astrophysics Data System (ADS)
Rezaee, Mousa; Jahangiri, Reza
2015-05-01
In this study, in the presence of supersonic aerodynamic loading, the nonlinear and chaotic vibrations and stability of a simply supported Functionally Graded Piezoelectric (FGP) rectangular plate with a bonded piezoelectric layer have been investigated. It is assumed that the plate is simultaneously exposed to the effects of harmonic uniaxial in-plane force, transverse piezoelectric excitation and aerodynamic loading. It is considered that the potential distribution varies linearly through the piezoelectric layer thickness, and the aerodynamic load is modeled by first-order piston theory. The von-Karman nonlinear strain-displacement relations are used to account for the geometrical nonlinearity. Based on the Classical Plate Theory (CPT) and applying Hamilton's principle, the nonlinear coupled partial differential equations of motion are derived. The Galerkin procedure is used to reduce the equations of motion to nonlinear ordinary differential Mathieu equations. The validity of the formulation for analyzing the Limit Cycle Oscillation (LCO) and aero-elastic stability boundaries is verified by comparing the results with those of the literature, and a convergence study of the FGP plate is performed. By applying the Multiple Scales Method, the cases of 1:2 internal resonance and primary parametric resonance are taken into account, and the corresponding averaged equations are derived and analyzed numerically. The results are provided to investigate the effects of the forcing/piezoelectric detuning parameter, amplitude of forcing/piezoelectric excitation and dynamic pressure on the nonlinear dynamics and chaotic behavior of the FGP plate. It is revealed that under certain conditions, due to the existence of a bi-stable region of non-trivial solutions, the system shows hysteretic behavior. Moreover, in the absence of airflow, it is observed that variation of the control parameters leads to multi-periodic and chaotic motions.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; Wang, Hailong; Wang, Minghuai; Zhao, Chun
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally, the GLM is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows a significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
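The Latin hypercube sampling used above to explore a high-dimensional parameter space can be sketched as follows; the function name, seed, and sample counts are illustrative, not the study's actual sampling code. Each parameter's unit range is cut into as many strata as samples, and each stratum is used exactly once per parameter.

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Return n_samples points in the unit hypercube [0,1)^n_params,
    stratified so every parameter's range is covered evenly."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one random point inside each of the n_samples strata, then shuffle
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # transpose: row j is one joint parameter sample
    return [tuple(col[j] for col in columns) for j in range(n_samples)]

# e.g. 100 design points for 22 uncertain parameters (rescale each
# coordinate to the physical range of its parameter before use)
samples = latin_hypercube(n_samples=100, n_params=22)
```

Compared with plain Monte Carlo, this guarantees marginal coverage of every parameter even with a modest ensemble size, which is why it is a common choice for expensive climate-model perturbation experiments.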
NASA Astrophysics Data System (ADS)
Dhote, Yogesh; Thombre, Shashikant
2016-05-01
This paper presents the thermal performance of the proposed double-flow natural convection solar air heater with in-built liquid (oil) sensible heat storage. Unused engine oil was used as the thermal energy storage medium due to its good heat-retaining capacity even at high temperatures, without evaporation. The performance evaluation was carried out for a day in March under the climatic conditions of Nagpur (India). A self-contained computational model was developed in C++. The program computes the performance parameters for any day of the year and can be used for major cities in India. The effect of changes in storage oil quantity and inclination (tilt angle) on the overall efficiency of the solar air heater was studied. The performance was tested initially at storage oil quantities of 25, 50, 75 and 100 l for a plate spacing of 0.04 m at an inclination of 36°. It has been found that the solar air heater gives the best performance at a storage oil quantity of 50 l. The performance of the proposed solar air heater was further tested for various combinations of storage oil quantity (50, 75 and 100 l) and inclination (0°, 15°, 30°, 45°, 60°, 75°, 90°). It has been found that the proposed solar air heater with in-built oil storage shows its best performance for the combination of 50 l storage oil quantity and 60° inclination. Finally, the results of the parametric study are presented in the form of graphs for a fixed storage oil quantity of 25 l, plate spacing of 0.03 m and an inclination of 36°, to study the behaviour of the various heat transfer and fluid flow parameters of the solar air heater.
Visibility Graph Based Time Series Analysis
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs that describe the corresponding states, and successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term prediction. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
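The mapping from a series segment to a visibility graph can be sketched with the standard natural-visibility rule (two points are linked when no intermediate point rises above the straight line joining them); the function name and example series are illustrative, not the paper's code.

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: nodes are the sample
    indices; i and j (i < j) are linked if every intermediate point k
    lies strictly below the line from (i, y_i) to (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # height of the sight line at position k, by linear interpolation
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# A tall point (index 1) blocks the line of sight between its neighbours
edges = visibility_graph([1.0, 4.0, 2.0, 1.5, 3.0])
```

Adjacent samples always see each other, so the graph is connected; degree distributions of such graphs inherit clustering and correlation properties of the underlying series.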
Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging
NASA Technical Reports Server (NTRS)
Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William; Ehgamberdiev, Shuhrat; Landenkov, Oleg
1999-01-01
Time-distance analysis and acoustic imaging are two related techniques to probe the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross-correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/ℓ. The phase time of the cross-correlation function provides information on the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of the constructed signals can be studied by computing the cross-correlation function between time series constructed with ingoing and outgoing waves. In this study, we use the data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analyses to construct the acoustic signals in acoustic imaging analyses. The phase time of the cross-correlation function of the constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analyses, as predicted. The envelope peak of the cross-correlation function between the constructed ingoing and outgoing time series is located at zero time, as predicted, for results of one bounce at 3 mHz for all four data sets and two bounces at 3 mHz for two TON data sets. But it is different from zero for the other cases. The cause of the deviation of the envelope peak from zero is not known.
Algorithm for parametric community detection in networks.
Bettinelli, Andrea; Hansen, Pierre; Liberti, Leo
2012-07-01
Modularity maximization is extensively used to detect communities in complex networks. It has been shown, however, that this method suffers from a resolution limit: small communities may be undetectable in the presence of larger ones even if they are very dense. To alleviate this defect, various modifications of the modularity function have been proposed, as well as multiresolution methods. In this paper we systematically study a simple model (proposed by Pons and Latapy [Theor. Comput. Sci. 412, 892 (2011)] and similar to the parametric model of Reichardt and Bornholdt [Phys. Rev. E 74, 016110 (2006)]) with a single parameter α that balances the fraction of within-community edges and the expected fraction of edges according to the configuration model. An exact algorithm is proposed to find optimal solutions for all values of α, as well as the corresponding successive intervals of α values for which they are optimal. This algorithm relies upon a routine for exact modularity maximization and is limited to moderate-size instances. An agglomerative hierarchical heuristic is therefore proposed to address parametric modularity detection in large networks. At each iteration the smallest value of α for which it is worthwhile to merge two communities of the current partition is found. Then merging is performed and the data are updated accordingly. An implementation is proposed with the same time and space complexity as the well-known Clauset-Newman-Moore (CNM) heuristic [Phys. Rev. E 70, 066111 (2004)]. Experimental results on artificial and real-world problems show that (i) communities are detected by both exact and heuristic methods for all values of the parameter α; (ii) the dendrogram summarizing the results of the heuristic method provides a useful tool for substantive analysis, as illustrated particularly on a Les Misérables data set; (iii) the difference between the parametric modularity values given by the exact method and those given by the heuristic is
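The single-parameter modularity described above can be sketched as follows, assuming the common resolution-parameter form Q_α = Σ_c (within-community edge fraction − α · squared degree fraction under the configuration model); the function name and toy graph are illustrative, not the authors' implementation.

```python
def parametric_modularity(edges, partition, alpha=1.0):
    """Q_alpha for an undirected simple graph: for each community,
    (fraction of edges inside it) minus alpha times (sum of member
    degrees / 2m)^2, the expected fraction in the configuration model."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in set(partition.values()):
        nodes = {v for v, cc in partition.items() if cc == c}
        within = sum(1 for u, v in edges if u in nodes and v in nodes)
        deg_sum = sum(degree[v] for v in nodes)
        q += within / m - alpha * (deg_sum / (2 * m)) ** 2
    return q

# Two triangles joined by one bridge edge, split into the two triangles
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = parametric_modularity(edges, part, alpha=1.0)
```

α = 1 recovers standard Newman modularity; larger α penalizes large communities and so favours finer partitions, which is how the parameter counteracts the resolution limit.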
Timing analysis of PWR fuel pin failures
Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.
1992-09-01
This report discusses research conducted to develop and demonstrate a methodology for calculation of the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold-leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document provides appendices K and L of this report, which contain plots for the timing analysis of PWR fuel pin failures for Oconee and Seabrook, respectively.
Time-series analysis for ambient concentrations
NASA Astrophysics Data System (ADS)
González-Manteiga, W.; Prada-Sánchez, J. M.; Cao, R.; García-Jurado, I.; Febrero-Bande, M.; Lucas-Domínguez, T.
In this paper we present a dynamic system which has been implemented to predict, every 5 min, the ambient concentrations of SO2 in the neighbourhood of a power station run by ENDESA, the National Electricity Company of Spain, in As Pontes. This prediction task is very important in order to prevent a high ground-level concentration of SO2. For forecasting we use a mixed model which has a parametric component and a nonparametric one. We also construct confidence intervals for future observations using bootstrap and classical techniques.
Natural Time Analysis of Seismicity: Recent Results
NASA Astrophysics Data System (ADS)
Varotsos, P.; Uyeda, S.; Sarlis, N. V.; Skordas, E. S.; Nagao, T.; Kamogawa, M.
2013-12-01
Natural time analysis, introduced almost a decade ago[1], may uncover novel dynamic features hidden in the time series of complex systems and has been applied[2] to diverse fields. For a time series comprising N events, the natural time for the occurrence of the k-th event of energy Qk is defined by χk = k/N, and the analysis is made by studying the evolution of the pair (χk, pk), where pk = Qk/ΣQn is the normalized energy. In natural time analysis of seismicity, the variance κ1 of natural time χ weighted for pk, calculated from seismic catalogues, serves as an order parameter [2]. The Japan seismic catalog was analyzed in natural time by employing a sliding natural time window of fixed length comprising the number of events that would occur in a few months. This is a crucial time scale since it corresponds to the average lead time of the observed Seismic Electric Signals (SES) activities [2]. The following results are obtained: First, the fluctuations of the order parameter of seismicity exhibit [3] a clearly detectable minimum approximately at the time of the initiation of the pronounced SES activity observed [4] almost two months before the onset of the volcanic-seismic swarm activity in 2000 in the Izu Island region, Japan. This is the first time that, before the occurrence of major earthquakes, anomalous changes are found to appear almost simultaneously in two different geophysical observables. Second, these two phenomena were shown to be also linked in space[3]. Third, minima of the order parameter fluctuations of seismicity were observed [5] a few months before all shallow earthquakes of magnitude 7.6 or larger that occurred from 1 January 1984 to 11 March 2011 (the day of the M9 Tohoku earthquake) in the Japanese area. Among these minima, the minimum before the M9 Tohoku earthquake was the deepest. Additional recent results are presented which shed more light on the importance of the aforementioned minima for earthquake prediction purposes. [1] Varotsos, P. A
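The definitions above translate directly into code: χk = k/N, pk = Qk/ΣQn, and κ1 is the variance of χ under the weights pk. This minimal sketch computes κ1 for a sequence of event energies; the function name and example data are illustrative.

```python
def kappa1(energies):
    """Variance of natural time chi_k = k/N weighted by the normalized
    energies p_k = Q_k / sum(Q): kappa1 = <chi^2> - <chi>^2."""
    n = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]          # normalized energies p_k
    chi = [(k + 1) / n for k in range(n)]      # natural times chi_k = k/N
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean ** 2

# For equal event energies, kappa1 tends to 1/12 (variance of a uniform
# distribution on (0, 1]) as the number of events grows
k1 = kappa1([1.0] * 1000)
```

In the sliding-window analysis described in the abstract, this quantity would be recomputed for each window of consecutive catalogue events and its fluctuations tracked over time.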
Time-Series Analysis: A Cautionary Tale
NASA Technical Reports Server (NTRS)
Damadeo, Robert
2015-01-01
Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.
Parametric mapping of contrasted ovarian transvaginal sonography.
Korhonen, Katrina; Moore, Ryan; Lyshchik, Andrej; Fleischer, Arthur C
2015-06-01
The purpose of this study was to assess the accuracy of parametric analysis of transvaginal contrast-enhanced ultrasound (TV-CEUS) for distinguishing benign versus malignant ovarian masses. A total of 48 ovarian masses (37 benign and 11 borderline/malignant) were examined with TV-CEUS (Definity; Lantheus, North Billerica, MA; Philips iU22; Philips Medical Systems, Bothell, WA). Parametric images were created offline with quantification software (Bracco Suisse SA, Geneva, Switzerland), with map color scales adjusted such that abnormal hemodynamics were represented by the color red, so that the presence of any red color could be used to differentiate benign and malignant tumors. Using these map color scales, low values of the perfusion parameter were coded in blue, and intermediate values of the perfusion parameter were coded in yellow. Additionally, for each individual color (red, blue, or yellow), a darker shade of that color indicated a higher intensity value. Our study found that the parametric mapping method was considerably more sensitive than standard region of interest (ROI) analysis for the detection of malignant tumors but was also less specific than standard ROI analysis. Parametric mapping allows for stricter cutoff criteria, as hemodynamics are visualized on a finer scale than in ROI analysis, and as such, parametric maps are a useful addition to TV-CEUS analysis, allowing ROIs to be limited to areas of the highest malignant potential. PMID:26002525
Natural Time Analysis and Complex Networks
NASA Astrophysics Data System (ADS)
Sarlis, Nicholas; Skordas, Efthimios; Lazaridou, Mary; Varotsos, Panayiotis
2013-04-01
Here, we review the analysis of complex time series in a new time domain, termed natural time, introduced by our group [1,2]. This analysis conforms to the desire to reduce uncertainty and extract signal information as much as possible [3]. It enables [4] the distinction between the two origins of self-similarity when analyzing data from complex systems, i.e., whether self-similarity solely results from long-range temporal correlations (the process's memory only) or solely from the infinite variance of the process's increments (heavy tails in their distribution). Natural time analysis captures the dynamical evolution of a complex system and identifies [5] when the system enters a critical stage. Hence, this analysis plays a key role in predicting forthcoming catastrophic events in general. Relevant examples, compiled in a recent monograph [6], have been presented in diverse fields, including Solid State Physics [7], Statistical Physics (for example, systems exhibiting self-organized criticality [8]), Cardiology [9,10], Earth Sciences [11] (Geophysics, Seismology), Environmental Sciences (e.g. see Ref. [12]), etc. Other groups have proposed and developed a network approach to earthquake events with encouraging results. A recent study [13] reveals that this approach is strengthened if we combine it with natural time analysis. In particular, we find [13,14] that the study of the spatial distribution of the variability [15] of the order parameter fluctuations, defined in natural time, provides important information on the dynamical evolution of the system. 1. P. Varotsos, N. Sarlis, and E. Skordas, Practica of Athens Academy, 76, 294-321, 2001. 2. P.A. Varotsos, N.V. Sarlis, and E.S. Skordas, Phys. Rev. E, 66, 011902, 2002. 3. S. Abe, N.V. Sarlis, E.S. Skordas, H.K. Tanaka and P.A. Varotsos, Phys. Rev. Lett. 94, 170601, 2005. 4. P.A. Varotsos, N.V. Sarlis, E.S. Skordas, H.K. Tanaka and M.S. Lazaridou, Phys. Rev. E, 74, 021123, 2006. 5. P. Varotsos, N. V. Sarlis, E. S. Skordas
Climate Time Series Analysis and Forecasting
NASA Astrophysics Data System (ADS)
Young, P. C.; Fildes, R.
2009-04-01
This paper will discuss various aspects of climate time series data analysis, modelling and forecasting being carried out at Lancaster. This will include state-dependent parameter, nonlinear, stochastic modelling of globally averaged atmospheric carbon dioxide; the computation of emission strategies based on modern control theory; and extrapolative time series benchmark forecasts of annual average temperature, both global and local. The key to the forecasting evaluation will be the iterative estimation of forecast error based on rolling-origin comparisons, as recommended in the forecasting research literature. The presentation will conclude with a comparison of the time series forecasts with forecasts produced from global circulation models and a discussion of the implications for climate modelling research.
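The rolling-origin error estimation mentioned in this abstract can be sketched as follows. This is a minimal illustration against a naive last-value benchmark, not the Lancaster models; the function names and synthetic data are assumptions for demonstration only.

```python
import numpy as np

def rolling_origin_errors(series, fit_forecast, min_train=20, horizon=1):
    """Evaluate a forecaster by repeatedly re-fitting on an expanding
    window and forecasting `horizon` steps past each rolling origin."""
    errors = []
    for origin in range(min_train, len(series) - horizon + 1):
        train = series[:origin]
        forecast = fit_forecast(train, horizon)   # array of length `horizon`
        actual = series[origin:origin + horizon]
        errors.append(actual - forecast)
    return np.array(errors)

def naive_forecast(train, horizon):
    # "last value" benchmark, a standard baseline in forecasting comparisons
    return np.repeat(train[-1], horizon)

rng = np.random.default_rng(0)
temps = np.cumsum(rng.normal(0.01, 0.1, 200))   # synthetic temperature-like walk
e = rolling_origin_errors(temps, naive_forecast)
rmse = np.sqrt(np.mean(e ** 2))
```

Any competing model can be plugged in as `fit_forecast` and compared on the same out-of-sample errors.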
Time fluctuation analysis of forest fire sequences
NASA Astrophysics Data System (ADS)
Vega Orozco, Carmen D.; Kanevski, Mikhaïl; Tonini, Marj; Golay, Jean; Pereira, Mário J. G.
2013-04-01
Forest fires are complex events involving both space and time fluctuations. Understanding their dynamics and pattern distribution is of great importance in order to improve resource allocation and support fire management actions at local and global levels. This study aims at characterizing the temporal fluctuations of forest fire sequences observed in Portugal, which is the country that holds the largest wildfire land dataset in Europe. This research applies several exploratory data analysis measures to 302,000 forest fires that occurred from 1980 to 2007. The applied clustering measures are: the Morisita clustering index, fractal and multifractal dimensions (box-counting), Ripley's K-function, the Allan Factor, and variography. These algorithms enable a global time-structural analysis describing the degree of clustering of a point pattern and defining whether the observed events occur randomly, in clusters or in a regular pattern. The considered methods are of general importance and can be used for other spatio-temporal events (e.g. crime, epidemiology, biodiversity, geomarketing, etc.). An important contribution of this research deals with the analysis and estimation of local measures of clustering, which helps in understanding their temporal structure. Each measure is described and executed for the raw data (forest fires geo-database) and the results are compared to reference patterns generated under the null hypothesis of randomness (Poisson processes) embedded in the same time period as the raw data. This comparison enables estimating the degree of deviation of the real data from a Poisson process. Generalizations of these clustering methods to functional measures, taking into account the phenomena, were also applied and adapted to detect time dependences in a measured variable (i.e. burned area). The time clustering of the raw data is compared several times with the Poisson processes at different thresholds of the measured function. Then, the clustering measure value
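Of the clustering measures listed, the Allan Factor is compact enough to sketch. The following is an illustrative implementation (not the authors' code; the window size and surrogate data are assumptions): it should return values near 1 for a homogeneous Poisson process and values above 1 when events cluster at that time scale.

```python
import numpy as np

def allan_factor(event_times, window):
    """Allan Factor at one counting-window size: mean squared difference
    of successive event counts over twice the mean count. ~1 for a
    Poisson process; >1 indicates clustering at this time scale."""
    t0, t1 = event_times.min(), event_times.max()
    edges = np.arange(t0, t1, window)
    counts, _ = np.histogram(event_times, bins=edges)
    diffs = np.diff(counts)
    return np.mean(diffs ** 2) / (2.0 * np.mean(counts))

# Homogeneous Poisson surrogate, the null-hypothesis reference pattern
rng = np.random.default_rng(1)
poisson_times = np.cumsum(rng.exponential(1.0, 5000))
af = allan_factor(poisson_times, window=50.0)   # expected close to 1
```

Scanning `window` over several decades gives the scale-dependent clustering signature the abstract compares against Poisson references.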
Timing analysis of PWR fuel pin failures
Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.
1992-09-01
Research has been conducted to develop and demonstrate a methodology for calculation of the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document also contains appendices A through J of this report.
Analysis of Polyphonic Musical Time Series
NASA Astrophysics Data System (ADS)
Sommer, Katrin; Weihs, Claus
A general model for pitch tracking of polyphonic musical time series will be introduced. Based on a model of Davy and Godsill (Bayesian harmonic models for musical pitch estimation and analysis, Technical Report 431, Cambridge University Engineering Department, 2002), the different pitches of the musical sound are estimated simultaneously with MCMC methods. Additionally, a preprocessing step is designed to improve the estimation of the fundamental frequencies (A comparative study on polyphonic musical time series using MCMC methods. In C. Preisach et al., editors, Data Analysis, Machine Learning, and Applications, Springer, Berlin, 2008). The preprocessing step compares real audio data with an alphabet that is constructed from the McGill Master Samples (Opolko and Wapnick, McGill University Master Samples [Compact disc], McGill University, Montreal, 1987) and consists of tones of different instruments. The tones with minimal Itakura-Saito distortion (Gray et al., Transactions on Acoustics, Speech, and Signal Processing ASSP-28(4):367-376, 1980) are chosen as first estimates and as starting points for the MCMC algorithms. Furthermore, the implementation of the alphabet is an approach to the recognition of the instruments generating the musical time series. Results are presented for mixed monophonic data from McGill and for self-recorded polyphonic audio data.
Parametric Differentiation and Integration
ERIC Educational Resources Information Center
Chen, Hongwei
2009-01-01
Parametric differentiation and integration under the integral sign constitutes a powerful technique for calculating integrals. However, this topic is generally not included in the undergraduate mathematics curriculum. In this note, we give a comprehensive review of this approach, and show how it can be systematically used to evaluate most of the…
No Time for Dead Time: Timing Analysis of Bright Black Hole Binaries with NuSTAR
NASA Astrophysics Data System (ADS)
Bachetti, Matteo; Harrison, Fiona A.; Cook, Rick; Tomsick, John; Schmid, Christian; Grefenstette, Brian W.; Barret, Didier; Boggs, Steven E.; Christensen, Finn E.; Craig, William W.; Fabian, Andrew C.; Fürst, Felix; Gandhi, Poshak; Hailey, Charles J.; Kara, Erin; Maccarone, Thomas J.; Miller, Jon M.; Pottschmidt, Katja; Stern, Daniel; Uttley, Phil; Walton, Dominic J.; Wilms, Jörn; Zhang, William W.
2015-02-01
Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (~2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339-4, Cyg X-1, and GRS 1915+105.
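The cospectrum idea described in this abstract can be illustrated with synthetic light curves. The sketch below is not the NuSTAR pipeline (all names and numbers are assumptions); it shows how variability shared by the two focal-plane modules survives in the real part of the cross spectrum, while independent per-module noise contributes no systematic level there.

```python
import numpy as np

def cospectrum(lc_a, lc_b, dt):
    """Real part of the cross power spectrum of two simultaneous light
    curves. Noise that is independent between detectors (e.g. dead-time-
    distorted Poisson noise) averages to zero here, unlike in a PDS."""
    fa, fb = np.fft.rfft(lc_a), np.fft.rfft(lc_b)
    freqs = np.fft.rfftfreq(len(lc_a), dt)
    return freqs[1:], (fa * np.conj(fb)).real[1:]   # drop the DC bin

rng = np.random.default_rng(2)
n, dt = 4096, 0.01
signal = np.sin(2 * np.pi * 5.0 * np.arange(n) * dt)   # shared 5 Hz variability
fpma = signal + rng.normal(0, 1, n)                     # independent noise, module A
fpmb = signal + rng.normal(0, 1, n)                     # independent noise, module B
freqs, cs = cospectrum(fpma, fpmb, dt)
peak_freq = freqs[np.argmax(cs)]                        # should sit near 5 Hz
```

In practice one would average cospectra over many segments; the residual dead-time effects (variance modulation, sensitivity drop) are then estimated by Monte Carlo as the abstract describes.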
Multifractal analysis of polyalanines time series
NASA Astrophysics Data System (ADS)
Figueirêdo, P. H.; Nogueira, E.; Moret, M. A.; Coutinho, Sérgio
2010-05-01
Multifractal properties of the energy time series of short α-helix structures, specifically from a polyalanine family, are investigated through the MF-DFA technique (multifractal detrended fluctuation analysis). Estimates for the generalized Hurst exponent h(q) and its associated multifractal exponents τ(q) are obtained for several series generated by numerical simulations of molecular dynamics in different systems from distinct initial conformations. All simulations were performed using the GROMOS force field, implemented in the program THOR. The main results have shown that all series exhibit multifractal behavior depending on the number of residues and temperature. Moreover, the multifractal spectra reveal important aspects of the time evolution of the system and suggest that the nucleation process of the secondary structures during the visits on the energy hyper-surface is an essential feature of the folding process.
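The MF-DFA estimate of h(q) can be sketched compactly. This is a minimal textbook-style implementation for illustration (not the authors' code; scales, detrending order and the white-noise check are assumptions): profile, windowed polynomial detrending, q-th order fluctuation function, then the slope of log F_q(s) versus log s.

```python
import numpy as np

def mfdfa_hurst(x, scales, q_values, order=1):
    """Generalized Hurst exponents h(q) via multifractal DFA."""
    profile = np.cumsum(x - np.mean(x))
    h = []
    for q in q_values:
        log_f = []
        for s in scales:
            n_seg = len(profile) // s
            segs = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # variance of each segment about its local polynomial trend
            f2 = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                           for seg in segs])
            if q == 0:
                log_f.append(0.5 * np.mean(np.log(f2)))        # limiting q -> 0 form
            else:
                log_f.append(np.log(np.mean(f2 ** (q / 2.0))) / q)
        h.append(np.polyfit(np.log(scales), log_f, 1)[0])      # slope = h(q)
    return np.array(h)

rng = np.random.default_rng(3)
wn = rng.normal(size=10000)                    # uncorrelated noise: h(2) ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
h = mfdfa_hurst(wn, scales, q_values=[2])
```

A q-dependent h(q) over a range of q values signals multifractality; τ(q) then follows as τ(q) = q·h(q) − 1.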
The system of EAS time analysis
NASA Technical Reports Server (NTRS)
Khalafyan, A. Z.; Oganezova, J. S.; Bashindjaghayan, G. L.; Mkhitaryan, V. M.; Sinev, N. B.; Sarycheva, L. I.
1985-01-01
The extensive air shower (EAS) front shape, angle of incidence, disk thickness, and the particle distribution along the shower, including delayed particles and particles advancing the EAS front, were determined. The suggested system of EAS time analysis allows determination of the whole EAS longitudinal structure at the observation points. The information from the detectors is continuously recorded in a memory whose cells switch every 5 ns, which enables the moment of pulse input from the detector to be fixed with an accuracy of ±2.5 ns. Along with the fast memory, a slow memory with cell switching every 1 μs is introduced in the system, which permits observation of relatively large time intervals with respect to the trigger pulse with a correspondingly lower accuracy.
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
Sliced Inverse Regression for Time Series Analysis
NASA Astrophysics Data System (ADS)
Chen, Li-Sue
1995-11-01
In this thesis, general nonlinear models for time series data are considered. A basic form is x_t = f(beta_1^T X_{t-1}, beta_2^T X_{t-1}, ..., beta_k^T X_{t-1}, epsilon_t), where x_t is the observed time series, X_{t-1} is the vector of the first d time lags, (x_{t-1}, x_{t-2}, ..., x_{t-d}), f is an unknown function, the beta_i are unknown vectors, and the epsilon_t are independently distributed. Special cases include AR and TAR models. We investigate the feasibility of applying SIR/PHD (Li 1990, 1991) (the sliced inverse regression and principal Hessian methods) in estimating the beta_i. PCA (principal component analysis) is brought in to check one critical condition for SIR/PHD. Through simulation and a study on three well-known data sets of the Canadian lynx, the U.S. unemployment rate and sunspot numbers, we demonstrate how SIR/PHD can effectively retrieve the interesting low-dimensional structures for time series data.
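The basic SIR step described in this abstract can be sketched as follows. This is an illustrative implementation under standard assumptions (Gaussian predictors, a monotone link), not the thesis code; the toy regression standing in for the lagged-vector model is hypothetical.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced inverse regression: slice on the response, average the
    whitened predictors within each slice, and take the leading
    eigenvectors of the between-slice covariance of those means."""
    n, d = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    white = (X - mu) @ evecs @ np.diag(evals ** -0.5) @ evecs.T   # whiten
    order = np.argsort(y)                                         # slice by y
    slice_means, weights = [], []
    for chunk in np.array_split(order, n_slices):
        slice_means.append(white[chunk].mean(axis=0))
        weights.append(len(chunk) / n)
    M = sum(w * np.outer(m, m) for w, m in zip(weights, slice_means))
    vals, vecs = np.linalg.eigh(M)
    # map the top directions back to the original coordinates
    return evecs @ np.diag(evals ** -0.5) @ evecs.T @ vecs[:, ::-1][:, :n_dirs]

# Toy single-index analogue of x_t = f(beta^T X_{t-1}, eps): recover beta.
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))
beta_true = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = np.tanh(X @ beta_true) + 0.1 * rng.normal(size=2000)
beta_hat = sir_directions(X, y, n_dirs=1)[:, 0]
```

For time series, X would be the matrix of lag vectors X_{t-1} and y the series x_t; SIR recovers the span of the beta_i up to sign and scale.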
Unifying parametrized VLSI Jacobi algorithms and architectures
NASA Astrophysics Data System (ADS)
Deprettere, Ed F. A.; Moonen, Marc
1993-11-01
Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implementing several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a `universal' processing unit. This result is attributed to a graceful interplay between algorithmic and architectural engineering.
Optimization of noncollinear optical parametric amplification
NASA Astrophysics Data System (ADS)
Schimpf, D. N.; Rothardt, J.; Limpert, J.; Tünnermann, A.
2007-02-01
Noncollinearly phase-matched optical parametric amplifiers (NOPAs), pumped with the green light of a frequency-doubled Yb-doped fiber-amplifier system [1,2], permit convenient generation of ultrashort pulses in the visible (VIS) and near infrared (NIR) [3]. The broad bandwidth of the parametric gain via the noncollinear pump configuration allows amplification of few-cycle optical pulses when seeded with a spectrally flat, re-compressible signal. Short pulses tunable over a wide region in the visible allow frontiers in physics and the life sciences to be pushed forward. For instance, the resulting high temporal resolution is of significance for many spectroscopic techniques. Furthermore, the high peak powers of the produced pulses allow research in high-field physics. To understand the demands of noncollinear optical parametric amplification using a fiber pump source, it is important to investigate this configuration in detail [4]. Such an analysis provides not only insight into the parametric process but also determines an optimal choice of experimental parameters for the objective. Here, the intention is to design a configuration which yields the shortest possible temporal pulse. As a consequence of this analysis, the experimental setup could be optimized. A number of aspects of optical parametric amplifier performance have been treated analytically and computationally [5], but these do not fully cover the situation under consideration here.
Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Vu; Duong, Tuan
2005-01-01
A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in "Method of Real-Time Principal-Component Analysis" (NPO-40034), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
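A generic gradient-descent sequential PCA in the same family can be sketched as follows. This is an Oja-rule illustration, not the DOGEDYN algorithm itself (which adds dominant-element updates and a dynamic initial learning rate); all names and parameters are assumptions.

```python
import numpy as np

def sequential_pca(X, n_components=2, lr=0.01, epochs=50):
    """Sequential PCA sketch: extract one principal vector at a time by
    gradient descent (Oja's rule), then deflate the data and repeat."""
    rng = np.random.default_rng(0)
    X = X - X.mean(axis=0)
    components = []
    for _ in range(n_components):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:
                y = w @ x
                w += lr * y * (x - y * w)      # Oja's rule update
            w /= np.linalg.norm(w)
        components.append(w.copy())
        X = X - np.outer(X @ w, w)             # deflate: remove extracted component
    return np.array(components)

rng = np.random.default_rng(5)
data = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 1.0]])  # dominant x-axis variance
comps = sequential_pca(data, n_components=1)
```

The sequential structure is what keeps memory low: each principal vector is learned and removed before the next one is touched.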
The report gives results of a recent analysis showing that cost- effective indoor radon reduction technology is required for houses with initial radon concentrations < 4 pCi/L, because 78-86% of the national lung cancer risk due to radon is associated with those houses. ctive soi...
NASA Astrophysics Data System (ADS)
Miyazaki, Ryoichi; Saruwatari, Hiroshi; Shikano, Kiyohiro
We propose a structure-generalized blind spatial subtraction array (BSSA) and a theoretical analysis of the amounts of musical noise and speech distortion. The structure of the BSSA should be selected according to the application, i.e., a channelwise BSSA is recommended for listening, but a conventional BSSA is suitable for speech recognition.
Time-dependent accident sequence analysis
Chu, T.L.
1983-01-01
One problem of the current event tree methodology is that the transitions between accident sequences are not modeled. The causes of transitions are mostly operator actions during an accident. A model for such transitions is presented. A generalized algorithm is used for quantification. In the more realistic accident analysis, the progression of the physical processes, which determines the time available for proper operator response, is modeled. Furthermore, the uncertainty associated with the physical modeling is considered. As an example, the approach is applied to analyze TMI-type accidents. Statistical evidence is collected and used in assessing the frequency of a stuck-open power-operated relief valve at B&W plants as well as the frequency of misdiagnosis. Statistical data are also used in modeling the timing of operator actions during the accident. A thermal code (CUT) is developed to determine the time at which core uncovery occurs. A response surface is used to propagate the uncertainty associated with the thermal code.
A parametric analysis of waves propagating in a porous solid saturated by a three-phase fluid.
Santos, Juan E; Savioli, Gabriela B
2015-11-01
This paper presents an analysis of a model for the propagation of waves in a poroelastic solid saturated by a three-phase viscous, compressible fluid. The constitutive relations and the equations of motion are stated first. Then a plane wave analysis determines the phase velocities and attenuation coefficients of the four compressional waves and one shear wave that propagate in this type of medium. A procedure to compute the elastic constants in the constitutive relations is defined next. Assuming the knowledge of the shear modulus of the dry matrix, the other elastic constants in the stress-strain relations are determined by employing ideal gedanken experiments generalizing those of Biot's theory for single-phase fluids. These experiments yield expressions for the elastic constants in terms of the properties of the individual solid and fluid phases. Finally the phase velocities and attenuation coefficients of all waves are computed for a sample of Berea sandstone saturated by oil, gas, and water. PMID:26627777
Analysis of the time scales in time periodic Darcy flows
NASA Astrophysics Data System (ADS)
Zhu, T.; Waluga, C.; Wohlmuth, B.; Manhart, M.
2014-12-01
We investigate unsteady flow in a porous medium under a time-periodic (sinusoidal) pressure gradient. DNS were performed to benchmark the analytical solution of the unsteady Darcy equation with two different expressions of the time scale: one given by a consistent volume averaging of the Navier-Stokes equation [1] with a steady-state closure for the flow resistance term, another given by volume averaging of the kinetic energy equation [2] with a closure for the dissipation rate. For small and medium frequencies, the analytical solutions with the time scale obtained by the energy approach compare well with the DNS results in terms of amplitude and phase lag. For large frequencies (f > 100 Hz) we observe a slightly smaller damping of the amplitude. This study supports the use of the unsteady form of Darcy's equation with constant coefficients to solve time-periodic Darcy flows at low and medium frequencies. Our DNS simulations, however, indicate that the time scale predicted by the VANS approach together with a steady-state closure for the flow resistance term is too small. The one obtained by the energy approach matches the DNS results well. At large frequencies, the amplitudes deviate slightly from the analytical solution of the unsteady Darcy equation. Note that at those high frequencies, the flow amplitudes remain below 1% of those of steady-state flow. This result indicates that unsteady porous media flow can approximately be described by the unsteady Darcy equation with constant coefficients for a large range of frequencies, provided the proper time scale has been found.
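For a constant time scale tau, the unsteady Darcy equation under sinusoidal forcing reduces to a first-order relaxation, so the amplitude damping and phase lag the abstract compares against DNS follow in closed form. The sketch below assumes the linear relaxation form tau*du/dt + u = forcing (the numbers are illustrative, not from the paper):

```python
import numpy as np

def darcy_response(freq, tau):
    """Amplitude ratio and phase lag of a first-order relaxation
    (tau*du/dt + u = forcing) under sinusoidal forcing of frequency
    `freq` in Hz; tau is the constant Darcy time scale in seconds."""
    omega_tau = 2 * np.pi * freq * tau
    amplitude = 1.0 / np.sqrt(1.0 + omega_tau ** 2)   # |1/(1 + i*omega*tau)|
    phase_lag = np.arctan(omega_tau)                   # approaches pi/2 at high freq
    return amplitude, phase_lag

amp_low, _ = darcy_response(0.1, tau=1e-3)        # quasi-steady at low frequency
amp_high, lag_high = darcy_response(1e4, tau=1e-3)  # strongly damped, ~90 deg lag
```

This makes the paper's point concrete: the choice of tau only matters once omega*tau approaches unity, which is why the low- and medium-frequency regimes are well described by constant coefficients.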
Ultra-Broad-Band Optical Parametric Amplifier or Oscillator
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry; Matsko, Andrey; Savchenkov, Anatolly; Maleki, Lute
2009-01-01
A concept for an ultra-broad-band optical parametric amplifier or oscillator has emerged as a by-product of a theoretical study in fundamental quantum optics. The study was originally intended to address the question of whether the two-photon temporal correlation function of light [in particular, light produced by spontaneous parametric down conversion (SPDC)] can be considerably narrower than the inverse of the spectral width (bandwidth) of the light. The answer to the question was found to be negative. More specifically, on the basis of the universal integral relations between the quantum two-photon temporal correlation and the classical spectrum of light, it was found that the lower limit of two-photon correlation time is set approximately by the inverse of the bandwidth. The mathematical solution for the minimum two-photon correlation time also provides the minimum relative frequency dispersion of the down-converted light components; in turn, the minimum relative frequency dispersion translates to the maximum bandwidth, which is important for the design of an ultra-broad-band optical parametric oscillator or amplifier. In the study, results of an analysis of the general integral relations were applied in the case of an optically nonlinear, frequency-dispersive crystal in which SPDC produces collinear photons. Equations were found for the crystal orientation and pump wavelength, specific for each parametric-down-converting crystal, that eliminate the relative frequency dispersion of collinear degenerate (equal-frequency) signal and idler components up to the fourth order in the frequency-detuning parameter.
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Time delay analysis. 417.221 Section 417... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean...
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Time delay analysis. 417.221 Section...
Time series analysis of temporal networks
NASA Astrophysics Data System (ADS)
Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh
2016-01-01
A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; rather, an estimate of some of the properties is sufficient to launch the attack. In this paper, we show that even if the network structure at a future time point is not available, one can still manage to estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them and, using a standard forecast model of time series, try to predict the properties of a temporal network at a later time instance. To this aim, we consider eight properties such as number of active nodes, average degree, clustering coefficient, etc., and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as Auto-Regressive Integrated Moving Average (ARIMA). We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series obtained from spectrogram analysis could be used to refine the prediction framework by identifying beforehand the cases where the error in prediction is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application, we show how such a prediction scheme can be used to launch targeted attacks on temporal networks. Contribution to the Topical Issue
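The per-property forecasting step can be sketched with a plain least-squares AR(p) model, a simplified stand-in for the ARIMA modelling the abstract uses (the property series and all names below are hypothetical illustrations, not the paper's data):

```python
import numpy as np

def ar_forecast(series, p=3, steps=5):
    """Least-squares AR(p) fit and recursive multi-step forecast of a
    scalar network-property time series (e.g. active nodes per snapshot)."""
    x = np.asarray(series, dtype=float)
    # design matrix of lagged values: row i holds [x_{i-1}, ..., x_{i-p}]
    rows = np.array([x[i - p:i][::-1] for i in range(p, len(x))])
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(rows)), rows]),
                               x[p:], rcond=None)
    history = list(x)
    preds = []
    for _ in range(steps):
        lags = history[-p:][::-1]
        nxt = coef[0] + np.dot(coef[1:], lags)   # intercept + AR terms
        preds.append(nxt)
        history.append(nxt)
    return np.array(preds)

# Hypothetical property series: number of active nodes per daily snapshot.
active_nodes = [50, 52, 55, 53, 57, 60, 58, 62, 65, 63, 67, 70, 69, 72, 75]
forecast = ar_forecast(active_nodes, p=2, steps=3)
```

In the paper's pipeline the same idea is applied independently to each of the eight properties, with differencing (the "I" in ARIMA) handling non-stationarity.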
Parametric Explosion Spectral Model
Ford, S R; Walter, W R
2012-01-19
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner frequency, and spectral slope at high frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas porosity (low strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
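A three-parameter spectral model of the kind described (long-period level, corner frequency, high-frequency slope) can be sketched as follows. This is one common generalized-Brune form chosen for illustration; the paper's exact parameterization may differ, and all parameter values below are assumptions.

```python
import numpy as np

def generalized_brune(f, omega0, fc, p):
    """Source displacement spectrum: long-period level `omega0`, corner
    frequency `fc`, high-frequency fall-off exponent `p`. The classic
    Brune (1970) spectrum is the special case p = 2."""
    return omega0 / (1.0 + (f / fc) ** p)

f = np.logspace(-1, 2, 200)                       # 0.1 to 100 Hz
s = generalized_brune(f, omega0=1e3, fc=2.0, p=2.0)
```

Fitting (omega0, fc, p) to observed regional-phase spectra, after correcting for spreading and attenuation, gives the parameters that the study correlates with near-source geology and containment.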
NASA Astrophysics Data System (ADS)
Ramesh, Azadeh; Glade, Thomas; Malet, Jean-Philippe
2010-09-01
The existence of a trend in hydrological and meteorological time series is detected by statistical tests. Trend analysis of hydrological and meteorological series is important to consider because of the effects of global climate change. Parametric or non-parametric statistical tests can be used to decide whether there is a statistically significant trend. In this paper, first a homogeneity analysis was performed using the non-parametric Bartlett test. Then, trend detection was carried out using the non-parametric Mann-Kendall test. The null hypothesis in the Mann-Kendall test is that the data are independent and randomly ordered. The result of the Mann-Kendall test was compared with the parametric t-test for detecting the existence of a trend. To this end, the significance of trends was analyzed on monthly river discharge and meteorological data of the Ubaye River in the Barcelonnette watershed in southeastern France, at an elevation of 1132 m (3717 ft), for the period from 1928 to 2009, using the non-parametric Mann-Kendall test and the parametric t-test. The results show that a rainfall event does not necessarily have an immediate impact on discharge. Visual inspection suggests that the correlation between observations made at the same time point is not very strong. In the results of the trend tests, the p-value of the discharge is slightly smaller than the p-value of the precipitation, but it seems that in both cases there is no statistically significant trend. In statistical hypothesis testing, a test statistic is a numerical summary of a set of data that reduces the data to one or a small number of values that can be used to perform a hypothesis test; here, it determines whether there is a significant trend or not. Negative test statistics of the MK test in both the precipitation and discharge data indicate downward trends. In conclusion, we can say that extreme flood events during recent years strongly depend on: 1) the location of the city: It is
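The Mann-Kendall test at the heart of this abstract can be sketched directly. This is a minimal implementation for illustration (it uses the no-ties variance formula and the normal approximation; the synthetic series is an assumption, not the Ubaye data):

```python
import math
import numpy as np

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test: S statistic, normal-
    approximation z-score, and two-sided p-value under the null
    hypothesis of independent, randomly ordered data (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()   # sum of signs over all pairs
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)          # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, z, p

rng = np.random.default_rng(6)
trended = np.arange(100) * 0.05 + rng.normal(0, 1, 100)   # weak upward trend + noise
s, z, p = mann_kendall(trended)
```

Positive S indicates an upward trend, negative S a downward one; the sign convention matches the abstract's reading of negative test statistics as downward trends.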
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1974-01-01
The capabilities are analyzed of a real aperture, forward-looking imaging radar for use as an independent landing monitor, which will provide the pilot with an independent means of assessing the progress of an automatic landing during Category 3 operations. The analysis shows that adequate ground resolution and signal-to-noise ratio can be obtained to image a runway with grassy surroundings using a radar operating at 35 GHz in good weather and in most fog but that performance is severely degraded in moderate to heavy rain and wet snow. Weather effects on a 10 GHz imager are not serious, with the possible exception of very heavy rain, but the azimuthal resolution at 10 GHz is inadequate with antennas up to 2 m long.
NASA Astrophysics Data System (ADS)
Abbas, Musharaf; Hasham, Hasan Junaid; Baig, Yasir
2016-02-01
A numerically based finite element investigation has been conducted to explain the effect of bond coat thickness on stress distribution in traditional and nanostructured yttria-stabilized zirconia (YSZ)-based thermal barrier coatings (TBCs). Stress components have been determined to quantitatively analyze the mechanical response of both kinds of coatings under the thermal shock effect. It has been found that the maximum radial tensile and compressive stresses, which exist at the thermally grown oxide (TGO)/bond coat interface and within the TGO respectively, decrease with an increase in bond coat thickness. The effect of bond coat thickness on axial tensile stresses is not significant. However, axial compressive stresses that exist at the edge of the specimen near the bond coat/substrate interface decrease appreciably with the increase in bond coat thickness. The residual stress profile as a function of bond coat thickness is further explained for a comparative analysis of both coatings to draw some useful conclusions helpful in failure studies of TBCs.
Yovel, Yossi; Assaf, Yaniv
2007-03-01
Individual mapping of cerebral, morphological, functionally related structures using MRI was carried out using a new multi-contrast acquisition and analysis framework, called virtual-dot-com imaging. So far, conventional anatomical MRI has been able to provide gross segmentation of gray/white matter boundaries and a few sub-cortical structures. By combining a handful of imaging contrasts mechanisms (T1, T2, magnetization transfer, T2* and proton density), we were able to further segment sub-cortical tissue to its sub-nuclei arrangement, a segmentation that is difficult based on conventional, single-contrast MRI. Using an automatic four-step image and signal processing algorithm, we segmented the thalamus to at least 7 sub-nuclei with high similarity across subjects and high statistical significance within subjects (p<0.0001). The identified sub-nuclei resembled the known anatomical arrangement of the thalamus given in various atlases. Each cluster was characterized by a unique MRI contrast fingerprint. With this procedure, the weighted proportions of the different cellular compartments could be estimated, a property available to date only by histological analysis. Each sub-nucleus could be characterized in terms of normalized MRI contrast and compared to other sub-nuclei. The different weights of the contrasts (T1/T2/T2*/PD/MT, etc.) for each sub-nuclei cluster might indicate the intra-cluster morphological arrangement of the tissue that it represents. The implications of this methodology are far-ranging, from non-invasive, in vivo, individual mapping of histologically distinct brain areas to automatic identification of pathological processes. PMID:17208461
NASA Astrophysics Data System (ADS)
Ahmadian, Mehdi; Blanchard, Emmanuel
2011-02-01
This article provides a non-dimensionalised closed-form analysis of semi-active vehicle suspensions, using a quarter-car model. The derivation of the closed-form solutions for three indices that can be used for ride comfort, vehicle handling, and stability are presented based on non-dimensionalised suspension parameters. The behaviour of semi-active vehicle suspensions is evaluated using skyhook, groundhook, and hybrid control policies, and compared with passive suspensions. The relationship between vibration isolation, suspension deflection, and road holding is studied, using three performance indices based on the mean square of the sprung mass acceleration, rattle space, and tyre deflection, respectively. The results of the study indicate that the hybrid control policy yields significantly better comfort than a passive suspension, without reducing the road-holding quality or increasing the suspension displacement for typical passenger cars. The results also indicate that for typical passenger cars, the hybrid control policy results in a better compromise between comfort, road holding and suspension travel requirements than both the skyhook and groundhook control methods.
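The three control policies compared above can be sketched as simple switching laws on the semi-active damper force. This is a minimal illustration only, not the paper's non-dimensionalised formulation; the gain `c_max` and blend parameter `alpha` are illustrative assumptions.

```python
def skyhook_force(c_max, v_sprung, v_rel):
    """Skyhook policy: damp proportionally to sprung-mass velocity when the
    damper can actually dissipate energy, otherwise switch soft (zero)."""
    return c_max * v_sprung if v_sprung * v_rel > 0 else 0.0

def groundhook_force(c_max, v_unsprung, v_rel):
    """Groundhook policy: targets unsprung-mass (wheel-hop) motion instead."""
    return -c_max * v_unsprung if -v_unsprung * v_rel > 0 else 0.0

def hybrid_force(c_max, v_sprung, v_unsprung, alpha=0.5):
    """Hybrid policy: a weighted blend of the skyhook and groundhook forces.
    alpha = 1 recovers pure skyhook, alpha = 0 pure groundhook."""
    v_rel = v_sprung - v_unsprung
    return (alpha * skyhook_force(c_max, v_sprung, v_rel)
            + (1 - alpha) * groundhook_force(c_max, v_unsprung, v_rel))
```

Tuning `alpha` between the two extremes is what produces the compromise between comfort and road holding noted in the study.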
Parametric instabilities in the LCGT arm cavity
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Uchiyama, T.; Miyoki, S.; Ohashi, M.; Kuroda, K.; Numata, K.
2008-07-01
We evaluated the parametric instabilities of the LCGT (Japanese interferometric gravitational wave detector project) arm cavity. The number of unstable modes in LCGT is ten times smaller than in Advanced LIGO (USA). Since the strength of the instabilities in LCGT depends on the mirror curvature more weakly than in Advanced LIGO, the required mirror curvature accuracy is easier to achieve. The difference in parametric instabilities between LCGT and Advanced LIGO stems from the thermal noise reduction methods (LCGT, cooled sapphire mirrors; Advanced LIGO, fused silica mirrors with larger laser beams), which are the main strategies of the two projects. Elastic Q reduction by the barrel-surface coating (0.2 mm thick Ta2O5) is effective in suppressing instabilities in the LCGT arm cavity. Therefore, the cryogenic interferometer is a smart solution for the parametric instabilities in addition to thermal noise and thermal lensing.
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automating these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
Simple heterogeneity parametrization for sea surface temperature and chlorophyll
NASA Astrophysics Data System (ADS)
Skákala, Jozef; Smyth, Timothy J.
2016-06-01
Using satellite maps, this paper offers a comprehensive analysis of chlorophyll & SST heterogeneity in the shelf seas around the southwest of the UK. The heterogeneity scaling follows a simple power law and is consequently parametrized by two parameters. It is shown that in most cases these two parameters vary relatively little with time. The paper offers a detailed comparison of field heterogeneity between different regions. It also determines how much of each region's heterogeneity is preserved in the annual median data. The paper explicitly demonstrates how these results can be used to calculate the representative measurement area for in situ networks.
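A two-parameter power-law parametrization of the kind described above amounts to a straight-line fit in log-log space. A minimal sketch, with the scale variable and the heterogeneity measure as placeholders for the paper's specific definitions:

```python
import numpy as np

def fit_power_law(scales, values):
    """Fit values = a * scales**b by least squares in log-log space.
    Returns (a, b): the prefactor and the scaling exponent."""
    b, log_a = np.polyfit(np.log(scales), np.log(values), 1)
    return np.exp(log_a), b
```

With two fitted numbers per field and date, comparisons between regions or between months reduce to comparing (a, b) pairs.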
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Straight-up time analysis. 417.215 Section..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.215 Straight-up time analysis. A flight safety analysis must establish the straight-up time for a launch for use as a...
Multifractal Time Series Analysis Based on Detrended Fluctuation Analysis
NASA Astrophysics Data System (ADS)
Kantelhardt, Jan; Stanley, H. Eugene; Zschiegner, Stephan; Bunde, Armin; Koscielny-Bunde, Eva; Havlin, Shlomo
2002-03-01
In order to develop an easily applicable method for the multifractal characterization of non-stationary time series, we generalize the detrended fluctuation analysis (DFA), which is a well-established method for the determination of the monofractal scaling properties and the detection of long-range correlations. We relate the new multifractal DFA method to the standard partition function-based multifractal formalism, and compare it to the wavelet transform modulus maxima (WTMM) method, which is a well-established but more difficult procedure for this purpose. We employ the multifractal DFA method to determine whether the heart rhythm during different sleep stages is characterized by different multifractal properties.
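The monofractal DFA procedure that the multifractal method generalizes can be sketched in a few lines: integrate the series, detrend it in windows of size s, and read the scaling exponent off the log-log slope of the fluctuation function. (The multifractal extension averages q-th order moments of the segment variances, which is not shown here.)

```python
import numpy as np

def dfa(x, scales, order=1):
    """Monofractal detrended fluctuation analysis.
    Returns the fluctuation function F(s) for each window size s."""
    profile = np.cumsum(x - np.mean(x))          # integrated (profile) series
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        var = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)   # local polynomial trend
            var.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluct.append(np.sqrt(np.mean(var)))
    return np.array(fluct)

# Scaling exponent alpha from the log-log slope of F(s) vs s;
# uncorrelated noise should give alpha close to 0.5
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(white, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Long-range correlated series give alpha above 0.5, which is the property exploited in the heartbeat analysis.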
NASA Technical Reports Server (NTRS)
Rosu, Grigore (Inventor); Chen, Feng (Inventor); Chen, Guo-fang; Wu, Yamei; Meredith, Patrick O. (Inventor)
2014-01-01
A program trace is obtained and events of the program trace are traversed. For each event identified in traversing the program trace, a trace slice of which the identified event is a part is identified based on the parameter instance of the identified event. For each trace slice of which the identified event is a part, the identified event is added to an end of a record of the trace slice. These parametric trace slices can be used in a variety of different manners, such as for monitoring, mining, and predicting.
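A heavily simplified sketch of the slicing step: each event carries a parameter binding, and events sharing a binding are appended in trace order to that instance's record. Real parametric trace slicing, as patented here, also dispatches events to all compatible less-informative instances; this sketch groups exact bindings only.

```python
from collections import defaultdict

def slice_trace(events):
    """Split a parametric trace into per-parameter-instance slices.
    Each event is (event_name, params), where params is a dict such as
    {'it': 'i1'}; events with the same binding land in the same slice."""
    slices = defaultdict(list)
    for name, params in events:
        key = tuple(sorted(params.items()))   # parameter instance as slice key
        slices[key].append(name)              # append to the end of the record
    return dict(slices)
```

Each resulting slice is a plain (non-parametric) trace that can be fed to a monitor or miner.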
NASA Astrophysics Data System (ADS)
Ahmad, Waqas; Fatima, Aamira; Awan, Usman Khalid; Anwar, Arif
2014-11-01
The Indus basin of Pakistan is vulnerable to climate change, which would directly affect the livelihoods of poor people engaged in irrigated agriculture. The situation could be worse in the middle and lower parts of this basin, which occupy 90% of the irrigated area. The objective of this research is to analyze the long-term meteorological trends in the middle and lower parts of the Indus basin of Pakistan. We used monthly data from 1971 to 2010 and applied the non-parametric seasonal Kendall test for trend detection, in combination with the seasonal Kendall slope estimator to quantify the magnitude of trends. The meteorological parameters considered were mean maximum and mean minimum air temperature, and rainfall, from 12 meteorological stations located in the study region. We examined the reliability and spatial integrity of the data by mass-curve analysis and spatial correlation matrices, respectively. Analysis was performed for four seasons (spring: March to May; summer: June to August; fall: September to November; winter: December to February). The results show that maximum temperature has an average increasing trend of + 0.16, + 0.03, 0.0 and + 0.04 °C/decade during the four seasons, respectively. The average trend of minimum temperature during the four seasons also increases, with magnitudes of + 0.29, + 0.12, + 0.36 and + 0.36 °C/decade, respectively. Persistence of the increasing trend is more pronounced in the minimum temperature than in the maximum temperature on an annual basis. Analysis of the rainfall data has not shown any noteworthy trend during winter, fall, or on an annual basis. However, during the spring and summer seasons the rainfall trends vary from - 1.15 to + 0.93 and - 3.86 to + 2.46 mm/decade, respectively. It is further revealed that rainfall trends during all seasons are statistically non-significant. Overall, the study area is under a significant warming trend with no change in rainfall.
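The non-seasonal building blocks of this trend analysis, the Mann-Kendall S statistic and Sen's slope estimator, can be sketched as follows; the seasonal variants used in the paper apply these within each season and combine the results across seasons.

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of signs over all ordered pairs.
    Positive S indicates an increasing trend, negative a decreasing one."""
    x = np.asarray(x, dtype=float)
    return sum(np.sum(np.sign(x[i + 1:] - x[i])) for i in range(len(x) - 1))

def sens_slope(x):
    """Sen's slope: the median of all pairwise slopes, a robust
    estimate of the trend magnitude per time step."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))
```

Both are rank-based, so they are insensitive to outliers and to the non-normality typical of rainfall records.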
2012-01-01
Background Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Methods Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. Results and conclusions After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12) – when binary scored – were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech’s “well-being” and “distress” clinical scales). An illustration of ordinal item analysis confirmed that all 14
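The core scalability coefficient of Mokken's method, Loevinger's H, can be sketched for binary items as one minus the ratio of observed to expected Guttman errors. This is an illustrative simplification of what dedicated software (such as the R package mokken) computes; ties between item popularities are simply skipped here.

```python
import numpy as np

def loevinger_h(data):
    """Overall scalability coefficient H for binary item data
    (rows = respondents, columns = items):
    H = 1 - (observed Guttman errors) / (errors expected under independence)."""
    n, k = data.shape
    p = data.mean(axis=0)                      # item popularities
    observed = expected = 0.0
    for i in range(k):
        for j in range(k):
            if p[i] < p[j]:                    # item i is the 'harder' item
                # Guttman error: endorsing the harder item but not the easier
                observed += np.sum((data[:, i] == 1) & (data[:, j] == 0))
                expected += n * p[i] * (1 - p[j])
    return 1.0 - observed / expected

# A perfect Guttman pattern produces no errors, hence H = 1
guttman = np.array([[0, 0, 0],
                    [1, 0, 0],
                    [1, 1, 0],
                    [1, 1, 1]])
```

Mokken's conventional thresholds (H above 0.3 for a weak scale, above 0.5 for a strong one) are then applied to values computed this way.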
Progress in optical parametric oscillators
NASA Technical Reports Server (NTRS)
Fan, Y. X.; Byer, R. L.
1984-01-01
It is pointed out that tunable coherent sources are very useful for many applications, including spectroscopy, chemistry, combustion diagnostics, and remote sensing. Compared with other tunable sources, optical parametric oscillators (OPOs) offer the potential advantage of a wide wavelength operating range, which extends from 0.2 micron to 25 microns. The current status of OPOs is examined, taking into account mainly advances made during the last decade. Attention is given to early LiNbO3 parametric oscillators, problems which have prevented wide use of parametric oscillators, the demonstration of OPOs using urea and AgGaS2, progress related to picosecond OPOs, a breakthrough in nanosecond parametric oscillators, the first demonstration of waveguide and fiber parametric amplification and generation, the importance of chalcopyrite crystals, and theoretical work performed with the aim of understanding the factors affecting parametric oscillator performance.
Real-time Forensic Disaster Analysis
NASA Astrophysics Data System (ADS)
Wenzel, F.; Daniell, J.; Khazai, B.; Mühr, B.; Kunz-Plapp, T.; Markus, M.; Vervaeck, A.
2012-04-01
The Center for Disaster Management and Risk Reduction Technology (CEDIM, www.cedim.de) - an interdisciplinary research center founded by the German Research Centre for Geoscience (GFZ) and Karlsruhe Institute of Technology (KIT) - has embarked on a new style of disaster research known as Forensic Disaster Analysis. The notion has been coined by the Integrated Research on Disaster Risk initiative (IRDR, www.irdrinternational.org) launched by ICSU in 2010. It has been defined as an approach to studying natural disasters that aims at uncovering the root causes of disasters through in-depth investigations that go beyond the reconnaissance reports and case studies typically conducted after disasters. In adopting this comprehensive understanding of disasters, CEDIM adds a real-time component to the assessment and evaluation process. By comprehensive we mean that most if not all relevant aspects of disasters are considered and jointly analysed. This includes the impact (human, economy, and infrastructure), comparisons with recent historic events, social vulnerability, reconstruction, and long-term impacts on livelihood issues. The forensic disaster analysis research mode is thus best characterized as "event-based research" through systematic investigation of critical issues arising after a disaster across various inter-related areas. The forensic approach requires (a) availability of global databases regarding previous earthquake losses, socio-economic parameters, building stock information, etc.; (b) leveraging platforms such as the EERI clearing house, relief-web, and the many local and international sources where information is organized; and (c) rapid access to critical information (e.g., crowd sourcing techniques) to improve our understanding of the complex dynamics of disasters. The main scientific questions being addressed are: What are critical factors that control loss of life, of infrastructure, and for economy? What are the critical interactions
CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM
NASA Technical Reports Server (NTRS)
Mccluney, K.
1994-01-01
In a real-time environment, the results of changes or failures in a complex, interconnected system need evaluation quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause. In this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system, and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF, FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allow the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic from within the program. A commonality check determines single point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three-button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. No executable is provided on the distribution medium; however
Problems of the design of low-noise input devices. [parametric amplifiers
NASA Technical Reports Server (NTRS)
Manokhin, V. M.; Nemlikher, Y. A.; Strukov, I. A.; Sharfov, Y. A.
1974-01-01
An analysis is given of the requirements placed on the elements of parametric centimeter waveband amplifiers for achievement of minimal noise temperatures. A low-noise semiconductor parametric amplifier using germanium parametric diodes for a receiver operating in the 4 GHz band was developed and tested confirming the possibility of satisfying all requirements.
Finite difference time domain analysis of microwave ferrite devices and mobile antenna systems
NASA Astrophysics Data System (ADS)
Yildirim, Bahadir Suleyman
This dissertation presents analysis and design of shielded mobile antenna systems and microwave ferrite devices using a finite-difference time-domain method. Novel shielded antenna structures suitable for cellular communications have been analyzed and designed with emphasis on reducing excessive radiated energy absorbed in the user's head and hand, while keeping the antenna performance at its peak in the presence of the user. These novel antennas include a magnetically shielded antenna, a dual-resonance shielded antenna, and a shorted and truncated microstrip antenna. The effect of magnetic coating on the performance of a shielded monopole antenna is studied extensively. A parametric study is performed to analyze the dual-resonance phenomenon observed in the dual-resonance shielded antenna, optimize the antenna design within the cellular communications band, and improve the antenna performance. Input impedance, near and far fields of the dual-resonance shielded antenna are calculated using the finite-difference time-domain method. Experimental validation is also presented. In addition, performance of a shorted and truncated microstrip antenna has been investigated over a wide range of substrate parameters and dimensions. Objectives of the research work also include development of a finite-difference time-domain technique to accurately model magnetically anisotropic media, including the effect of non-uniform magnetization within the finite-size ferrite material due to demagnetizing fields. A slow wave thin film isolator and a stripline disc junction circulator are analyzed. An extensive parametric study calculates wide-band frequency-dependent parameters of these devices for various device dimensions and material parameters. Finally, a ferrite-filled stripline configuration is analyzed to study the non-linear behaviour of ferrite by introducing a modified damping factor.
Gamma bang time analysis at OMEGA
McEvoy, A. M.; Herrmann, H. W.; Young, C. S.; Mack, J. M.; Kim, Y.; Evans, S.; Sedillo, T.; Horsfield, C. J.; Rubery, M.; Miller, E. K.; Stoeffl, W.; Ali, Z. A.
2010-10-15
Absolute bang time measurements with the gas Cherenkov detector (GCD) and gamma reaction history (GRH) diagnostic have been performed to high precision at the OMEGA laser facility at the University of Rochester with bang time values for the two diagnostics agreeing to within 5 ps on average. X-ray timing measurements of laser-target coupling were used to calibrate a facility-generated laser timing fiducial with rms spreads in the measured coupling times of 9 ps for both GCD and GRH. Increased fusion yields at the National Ignition Facility (NIF) will allow for improved measurement precision with the GRH easily exceeding NIF system design requirements.
NASA Technical Reports Server (NTRS)
Chiao, Raymond Y.; Kwiat, Paul G.; Steinberg, Aephraim M.
1992-01-01
The energy-time uncertainty principle is on a different footing from the momentum-position uncertainty principle: in contrast to position, time is a c-number parameter, and not an operator. As Aharonov and Bohm have pointed out, this leads to different interpretations of the two uncertainty principles. In particular, one must distinguish between an inner and an outer time in the definition of the spread in time, delta t. It is the inner time which enters the energy-time uncertainty principle. We have checked this by means of a correlated two-photon light source in which the individual energies of the two photons are broad in spectrum, but in which their sum is sharp. In other words, the pair of photons is in an entangled state of energy. By passing one member of the photon pair through a filter with width delta E, it is observed that the other member's wave packet collapses upon coincidence detection to a duration delta t, such that delta E(delta t) is approximately equal to Planck's constant divided by 2 pi, where this duration delta t is an inner time, in the sense of Aharonov and Bohm. We have measured delta t by means of a Michelson interferometer by monitoring the visibility of the fringes seen in coincidence detection. This is a nonlocal effect, in the sense that the two photons are far away from each other when the collapse occurs. We have excluded classical-wave explanations of this effect by means of triple coincidence measurements in conjunction with a beam splitter which follows the Michelson interferometer. Since Bell's inequalities are known to be violated, we believe that it is also incorrect to interpret this experimental outcome as if energy were a local hidden variable, i.e., as if each photon, viewed as a particle, possessed some definite but unknown energy before its detection.
Afterward: keeping analysis alive over time.
Kantrowitz, Judy L
2012-10-01
Development of a self-analytic function has historically been a goal of psychoanalysis. This article draws on interviews with former analysands to examine ways in which self-exploration continued after analysis. Former analysands who did not report ongoing self-exploration had not necessarily failed to benefit from analysis, nor had they not continued to benefit and grow after analysis ended. The author reflects on different ways of assimilating the analytic process and the analytic relationship, and self-analysis as a criterion by which to judge the success of analytic outcome is reconsidered. PMID:23327002
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... occurs; (2) A flight safety official's decision and reaction time, including variation in human response... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Time delay analysis. 417.221 Section 417... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis....
NASA Astrophysics Data System (ADS)
Anishchenko, S. V.; Baryshevsky, V. G.
2015-07-01
We study the features of cooperative parametric (quasi-Cherenkov) radiation arising when initially unmodulated electron (positron) bunches pass through a crystal (natural or artificial) under the conditions of dynamical diffraction of electromagnetic waves in the presence of shot noise. A detailed numerical analysis is given for cooperative THz radiation in artificial crystals. The radiation intensity above 200 MW/cm2 is obtained in simulations. The peak intensity of cooperative radiation emitted at small and large angles to particle velocity is investigated as a function of the current density of an electron bunch. The peak radiation intensity appeared to increase monotonically until saturation is achieved. At saturation, the shot noise causes strong fluctuations in the intensity of cooperative parametric radiation. It is shown that the duration of radiation pulses can be much longer than the particle flight time through the crystal. This enables a thorough experimental investigation of the time structure of cooperative parametric radiation generated by electron bunches available with modern accelerators. The complicated time structure of cooperative parametric (quasi-Cherenkov) radiation can be observed in crystals (natural or artificial) in all spectral ranges (X-ray, optical, terahertz, and microwave).
Non-Parametric Collision Probability for Low-Velocity Encounters
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2007-01-01
An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
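The Monte Carlo approach advocated here can be illustrated with a toy estimator: sample miss distances, count hits inside the hard-body radius, and attach a binomial (Wilson score) confidence interval. The uniform miss-distance distribution below is a stand-in for a full relative-dynamics simulation with accurate process noise, and the actuarial-style non-parametric estimator the paper proposes is not reproduced here.

```python
import numpy as np

def mc_collision_probability(n_trials, hard_radius, sample_miss_distance, rng):
    """Monte Carlo collision probability: the fraction of sampled relative
    trajectories whose closest approach falls inside the combined
    hard-body radius."""
    hits = sum(sample_miss_distance(rng) < hard_radius for _ in range(n_trials))
    return hits / n_trials

def wilson_interval(p_hat, n, z=1.96):
    """Distribution-free binomial (Wilson score) 95% confidence interval."""
    denom = 1.0 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Stand-in miss-distance model: uniform on [0, 10] km with a 1 km hard-body
# radius, so the true collision probability is 0.1
rng = np.random.default_rng(1)
p_hat = mc_collision_probability(10_000, 1.0, lambda r: r.uniform(0.0, 10.0), rng)
lo, hi = wilson_interval(p_hat, 10_000)
```

The interval width shrinks as the square root of the number of trials, which is what makes Monte Carlo affordable for the small probabilities typical of conjunction assessment only with careful variance reduction.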
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations, integrating building components at sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and through adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
Singular spectrum analysis for time series with missing data
Schoellhamer, D.H.
2001-01-01
Geophysical time series often contain missing data, which prevents analysis with many signal processing and multivariate tools. A modification of singular spectrum analysis for time series with missing data is developed and successfully tested with synthetic and actual incomplete time series of suspended-sediment concentration from San Francisco Bay. This method also can be used to low pass filter incomplete time series.
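Basic SSA on a complete series can be sketched as embed, decompose, and diagonally average; the paper's contribution, an iterative modification that handles missing data, is not shown in this sketch.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Basic singular spectrum analysis: build the trajectory matrix,
    take its SVD, keep the leading `rank` components, and map back to a
    series by anti-diagonal averaging."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):                 # average over anti-diagonals
        recon[col:col + window] += approx[:, col]
        counts[col:col + window] += 1
    return recon / counts

# A pure sinusoid has an exactly rank-2 trajectory matrix, so a rank-2
# reconstruction recovers it essentially to machine precision
t = np.arange(200)
x = np.sin(2 * np.pi * t / 20)
recovered = ssa_reconstruct(x, window=30, rank=2)
```

Keeping only the leading components is also how the method acts as a low-pass filter on noisy (or, with the paper's extension, incomplete) records.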
Statistical Evaluation of Time Series Analysis Techniques
NASA Technical Reports Server (NTRS)
Benignus, V. A.
1973-01-01
The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
Johnson, R.N.
1981-10-20
A method and apparatus for converting thermal energy into mechanical energy by parametric pumping of rotary inertia. In a preferred embodiment, a modified Tesla turbine rotor is positioned within a rotary boiler along its axis of rotation. An external heat source, such as solar radiation, is directed onto the outer casing of the boiler to convert the liquid to steam. As the steam spirals inwardly toward the discs of the rotor, the moment of inertia of the mass of steam is reduced to thereby substantially increase its kinetic energy. The laminar flow of steam between the discs of the rotor transfers the increased kinetic energy to the rotor, which can be coupled out through an output shaft to perform mechanical work. A portion of the mechanical output can be fed back to maintain rotation of the boiler.
Mechanical Parametric Oscillations and Waves
ERIC Educational Resources Information Center
Dittrich, William; Minkin, Leonid; Shapovalov, Alexander S.
2013-01-01
Parametric oscillations are not usually covered in general physics courses, probably because the mathematical theory of the phenomenon is relatively complicated and, until quite recently, laboratory experiments for students were difficult to implement. However, parametric oscillations are good illustrations of the laws of physics and can be…
Analysis of precipitation appearance in time
NASA Astrophysics Data System (ADS)
Bonacci, O.; Matean, D.
1999-08-01
This paper analyses precipitation occurrence in time. The calculations were made with data from continuous precipitation measurements by two automatic float-type rainfall recorders (Hellmann type) during the 10-year period 1984-1993. The measurement increment was 5 minutes with 0.1 mm resolution. The effect of different time increments on the calculated precipitation duration in a year has been researched. Calculations show that a smaller time increment diminishes the duration of precipitation in a year. If a 5-minute time increment is used, the precipitation duration is about 3% of the year; if a 24-hour time increment is used, it is 33% of the year. The real mean duration of yearly precipitation has been evaluated as 216 hours, that is, 2.47% of the year. The occurrence of precipitation intensities higher than 0.2 mm/min has been researched over the year and over 24 hours. Analyses show that intensive precipitation appears during the warmer part of the year, from June to August. The precipitation distribution is not uniform over a day. In the city of Zagreb, where both rain gauge stations are situated, in 90% of the cases precipitation with intensity higher than 1.2 mm/min falls during the night, from 9 p.m. to 1 a.m., at the same time causing floods.
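The dependence of the apparent precipitation duration on the time increment can be reproduced with a toy aggregation: any coarse increment containing rain counts as wet for its entire duration, so coarser increments inflate the wet fraction. The data below are illustrative, not the Zagreb record.

```python
import numpy as np

def wet_fraction(rain, base_minutes, increment_minutes):
    """Fraction of time classified as raining when base-resolution totals
    are grouped into coarser increments: any increment containing rain is
    counted as wet for its whole duration."""
    per_block = increment_minutes // base_minutes
    n_blocks = len(rain) // per_block
    blocks = np.asarray(rain[:n_blocks * per_block]).reshape(n_blocks, per_block)
    return float(np.mean(blocks.sum(axis=1) > 0))

# One wet 5-minute tick out of 100: 1% wet at 5-minute resolution,
# but a whole hour is flagged wet at 60-minute resolution
rain = [0.0] * 100
rain[10] = 0.3
```

This is exactly the mechanism behind the jump from about 3% (5-minute increments) to 33% (24-hour increments) reported above.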
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy change over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure, as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to inflate the Type I error rate or to produce an overly conservative test size, which corresponded to decreased power. An example with clinical data illustrates how the covariance structure selection procedure can be carried out in practice, as well as how the choice of covariance structure can change inferences about fixed effects. PMID:25293361
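A separable spatiotemporal correlation structure of the kind described here is the Kronecker product of a temporal and a spatial correlation matrix. A minimal sketch, with illustrative parameter values (not the study's), using AR(1) correlation in time and exponential correlation in space:

```python
import numpy as np

def exp_spatial(dists, range_):
    # exponential spatial correlation: exp(-d / range)
    return np.exp(-dists / range_)

def ar1_temporal(n_times, rho):
    # AR(1) temporal correlation: rho^|i-j|
    idx = np.arange(n_times)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# toy example: 3 spatial locations on a line, 4 time points
locs = np.array([0.0, 1.0, 2.5])
d = np.abs(locs[:, None] - locs[None, :])
R_s = exp_spatial(d, range_=2.0)
R_t = ar1_temporal(4, rho=0.6)

# separable spatiotemporal correlation: Kronecker product (time x space)
R_st = np.kron(R_t, R_s)
assert R_st.shape == (12, 12)

# a hallmark of separability: the determinant factors as
# det(R_t)^(n_space) * det(R_s)^(n_time)
lhs = np.linalg.det(R_st)
rhs = np.linalg.det(R_t) ** 3 * np.linalg.det(R_s) ** 4
assert np.isclose(lhs, rhs)
```

The Kronecker factorization is what makes these models tractable: the 12x12 matrix is never needed in factored computations, only the 4x4 temporal and 3x3 spatial pieces.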
Evolutionary factor analysis of replicated time series.
Motta, Giovanni; Ombao, Hernando
2012-09-01
In this article, we develop a novel method that explains the dynamic structure of multi-channel electroencephalograms (EEGs) recorded from several trials in a motor-visual task experiment. Preliminary analyses of our data suggest two statistical challenges. First, the variance at each channel and the cross-covariance between each pair of channels evolve over time. Moreover, the cross-covariance profiles display a common structure across all pairs, and these features consistently appear across all trials. In the light of these features, we develop a novel evolutionary factor model (EFM) for multi-channel EEG data that systematically integrates information across replicated trials and allows for smoothly time-varying factor loadings. The individual EEG series share common features across trials, suggesting the need to pool information across trials, which motivates the use of the EFM for replicated time series. We explain the common co-movements of EEG signals through the existence of a small number of common factors. These latent factors are primarily responsible for processing the visual-motor task and, through the loadings, drive the behavior of the signals observed at different channels. The estimation of the time-varying loadings is based on the spectral decomposition of the estimated time-varying covariance matrix. PMID:22364516
Event/Time/Availability/Reliability-Analysis Program
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Hoffman, D. J.; Carr, Thomas
1994-01-01
ETARA is an interactive, menu-driven program that performs simulations for analysis of reliability, availability, and maintainability. It was written to evaluate the performance of the electrical power system of Space Station Freedom, but the methodology and software can be applied to any system represented by a block diagram. The program is written in IBM APL.
Intersection of parametric surfaces using lookup tables
NASA Technical Reports Server (NTRS)
Hanna, S. L.; Abel, J. F.; Greenberg, D. P.
1984-01-01
When primitive structures in the form of parametric surfaces are combined and modified interactively to form complex intersecting surfaces, it becomes important to find the curves of intersection. One must distinguish between finding the shape of the intersection curve, which may only be useful for display purposes, and finding an accurate mathematical representation of the curve, which is important for any meaningful geometric modeling, analysis, design, or manufacturing involving the intersection. The intersection curve between two or more parametric surfaces is important in a variety of computer-aided design and manufacture areas. A few examples are shape design, analysis of groins, design of fillets, and computation of numerically controlled tooling paths. The algorithm presented here provides a mathematical representation of the intersection curve to a specified accuracy. It also provides the database that can simplify operations such as hidden-surface removal, surface rendering, profile identification, and interference or clearance computations.
Three Analysis Examples for Time Series Data
Technology Transfer Automated Retrieval System (TEKTRAN)
With improvements in instrumentation and the automation of data collection, plot level repeated measures and time series data are increasingly available to monitor and assess selected variables throughout the duration of an experiment or project. Records and metadata on variables of interest alone o...
SEC sensor parametric test and evaluation system
NASA Technical Reports Server (NTRS)
1978-01-01
This system provides the necessary automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests which are described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Data analysis of raw data is performed on the Princeton IBM 360-91 computer.
Time Series Analysis Using Geometric Template Matching.
Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina
2013-03-01
We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data. PMID:22641699
Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn
2015-01-01
-stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed. PMID:26301872
Scalable Hyper-parameter Estimation for Gaussian Process Based Time Series Analysis
Chandola, Varun; Vatsavai, Raju
2010-01-01
Gaussian process (GP) is increasingly becoming popular as a kernel machine learning tool for non-parametric data analysis. Recently, GP has been applied to model non-linear dependencies in time series data. GP-based analysis can be used to solve problems of time series prediction, forecasting, missing data imputation, change point detection, anomaly detection, etc. But the use of GP to handle massive scientific time series data sets has been limited, owing to its expensive computational complexity. The primary bottleneck is the handling of the covariance matrix, whose size is quadratic in the length of the time series. In this paper we propose a scalable method that exploits the special structure of the covariance matrix for hyper-parameter estimation in GP-based learning. The proposed method allows estimation of the hyper-parameters associated with the GP in quadratic time, an order of magnitude improvement over standard methods with cubic complexity. Moreover, the proposed method does not require explicit computation of the covariance matrix and hence has a memory requirement linear in the length of the time series, as opposed to the quadratic memory requirement of standard methods. To further improve the computational complexity of the proposed method, we provide a parallel version to concurrently estimate the log likelihood for a set of time series, which is the key step in the hyper-parameter estimation. Performance results on a multi-core system show that our proposed method provides significant speedups, as high as 1000, even when running in serial mode, while maintaining a small memory footprint. The parallel version exploits the natural parallelization potential of the serial algorithm and is shown to perform significantly better than the fast serial algorithm, with speedups as high as 10.
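The paper's exact algorithm is not reproduced here, but the special structure it refers to can be illustrated: a stationary kernel evaluated on a regularly sampled time series yields a Toeplitz covariance matrix, which can be embedded in a circulant matrix so that matrix-vector products run in O(n log n) time and O(n) memory via the FFT. A sketch under those assumptions (`toeplitz_matvec` is my name, not the paper's):

```python
import numpy as np

def toeplitz_matvec(first_col, v):
    """Multiply a symmetric Toeplitz matrix (given by its first column)
    by a vector via circulant embedding and the FFT: O(n log n) time and
    O(n) memory -- the full covariance matrix is never formed."""
    n = len(first_col)
    # embed into a circulant matrix of size 2n-2: column t0..t_{n-1}, t_{n-2}..t1
    c = np.concatenate([first_col, first_col[-2:0:-1]])
    w = np.zeros(len(c))
    w[:n] = v
    return np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(w), len(c))[:n]

# check against a dense build for a squared-exponential kernel on a regular grid
t = np.exp(-0.5 * np.arange(8) ** 2 / 4.0)   # first column of the kernel matrix
lags = np.abs(np.arange(8)[:, None] - np.arange(8)[None, :])
dense = t[lags]                               # full Toeplitz matrix, for testing only
v = np.arange(8, dtype=float)
assert np.allclose(dense @ v, toeplitz_matvec(t, v))
```

Fast matvecs of this kind are the building block for iterative log-likelihood and gradient evaluations that avoid forming or factoring the dense covariance.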
Tanagra: Timing Analysis of Grating Data
NASA Astrophysics Data System (ADS)
Kashyap, Vinay; Posson-Brown, Jennifer; Drake, Jeremy J.; Saar, Steven H.; Scargle, Jeffrey D.; Connors, Alanna
2014-08-01
We describe the Tanagra dataset, which now contains the complete set of cool stars observed with the Chandra gratings thus far. We have analyzed the datasets to produce a catalog of light curves, flare-like events, measures of spectral variability, evaluations of variability for strong spectral lines, correlations of intensity between high- and low-temperature lines, and other useful observational summaries. We describe the analysis methods used and the data products obtained, and highlight some special cases. This work has been supported by the Chandra archival grant AR0-11001X.
ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time, using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration, and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures is tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram; the block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nested block diagrams.
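The Monte Carlo core of such a RAM simulation can be sketched for a single repairable block with exponential up- and down-times (function names and parameters are illustrative, not ETARA's):

```python
import random

def simulate_availability(mtbf, mttr, horizon, trials=2000, seed=1):
    """Monte Carlo estimate of availability for one repairable block:
    alternate exponential up-times (mean mtbf) and down-times (mean mttr)
    over a mission of length `horizon`, averaging the uptime fraction
    across trials."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(trials):
        t, up = 0.0, 0.0
        while t < horizon:
            dt = rng.expovariate(1.0 / mtbf)      # draw time to failure
            up += min(dt, horizon - t)
            t += dt
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)      # draw repair time
        up_total += up / horizon
    return up_total / trials

a = simulate_availability(mtbf=100.0, mttr=5.0, horizon=1000.0)
# steady-state availability is MTBF / (MTBF + MTTR) = 100/105, about 0.952
assert abs(a - 100.0 / 105.0) < 0.02
```

A full RAM tool layers block-diagram logic (series/parallel success paths), Weibull distributions, and spares accounting on top of this sampling loop.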
Parametric Mass Reliability Study
NASA Technical Reports Server (NTRS)
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.
Time series data analysis using DFA
NASA Astrophysics Data System (ADS)
Okumoto, A.; Akiyama, T.; Sekino, H.; Sumi, T.
2014-02-01
Detrended fluctuation analysis (DFA) was originally developed for the evaluation of DNA sequences and of heartbeat intervals in heart rate variability (HRV) analysis, but it is now used to obtain various kinds of biological information. In this study we perform DFA on artificially generated data for which we already know the relationship between the signal and the physical event causing it. We generate artificial data using molecular dynamics: the Brownian motion of a polymer under an external force is investigated. In order to generate artificial fluctuations in the physical properties, we introduce obstacle pillars fixed to nanostructures. Using different conditions, such as the presence or absence of obstacles, the external field, and the polymer length, we perform DFA on the energies and positions of the polymer.
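A minimal DFA sketch (the standard algorithm, not the authors' specific code): integrate the series into a profile, detrend windows of increasing scale with linear fits, and read the scaling exponent off a log-log fit of the fluctuation function. For white noise the exponent is close to 0.5:

```python
import numpy as np

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis: integrate the series, split the
    profile into non-overlapping windows of each scale, remove a linear
    trend from each window, and regress log F(s) on log s for the
    scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))          # the profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))     # fluctuation at this scale
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(3)
alpha = dfa_alpha(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
assert 0.4 < alpha < 0.6      # uncorrelated noise: alpha near 0.5
```

Persistent (long-range correlated) signals give alpha above 0.5 and anti-persistent ones below, which is what makes DFA useful for the fluctuating energies and positions studied here.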
Apparatus for statistical time-series analysis of electrical signals
NASA Technical Reports Server (NTRS)
Stewart, C. H. (Inventor)
1973-01-01
An apparatus for performing statistical time-series analysis of complex electrical signal waveforms, permitting prompt and accurate determination of statistical characteristics of the signal is presented.
Time Investment and Time Management: An Analysis of Time Students Spend Working at Home for School
ERIC Educational Resources Information Center
Wagner, Petra; Schober, Barbara; Spiel, Christiane
2008-01-01
This paper deals with the time students spend working at home for school. In Study 1, we investigated amount and regulation of time. Study 2 serves to validate the results of Study 1 and, in addition, investigates the duration of the time units students used and their relation to scholastic success. In Study 1, the participants were 332 students…
Parametric Equations, Maple, and Tubeplots.
ERIC Educational Resources Information Center
Feicht, Louis
1997-01-01
Presents an activity that establishes a graphical foundation for parametric equations by using a graphing output form called tubeplots from the computer program Maple. Provides a comprehensive review and exploration of many previously learned topics. (ASK)
Parametric-Studies and Data-Plotting Modules for the SOAP
NASA Technical Reports Server (NTRS)
2008-01-01
"Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensional-appearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
Diode-pumped optical parametric oscillator
Geiger, A.R.; Hemmati, H.; Farr, W.H.
1996-02-01
Diode-pumped optical parametric oscillation has been demonstrated for the first time to our knowledge in a single Nd:MgO:LiNbO3 nonlinear crystal. The crystal is pumped by a semiconductor diode laser array at 812 nm. The Nd³⁺ ions absorb the 812-nm radiation to generate 1084-nm laser oscillation. On internal Q switching, the 1084-nm radiation pumps the LiNbO3 host crystal, which is angle cut at 46.5° and generates optical parametric oscillation. The oscillation threshold due to the 1084-nm laser pump, with a pulse length of 80 ns in a 1-mm-diameter beam, was measured to be ≈1 mJ and produced 0.5-mJ output at the 3400-nm signal wavelength. © 1996 Optical Society of America.
NASA Astrophysics Data System (ADS)
Das, D.; Dwivedi, A.
2013-10-01
The increasing number of thermal-management problems in electronic and computing equipment emphasizes the need for effective cooling systems. Although attaching extended surfaces (fins) is the most commonly proposed way to enhance the heat transfer rate, adding fins can sometimes deteriorate it. It therefore becomes imperative to optimize the control parameters for maximum heat transfer enhancement. Numerous experimental investigations reveal that the Rayleigh number, fin height, and fin spacing are the major design parameters affecting system performance. Determination of the optimum parameters depends on the proper selection of a suitable design of experiments at the product development phase. This paper compares and contrasts the general full factorial design approach with Taguchi's design of experiments for the determination of an optimum parametric design. These statistical approaches have been applied to the results of an experimental parametric study conducted to investigate the effect of the influencing parameters on free convective heat transfer from triangular fin arrays in a horizontally oriented rectangular enclosure.
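The contrast between the two designs can be made concrete with a small sketch (illustrative, not the paper's data): a full factorial over four three-level factors requires 3^4 = 81 runs, while Taguchi's L9 orthogonal array covers all main effects in 9 balanced runs.

```python
import itertools

# Taguchi L9(3^4): 9 runs for four 3-level factors, built from the classic
# modular construction; a full factorial would need 3**4 = 81 runs
L9 = [(a, b, (a + b) % 3, (a + 2 * b) % 3)
      for a, b in itertools.product(range(3), repeat=2)]
assert len(L9) == 9

# orthogonality: every pair of columns contains each of the 9 level
# combinations exactly once, so main effects are estimated in balance
for i, j in itertools.combinations(range(4), 2):
    assert len({(row[i], row[j]) for row in L9}) == 9
```

The price of the 9x reduction is that interactions are confounded with main effects, which is exactly the trade-off a full factorial avoids.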
Experimental Study of Parametric Subharmonic Instability in Stratified Fluids
NASA Astrophysics Data System (ADS)
Bourget, Baptiste; Joubaud, Sylvain; Odier, Philippe; Dauxois, Thierry
2012-11-01
Internal waves are believed to be of primary importance as they affect ocean mixing and energy transport. Several processes can lead to the breaking of internal waves, and they usually involve nonlinear interactions between waves. In this work, we study experimentally the Parametric Subharmonic Instability, which provides an efficient mechanism to transfer energy from large to smaller scales. It consists of the destabilization of a primary wave and the spontaneous emission of two secondary waves, of lower frequencies and different wave vectors. We observe that the instability displays a different behavior if the primary wave is a monochromatic vertical mode-1 or a plane wave. Moreover, using a time-frequency analysis, we are able to observe the time evolution of the secondary frequencies. Using a Hilbert transform method we measure the different wave vectors and compare them with theoretical predictions. As will be shown further, this instability plays a role in the mixing processes of stratified fluids (see abstract from P. Odier).
An Application of Parametric Mixed-Integer Linear Programming to Hydropower Development
NASA Astrophysics Data System (ADS)
Turgeon, André
1987-03-01
The problem consists in selecting the sites on the river where reservoirs and hydroelectric power plants are to be built and then determining the type and size of the projected installations. The solution obviously depends on the amount of money the utility is willing to invest, which itself is a function of what the new installations will produce. It is therefore necessary to solve the problem for all possible amounts of firm energy produced, since it is not known at the outset which production level the utility will select. This is done in the paper by a parametric mixed-integer linear programming (MILP) method whose efficiency derives from the fact that the branch-and-bound algorithm for selecting the sites to be developed (and consuming most of the computer time) is solved a minimum number of times. Between the points where the MILP problem is solved, LP parametric analysis is applied.
Airy beam optical parametric oscillator
NASA Astrophysics Data System (ADS)
Aadhi, A.; Chaitanya, N. Apurv; Jabir, M. V.; Vaity, Pravin; Singh, R. P.; Samanta, G. K.
2016-05-01
Airy beam, a non-diffracting waveform, has peculiar properties of self-healing and self-acceleration. Due to such unique properties, the Airy beam finds many applications including curved plasma wave-guiding, micro-particle manipulation, optically mediated particle clearing, long distance communication, and nonlinear frequency conversion. However, many of these applications, including laser machining of curved structures, generation of curved plasma channels, guiding of electric discharges in a curved path, study of nonlinear propagation dynamics, and nonlinear interaction, demand an Airy beam with high power, energy, and wavelength tunability. To date, none of the Airy beam sources have all these features in a single device. Here, we report a new class of coherent sources based on cubic phase modulation of a singly-resonant optical parametric oscillator (OPO), producing high-power, continuous-wave (cw), tunable radiation in a 2-D Airy intensity profile existing over a length >2 m. Based on a MgO-doped periodically poled LiNbO3 crystal pumped at 1064 nm, the Airy beam OPO produces output power of more than 8 W and wavelength tunability across 1.51–1.97 μm. This demonstration gives a new direction for the development of sources of arbitrary structured beams at any wavelength, power, and energy in all time scales (cw to femtosecond).
An averaging analysis of discrete-time indirect adaptive control
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
Baker, C.
1994-10-01
The Department of Energy's (DOE) Hanford site near Richland, Washington is being cleaned up after 50 years of nuclear materials production. One of the most serious problems at the site is the waste stored in single-shell underground storage tanks. There are 149 of these tanks containing the spent fuel residue remaining after the fuel is dissolved in acid and the desired materials (primarily plutonium and uranium) are separated out. The tanks are upright cylinders 75 ft. in diameter with domed tops. They are made of reinforced concrete, have steel liners, and each tank is buried under 7--12 ft. of overburden. The tanks are up to 40-ft. high, and have capacities of 500,000, 750,000, or 1,000,000 gallons of waste. As many as one-third of these tanks are known or suspected to leak. The waste form contained in the tanks varies in consistency from liquid supernatant to peanut-butter-like gels and sludges to hard salt cake (perhaps as hard as low-grade concrete). The current waste retrieval plan is to insert a large long-reach manipulator through a hole cut in the top of the tank, and use a variety of end-effectors to mobilize the waste and remove it from the tank. PNL has, with the assistance of Deneb robotics employees, developed a means of using the IGRIP code to perform parametric design of mechanical systems. This method requires no modifications to the IGRIP code, and all design data are stored in the IGRIP workcell. The method is presented in the context of development of a passive articulated mechanism that is used to deliver down-arm services to a gantry robot. The method is completely general, however, and could be used to design a fully articulated manipulator. Briefly, the method involves using IGCALC expressions to control manipulator joint angles, and IGCALC variables to allow user control of link lengths and offsets. This paper presents the method in detail, with examples drawn from PNL's experience with the gantry robot service-providing mechanism.
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
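Single-variable aperture models of the kind mentioned above are typically power laws, cost ≈ k·D^x, fitted by regressing log cost on log diameter. A sketch on synthetic data (the diameters, costs, and exponent below are illustrative, not values from the study):

```python
import numpy as np

# synthetic (diameter m, cost $M) pairs following cost ~ k * D^2.7 with
# multiplicative noise; purely illustrative data
rng = np.random.default_rng(7)
D = np.array([2.0, 3.5, 4.0, 6.5, 8.0, 10.0])
cost = 0.5 * D ** 2.7 * np.exp(rng.normal(0.0, 0.05, D.size))

# fit log(cost) = log(k) + x * log(D); the slope x is the cost exponent
x, logk = np.polyfit(np.log(D), np.log(cost), 1)
assert 2.4 < x < 3.0
```

The fitted exponent is the quantity usually quoted when discussing whether technology advances have flattened the historical cost curve.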
NASA Astrophysics Data System (ADS)
Chughtai, I. R.; Iqbal, W.; Din, G. U.; Mehdi, S.; Khan, I. H.; Inayat, M. H.; Jin, J. H.
2013-05-01
A gas-liquid Taylor bubble flow occurs in small-diameter channels in which gas bubbles are separated by slugs of pure liquid. This type of flow regime is well suited for solid-catalyzed gas-liquid reactors, in which the reaction efficiency is a strong function of axial dispersion in the regions of pure liquid. This paper presents an experimental study of liquid-phase axial dispersion in a Taylor bubble flow developed in a horizontal tube, using high-speed photography and radiotracer residence time distribution (RTD) analysis. The parametric dependence of axial dispersion on the average volume fraction of the gas phase was also investigated by varying the relative volumetric flow rates of the two phases. 137mBa produced from a 137Cs/137mBa radionuclide generator was used as the radiotracer, and measurements were made using NaI(Tl) scintillation detectors. Validation of 137mBa in the form of barium chloride as an aqueous-phase radiotracer was also carried out. The Axial Dispersion Model (ADM) was used to simulate the hydrodynamics of the system, and the results of the experiment are presented. It was observed that the system is characterized by very high values of the Peclet number (Pe ≈ 10²), which indicates a near-plug flow. The experimental and model-estimated mean residence times were found to be in agreement with each other.
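For reference, the axial dispersion model links the dimensionless RTD variance to the Peclet number; for a closed-closed vessel the standard relation is σ_θ² = 2/Pe − (2/Pe²)(1 − e^(−Pe)). A sketch inverting this relation numerically (the measured variance here is illustrative, not from the paper):

```python
import numpy as np

def peclet_from_rtd(sigma_theta2, lo=1.0, hi=1e4):
    """Invert the closed-closed axial-dispersion relation
    sigma_theta^2 = 2/Pe - (2/Pe^2)(1 - exp(-Pe)) for Pe by bisection;
    the left side decreases monotonically with Pe on this bracket."""
    f = lambda pe: 2.0 / pe - 2.0 / pe**2 * (1.0 - np.exp(-pe)) - sigma_theta2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# a small dimensionless variance implies a large Pe, i.e. near-plug flow
pe = peclet_from_rtd(0.02)
assert 90 < pe < 110   # 2/Pe dominates, so Pe is close to 100
```

Large Pe from a narrow RTD is exactly the "approaching plug flow" conclusion the abstract draws for the Taylor bubble regime.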
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
Exploring deep parametric embeddings for breast CADx
NASA Astrophysics Data System (ADS)
Jamieson, Andrew R.; Alam, Rabi; Giger, Maryellen L.
2011-03-01
Computer-aided diagnosis (CADx) involves training supervised classifiers using labeled ("truth-known") data. Often, training data consist of high-dimensional feature vectors extracted from medical images. Unfortunately, very large data sets may be required to train robust classifiers for high-dimensional inputs. To mitigate the risk of classifier over-fitting, CADx schemes may employ feature selection or dimension reduction (DR), for example, principal component analysis (PCA). Recently, a number of novel "structure-preserving" DR methods have been proposed [1]. Such methods are attractive for use in CADx schemes for two main reasons: first, they provide visualization of high-dimensional data structure; second, because DR can be unsupervised or semi-supervised, unlabeled ("truth-unknown") data may be incorporated [2]. However, the practical application of state-of-the-art DR techniques such as t-SNE [3] to breast CADx has been inhibited by the inability to retain a parametric embedding function capable of mapping new input data to the reduced representation. Deep (more than one hidden layer) neural networks can be used to learn such parametric DR embeddings. We explored the feasibility of such methods for use in CADx by conducting a variety of experiments using simulated feature data, including models based on breast CADx features. Specifically, we investigated the unsupervised parametric t-SNE [4] (pt-SNE), the supervised deep t-distributed MCML [5] (dt-MCML), and hybrid semi-supervised modifications combining the two.
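The key property discussed above, a parametric embedding that can map previously unseen cases without refitting, is easiest to see with plain PCA; the following numpy sketch uses random stand-in data, not real CADx features, and is a linear stand-in for the deep embeddings the paper studies:

```python
import numpy as np

def fit_pca(X, k):
    """Learn a k-dimensional linear embedding: mean vector + projection matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)   # principal axes
    return mu, Vt[:k]

def embed(X, mu, W):
    """Map data (including new, unseen cases) into the learned low-D space."""
    return (X - mu) @ W.T

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 30))   # stand-in for labeled CADx feature vectors
mu, W = fit_pca(X_train, k=2)

X_new = rng.normal(size=(5, 30))       # new "truth-unknown" cases
Z = embed(X_new, mu, W)                # reusable mapping: no refit needed
```

Classical t-SNE has no such (mu, W) pair to reuse, which is exactly the gap parametric t-SNE and dt-MCML fill with a learned network in place of the linear map.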
Mapping Rise Time Information with Down-Shift Analysis
Tunnell, T. W., Machorro, E. A., Diaz, A. B.
2011-11-01
These viewgraphs summarize the application of recent developments in digital down-shift (DDS) analysis of up-converted PDV data to map out how well the PDV diagnostic would capture rise-time information (midpoint and rise time) in short-rise-time (<1 ns) shock events. The mapping supports a PDV vs. VISAR challenge. The analysis concepts are new (~September FY 2011), simple, and run quickly, which makes them good tools for mapping out (with ~1 million Monte Carlo simulations) how well PDV captures rise-time information as a function of baseline velocity, rise time, velocity jump, and signal-to-noise ratio.
Speech processing based on short-time Fourier analysis
Portnoff, M.R.
1981-06-02
Short-time Fourier analysis (STFA) is a mathematical technique that represents nonstationary signals, such as speech, music, and seismic signals, in terms of time-varying spectra. This representation provides a formalism for such intuitive notions as time-varying frequency components and pitch contours. Consequently, STFA is useful for speech analysis and speech processing. This paper shows that STFA provides a convenient technique for estimating and modifying certain perceptual parameters of speech. As an example of an application of STFA to speech, the problem of time-compressing or expanding speech while preserving its pitch and time-varying frequency content is presented.
Gleason, Scott D; Witkin, Jeffrey M
2007-01-01
The Vogel conflict test has been widely used as a methodology for detecting anxiolytic-like effects of drugs with a broad spectrum of pharmacological activities. Despite widespread acceptance of the Vogel assay as a preclinical predictor of efficacy for anxiolytic-like compounds, detailed parametrics have not been reported on the optimization of this assay to determine how the schedule of reinforcement, the rate of responding, and the frequency and temporal distribution of punishing events determine drug effect. The current report documents the results of a systematic study of the relationship between the number of shocks delivered and the efficacy of the prototypical 1,4-benzodiazepine anxiolytic chlordiazepoxide (CDAP) in rats. Under this procedure, water-deprived rats were given access to water, and during the latter part of this access period, contacts with the drinking tube produced a brief electric shock. CDAP (5-20 mg/kg, i.p.) was first tested under a fixed-ratio 20 response schedule (every 20th lick produced a shock delivered via the sipper tube). CDAP produced dose-dependent increases in punished licking to approximately 275% of control at 20 mg/kg. Increasing the number of shocks during the first ten responses of the punishment component decreased the number of licks made under vehicle control conditions. The frequency of shock delivery produced both quantitative and qualitative changes in the effects of chlordiazepoxide, ranging from no effect to 7000% increases in responding. The effects of chlordiazepoxide were dependent both on the control rate of responding and, independently, on the frequency of shock deliveries. Parametric variation of the Vogel conflict test may be useful in comparing the efficacy of novel approaches to the treatment of anxiety disorders. PMID:17583779
Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L
2013-02-01
The integration of water quality monitoring variables is essential in environmental decision making. Advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are now required in such a complex evaluation process. Here we propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning was used to compute an integrative water quality index. By applying a Monte Carlo technique based on non-parametric probability distributions, the randomness of the model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimates obtained with the kernel smoothing method were used to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a large city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a growing number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared with traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status, with the membership of water quality in the various output fuzzy sets (classes) reported as percentiles and histograms, which allows a better classification of the actual water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. PMID:23266912
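The kernel-density Monte Carlo step described above can be sketched as follows; the variable, its values, and the toy scoring function are invented for illustration and are not the paper's fuzzy index:

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical monitoring data for one variable (e.g. dissolved oxygen, mg/L)
rng = np.random.default_rng(1)
observed = rng.normal(6.5, 1.2, size=365)       # one year of daily values

kde = gaussian_kde(observed)                    # non-parametric density fit
np.random.seed(1)                               # resample draws from numpy's global state
samples = kde.resample(5000)[0]                 # Monte Carlo draws from the fitted density

# propagate each draw through a toy (illustrative) quality scoring function
score = np.clip(100.0 * (samples - 2.0) / 6.0, 0.0, 100.0)
p10, p50, p90 = np.percentile(score, [10, 50, 90])
```

Reporting the score as percentiles rather than a single number is the same idea the paper uses when it attaches histograms and percentiles to the fuzzy output classes.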
Multicutter machining of compound parametric surfaces
NASA Astrophysics Data System (ADS)
Hatna, Abdelmadjid; Grieve, R. J.; Broomhead, P.
2000-10-01
Parametric free forms are used in industries as disparate as footwear, toys, sporting goods, ceramics, digital content creation, and conceptual design. Optimizing tool-path patterns and minimizing the total machining time are issues of primary importance in the numerically controlled (NC) machining of free-form surfaces. We demonstrate in the present work that multi-cutter machining can achieve as much as a 60% reduction in total machining time for compound sculptured surfaces. The approach presented is based upon the pre-processing, as opposed to the usual post-processing, of surfaces for the detection and removal of interference, followed by precise tracking of unmachined areas.
Multifractal Analysis of Aging and Complexity in Heartbeat Time Series
NASA Astrophysics Data System (ADS)
Muñoz D., Alejandro; Almanza V., Victor H.; del Río C., José L.
2004-09-01
Recently, multifractal analysis has been used intensively in the analysis of physiological time series. In this work we apply multifractal analysis to the study of heartbeat time series from healthy young subjects and from healthy elderly subjects. We show that this multifractal formalism can be a useful tool to discriminate between these two kinds of series. We used the algorithm proposed by Chhabra and Jensen, which provides a highly accurate, practical, and efficient method for the direct computation of the singularity spectrum. Aging causes a loss of multifractality in the heartbeat time series, meaning that the heartbeat time series of elderly persons are less complex than those of young persons. This analysis reveals a new level of complexity characterized by the wide range of exponents necessary to characterize the dynamics of young people.
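The Chhabra-Jensen direct method cited above estimates α(q) and f(q) as slopes of q-weighted log-sums against log box size. A minimal sketch on a synthetic binomial cascade (a textbook multifractal, not heartbeat data) looks like this:

```python
import numpy as np

def binomial_cascade(p, levels):
    """Deterministic multiplicative cascade: a textbook multifractal measure."""
    m = np.array([1.0])
    for _ in range(levels):
        m = np.concatenate([m * p, m * (1.0 - p)])
    return m

def chhabra_jensen(measure, q, kmin=3):
    """Direct f(alpha) estimation: slopes of q-weighted log-sums vs log(box size)."""
    levels = int(np.log2(measure.size))
    log_eps, s_alpha, s_f = [], [], []
    for k in range(kmin, levels + 1):
        p = measure.reshape(2 ** k, -1).sum(axis=1)   # coarse-grain to boxes of size 2^-k
        p = p[p > 0]
        mu = p ** q / np.sum(p ** q)                  # normalized q-weighted measure
        log_eps.append(-k * np.log(2.0))
        s_alpha.append(np.sum(mu * np.log(p)))        # slope -> alpha(q)
        s_f.append(np.sum(mu * np.log(mu)))           # slope -> f(q)
    alpha = np.polyfit(log_eps, s_alpha, 1)[0]
    f = np.polyfit(log_eps, s_f, 1)[0]
    return alpha, f

m = binomial_cascade(0.6, 12)
a0, f0 = chhabra_jensen(m, q=0.0)   # q = 0: f equals the support dimension (1 here)
a2, f2 = chhabra_jensen(m, q=2.0)   # q > 0 weights the intense regions of the measure
```

Sweeping q over a range of positive and negative values traces out the full f(α) curve; a wide α range signals strong multifractality, a narrow one the reduced complexity the paper associates with aging.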
NASA Technical Reports Server (NTRS)
Converse, G. L.
1984-01-01
A modeling technique for single-stage flow-modulating fans or centrifugal compressors has been developed which enables the user to obtain consistent and rapid off-design performance from design point input. The fan flow modulation may be obtained by either a VIGV (variable inlet guide vane) or a VPF (variable pitch rotor) option. Only the VIGV option is available for the centrifugal compressor. The modeling technique has been incorporated into a time-sharing program to facilitate its use. Because this report contains a description of the input/output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the last of a three-volume set describing the parametric representation of compressors, fans, and turbines. The titles of the three volumes are as follows: (1) Volume 1, CMGEN User's Manual (Parametric Compressor Generator); (2) Volume 2, PART User's Manual (Parametric Turbine); (3) Volume 3, MODFAN User's Manual (Parametric Modulating Flow Fan).
Averaging analysis for discrete time and sampled data adaptive systems
NASA Technical Reports Server (NTRS)
Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.
1986-01-01
Earlier continuous-time averaging theorems are extended to the nonlinear discrete-time case. These theorems are used in the convergence analysis of discrete-time adaptive identification and control systems. Instability theorems are also derived and used to study the robust stability and instability of adaptive control schemes applied to sampled-data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous-time systems are also studied.
Parametric infrared tunable laser system
NASA Technical Reports Server (NTRS)
Garbuny, M.; Henningsen, T.; Sutter, J. R.
1980-01-01
A parametric tunable infrared laser system was built to serve as a transmitter for the remote detection and density measurement of pollutant, poisonous, or trace gases in the atmosphere. The system operates with a YAG:Nd laser oscillator-amplifier chain which pumps a parametric tunable frequency converter. The completed system produced pulse energies of up to 30 mJ. The output is tunable from 1.5 to 3.6 micrometers at linewidths of 0.2-0.5 cm⁻¹ (FWHM). Replacement of the crystals presently in the parametric converter by samples of the higher quality already demonstrated is expected to extend the limits of the tuning range, narrow the linewidths, and improve the system performance further.
Wakui, Noritaka; Takayama, Ryuji; Matsukiyo, Yasushi; Kamiyama, Naohisa; Kobayashi, Kojiro; Mukozu, Takanori; Nakano, Shigeru; Ikehara, Takashi; Nagai, Hidenari; Igarashi, Yoshinori; Sumino, Yasukiyo
2013-07-01
This case report concerns a 40-year-old male who had previously been treated for an esophageal varix rupture, at the age of 30 years. The medical examination at that time revealed occlusion of the inferior vena cava in the proximity of the liver, leading to the diagnosis of the patient with Budd-Chiari syndrome. The progress of the patient was therefore monitored in an outpatient clinic. The patient had no history of drinking or smoking, but had suffered an epileptic seizure in 2004. The patient's family history revealed nothing of note. In February 2012, color Doppler ultrasonography (US) revealed a change in the blood flow in the right portal vein branch, from hepatopetal to hepatofugal, during deep inspiration. Arrival time parametric imaging (At-PI), using Sonazoid-enhanced US, was subsequently performed to examine the deep respiration-induced changes observed in the hepatic parenchymal perfusion. US images captured during deep inspiration demonstrated hepatic parenchymal perfusion predominantly in red, indicating that the major blood supply was the hepatic artery. During deep expiration, the portal venous blood flow remained hepatopetal, and hepatic parenchymal perfusion was displayed predominantly in yellow, indicating that the portal vein was the major source of the blood flow. The original diagnostic imaging results were reproduced one month subsequently by an identical procedure. At-PI enabled an investigation into the changes that were induced in the hepatic parenchymal perfusion by a compensatory mechanism involving the hepatic artery. These changes occurred in response to a reduction in the portal venous blood flow, as is observed in the arterialization of hepatic blood flow that is correlated with the progression of chronic hepatitis C. It has been established that the peribiliary capillary plexus is important in the regulation of hepatic arterial blood flow. However, this case demonstrated that the peribiliary capillary plexus also regulates acute
Comparison between Euler and quaternion parametrization in UAV dynamics
NASA Astrophysics Data System (ADS)
Alaimo, A.; Artale, V.; Milazzo, C.; Ricciardello, A.
2013-10-01
The main topic addressed in this paper is a comparison between the Euler parametrization and the quaternion parametrization in describing the dynamics of an Unmanned Aerial Vehicle treated as a rigid body. In detail, the Newton-Euler equations are rewritten in terms of quaternions because of the singularities to which the Euler angles lead. This formulation not only avoids gimbal lock but also allows better performance in numerical implementation, thanks to the linearity of quaternion algebra. This kind of analysis, supported by the numerical results presented, is of great importance for the applicability of quaternions to drone control. Indeed, the latter requires as quick a time response as possible in order to be reliable.
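The quaternion machinery the comparison relies on fits in a few lines of numpy: the Hamilton product, an axis-angle constructor, and vector rotation by conjugation. The 90° yaw example below is illustrative and unrelated to the authors' UAV model; at that pitch-free attitude Euler angles are fine, but the quaternion form stays well defined even where the Euler rates become singular (pitch = ±90°):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conj(q)."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

qz = quat_from_axis_angle([0.0, 0.0, 1.0], np.pi / 2)   # 90 deg yaw
v = rotate(qz, np.array([1.0, 0.0, 0.0]))               # body x-axis after the yaw
```

Because composition is just `quat_mul` (bilinear in its arguments), the attitude kinematics stay linear in the quaternion components, which is the numerical advantage the abstract refers to.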
A statistical package for computing time and frequency domain analysis
NASA Technical Reports Server (NTRS)
Brownlow, J.
1978-01-01
The spectrum analysis (SPA) program is a general-purpose digital computer program designed to aid in data analysis. The program performs time- and frequency-domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time-domain statistical characterization of data, and frequency-domain statistical characterization of data.
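The two core steps named above, linear trend removal followed by frequency-domain characterization, can be sketched with numpy alone; the sampling rate and the synthetic record are assumptions for the demonstration:

```python
import numpy as np

fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(2)
# synthetic record: linear drift + 5 Hz oscillation + measurement noise
x = 0.3 * t + np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.normal(size=t.size)

# linear trend removal by least squares (the preanalysis step)
trend = np.polyval(np.polyfit(t, x, 1), t)
x_detrended = x - trend

# frequency-domain characterization: one-sided periodogram
spec = np.abs(np.fft.rfft(x_detrended)) ** 2 / (fs * t.size)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_freq = freqs[1:][np.argmax(spec[1:])]   # skip the DC bin
```

Detrending first matters: the drift would otherwise leak low-frequency power across the spectrum and can mask genuine spectral peaks.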
Statistical Analysis of Sensor Network Time Series at Multiple Time Scales
NASA Astrophysics Data System (ADS)
Granat, R. A.; Donnellan, A.
2013-12-01
Modern sensor networks often collect data at multiple time scales in order to observe physical phenomena that occur at different scales. Whether collected by heterogeneous or homogeneous sensor networks, measurements at different time scales are usually subject to different dynamics, noise characteristics, and error sources. We explore the impact of these effects on the results of statistical time series analysis methods applied to multi-scale time series data. As a case study, we analyze results from GPS time series position data collected in Japan and the Western United States, which produce raw observations at 1 Hz and orbit-corrected observations at time resolutions of 5 minutes, 30 minutes, and 24 hours. We utilize the GPS analysis package (GAP) software to perform three types of statistical analysis on these observations: hidden Markov modeling, probabilistic principal components analysis, and covariance distance analysis. We compare the results of these methods at the different time scales and discuss the impact on science understanding of earthquake fault systems generally and recent large seismic events specifically, including the Tohoku-Oki earthquake in Japan and the El Mayor-Cucapah earthquake in Mexico.
Holter triage ambulatory ECG analysis. Accuracy and time efficiency.
Cooper, D H; Kennedy, H L; Lyyski, D S; Sprague, M K
1996-01-01
Triage ambulatory electrocardiographic (ECG) analysis permits relatively unskilled office workers to submit 24-hour ambulatory ECG Holter tapes to an automatic instrument (model 563, Del Mar Avionics, Irvine, CA) for interpretation. The instrument system "triages" what it is capable of automatically interpreting and rejects those tapes (with high ventricular arrhythmia density) requiring thorough analysis. Nevertheless, a trained cardiovascular technician ultimately edits what is accepted for analysis. This study examined the clinical validity of one manufacturer's triage instrumentation with regard to accuracy and time efficiency for interpreting ventricular arrhythmia. A database of 50 Holter tapes stratified for frequency of ventricular ectopic beats (VEBs) was examined by triage, conventional, and full-disclosure hand-count Holter analysis. Half of the tapes were found to be automatically analyzable by the triage method. Comparison of the VEB accuracy of triage versus conventional analysis using the full-disclosure hand count as the standard showed that triage analysis overall appeared as accurate as conventional Holter analysis but had limitations in detecting ventricular tachycardia (VT) runs. Overall sensitivity, positive predictive accuracy, and false positive rate for the triage ambulatory ECG analysis were 96, 99, and 0.9%, respectively, for isolated VEBs, 92, 93, and 7%, respectively, for ventricular couplets, and 48, 93, and 7%, respectively, for VT. Error in VT detection by triage analysis occurred on a single tape. Of the remaining 11 tapes containing VT runs, accuracy was significantly increased, with a sensitivity of 86%, positive predictive accuracy of 90%, and false positive rate of 10%. Stopwatch-recorded time efficiency was carefully logged during both triage and conventional ambulatory ECG analysis and divided into five time phases: secretarial, machine, analysis, editing, and total time. Triage analysis was significantly (P < .05) more time
NASA Astrophysics Data System (ADS)
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate for sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid through static converters, which leads to a reduction in PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
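The classical Prony method mentioned above fits a signal with a small number of damped complex exponentials via linear prediction; a minimal sketch on a clean synthetic tone (not grid data, and not the paper's implementation) is:

```python
import numpy as np

def prony_freqs(x, p, fs):
    """Classical Prony: fit p complex exponentials by linear prediction."""
    n = len(x)
    # linear prediction: x[k] = -a1*x[k-1] - ... - ap*x[k-p]
    A = np.column_stack([x[p - 1 - i : n - 1 - i] for i in range(p)])
    b = x[p:]
    a, *_ = np.linalg.lstsq(A, -b, rcond=None)
    roots = np.roots(np.concatenate([[1.0], a]))     # poles of the model
    freqs = np.angle(roots) * fs / (2 * np.pi)       # Hz
    damps = np.log(np.abs(roots)) * fs               # damping factors, 1/s
    return freqs, damps

fs = 1000.0
t = np.arange(0.0, 0.2, 1.0 / fs)
x = np.sin(2 * np.pi * 60.0 * t)          # clean 60 Hz component
freqs, damps = prony_freqs(x, p=2, fs=fs)
f_est = np.max(freqs)                     # the positive-frequency root
```

Unlike an FFT bin, the estimated frequency is not tied to a fixed grid, which is why Prony-type and ESPRIT-type methods resolve closely spaced, time-varying interharmonics better than the fixed-window IEC grouping.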
Membrane reactor for water detritiation: a parametric study on operating parameters
Mascarade, J.; Liger, K.; Troulay, M.; Perrais, C.
2015-03-15
This paper presents the results of a parametric study performed on a single-stage finger-type packed-bed membrane reactor (PBMR) used for heavy water vapor de-deuteration. Parametric studies were carried out on three operating parameters: the membrane temperature, the total feed flow rate, and the feed composition, through variations of the D2O content. Thanks to mass-spectrometric analysis of the streams leaving the PBMR, speciation of the deuterated species was achieved. Measurement of the amounts of each molecular component allowed the calculation of the reaction quotient at the packed-bed outlet. While the temperature variation mainly influences the permeation efficiency, feed flow rate perturbation reveals the dependence of the conversion and permeation properties on the contact time between the catalyst and the reacting mixture. The study shows that the isotopic exchange reactions occurring on the catalyst particle surfaces are not thermodynamically balanced. Moreover, variation of the heavy water content in the feed exhibits competition between permeation and conversion kinetics.
NASA Astrophysics Data System (ADS)
Curtis-Lake, E.; McLure, R. J.; Dunlop, J. S.; Rogers, A. B.; Targett, T.; Dekel, A.; Ellis, R. S.; Faber, S. M.; Ferguson, H. C.; Grogin, N. A.; Kocevski, D. D.; Koekemoer, A. M.; Lai, K.; Mármol-Queraltó, E.; Robertson, B. E.
2016-03-01
We present the results of a study investigating the sizes and morphologies of redshift 4 < z < 8 galaxies in the CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) GOODS-S (Great Observatories Origins Deep Survey southern field), HUDF (Hubble Ultra-Deep Field) and HUDF parallel fields. Based on non-parametric measurements and incorporating a careful treatment of measurement biases, we quantify the typical size of galaxies at each redshift as the peak of the lognormal size distribution, rather than the arithmetic mean size. Parametrizing the evolution of galaxy half-light radius as r50 ∝ (1 + z)^n, we find n = -0.20 ± 0.26 at bright UV-luminosities (0.3L*(z = 3) < L < L*) and n = -0.47 ± 0.62 at faint luminosities (0.12L* < L < 0.3L*). Furthermore, simulations based on artificially redshifting our z ˜ 4 galaxy sample show that we cannot reject the null hypothesis of no size evolution. We show that this result is caused by a combination of the size-dependent completeness of high-redshift galaxy samples and the underestimation of the sizes of the largest galaxies at a given epoch. To explore the evolution of galaxy morphology we first compare asymmetry measurements to those from a large sample of simulated single Sérsic profiles, in order to robustly categorize galaxies as either `smooth' or `disturbed'. Comparing the disturbed fraction amongst bright (M1500 ≤ -20) galaxies at each redshift to that obtained by artificially redshifting our z ˜ 4 galaxy sample, while carefully matching the size and UV-luminosity distributions, we find no clear evidence for evolution in galaxy morphology over the redshift interval 4 < z < 8. Therefore, based on our results, a bright (M1500 ≤ -20) galaxy at z ˜ 6 is no more likely to be measured as `disturbed' than a comparable galaxy at z ˜ 4, given the current observational constraints.
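Fitting the parametrization r50 ∝ (1 + z)^n amounts to a linear regression in log space. The sketch below uses invented values of n, r0, and scatter purely to illustrate the mechanics, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(6)
n_true, r0 = -0.5, 3.0                     # hypothetical evolution index and size norm (kpc)
z = rng.uniform(4.0, 8.0, 200)             # mock galaxy redshifts
# lognormal scatter around the power-law median size
r50 = r0 * (1.0 + z) ** n_true * rng.lognormal(0.0, 0.2, size=z.size)

# linearize: log r50 = log r0 + n * log(1 + z), then ordinary least squares
slope, intercept = np.polyfit(np.log(1.0 + z), np.log(r50), 1)
n_est = slope
r0_est = np.exp(intercept)
```

Fitting in log space treats the scatter as lognormal, consistent with the paper's choice of the lognormal peak (rather than the arithmetic mean) as the typical size.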
Rogers, Jennifer K; Yaroshinsky, Alex; Pocock, Stuart J; Stokar, David; Pogoda, Janice
2016-06-15
This paper considers the analysis of a repeat event outcome in clinical trials of chronic diseases in the context of dependent censoring (e.g. mortality). It has particular application in the context of recurrent heart failure hospitalisations in trials of heart failure. Semi-parametric joint frailty models (JFMs) simultaneously analyse recurrent heart failure hospitalisations and time to cardiovascular death, estimating distinct hazard ratios whilst individual-specific latent variables induce associations between the two processes. A simulation study was carried out to assess the suitability of the JFM versus marginal analyses of recurrent events and cardiovascular death using standard methods. Hazard ratios were consistently overestimated when marginal models were used, whilst the JFM produced good, well-estimated results. An application to the Candesartan in Heart failure: Assessment of Reduction in Mortality and morbidity programme was considered. The JFM gave unbiased estimates of treatment effects in the presence of dependent censoring. We advocate the use of the JFM for future trials that consider recurrent events as the primary outcome. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26751714
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 14, Aeronautics and Space — COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; LICENSING; LAUNCH SAFETY; Flight Safety Analysis; § 417.215 Straight-up time analysis.
Science Journalism under Scrutiny: A Textual Analysis of "Science Times."
ERIC Educational Resources Information Center
Fursich, Elfriede; Lester, E. P.
1996-01-01
Uses a new cultural framework for the analysis of science popularization for textual analysis of the "Scientist at Work" column of the "New York Times." Shows that, although the journalists try to demystify the scientific enterprise, they juxtapose "pure" science against other dimensions of scientific work, thus reinforcing the notion of…
An Analysis of Student Satisfaction: Full-Time vs. Part-Time Students
ERIC Educational Resources Information Center
Moro-Egido, Ana I.; Panades, Judith
2010-01-01
This paper examines how full-time or part-time status affects students' level of satisfaction with their degree programs. For our analysis, we obtained data from a survey of graduate students. The survey was conducted at a public university in Spain from 2001 to 2004. The decision to undertake paid employment while studying emerges as one of the…
timeClip: pathway analysis for time course data without replicates
2014-01-01
Background: Time-course gene expression experiments are useful tools for exploring biological processes. In this type of experiment, gene expression changes are monitored over time. Unfortunately, replication of time series is still costly, and long time courses usually do not have replicates. Many approaches have been proposed to deal with this data structure, but none of them in the field of pathway analysis. Pathway analyses have acquired great relevance for aiding the interpretation of gene expression data. Several methods have been proposed to this aim, from classical enrichment to more complex topological analyses that gain power from the topology of the pathway. None of them was devised to identify temporal variations in time-course data. Results: Here we present timeClip, a topology-based pathway analysis specifically tailored to long time series without replicates. timeClip combines dimension reduction techniques and graph decomposition theory to explore and identify the portion of each pathway that is most time-dependent. In the first step, timeClip selects the time-dependent pathways; in the second step, the most time-dependent portions of these pathways are highlighted. We used timeClip on simulated data and on a benchmark dataset from a mouse muscle regeneration model. Our approach shows good performance in different simulated settings. On the real dataset, we identify 76 time-dependent pathways, most of which are known to be involved in the regeneration process. Focusing on the 'mTOR signaling pathway', we highlight the timing of key processes of muscle regeneration: from the early pathway activation through growth factor signals to the late burst of protein production needed for fiber regeneration. Conclusions: timeClip represents a new improvement in the field of time-dependent pathway analysis. It allows pathways characterized by time-dependent components to be isolated and dissected. Furthermore, using timeClip on a mouse muscle regeneration
Topic- and Time-Oriented Visual Text Analysis.
Dou, Wenwen; Liu, Shixia
2016-01-01
To facilitate the process of converting textual data into actionable knowledge, visual text analysis has become a popular topic, with active research efforts contributed by researchers worldwide. Here the authors present the benefits of combining text analysis (topic models in particular) with interactive visualization. They then highlight examples from prior work on topic- and time-oriented visual text analysis and discuss challenges that warrant additional future research. PMID:27514029
Analysis of Time-Series Quasi-Experiments. Final Report.
ERIC Educational Resources Information Center
Glass, Gene V.; Maguire, Thomas O.
The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…
Time Series Analysis Based on Running Mann Whitney Z Statistics
Technology Transfer Automated Retrieval System (TEKTRAN)
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
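The mechanics of the running Mann-Whitney approach can be sketched as below. The windowing scheme (comparing each point's trailing and leading halves) and the use of the large-sample normal approximation for U → Z are assumptions standing in for the elided details of the method, which normalizes via Monte Carlo:

```python
import numpy as np

def mann_whitney_z(a, b):
    """Mann-Whitney U for a vs b, normalized to Z by the large-sample approximation."""
    n1, n2 = len(a), len(b)
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1   # 1-based ranks (no ties assumed)
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    mean_u = n1 * n2 / 2.0
    sd_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean_u) / sd_u

def running_mwz(x, half=15):
    """Slide a window along x; Z compares each point's trailing and leading halves."""
    z = np.full(len(x), np.nan)
    for c in range(half, len(x) - half):
        z[c] = mann_whitney_z(x[c - half:c], x[c:c + half])
    return z

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])  # step change at index 100
z = running_mwz(x)   # a strong Z excursion flags the change point
```

Because the statistic depends only on ranks, the detector is insensitive to outliers and to the marginal distribution of the data, which is what makes the method "objective" in the abstract's sense.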
Parenting Behaviors in Prime-Time Television: A Content Analysis.
ERIC Educational Resources Information Center
Dail, Paula W.; Way, Wendy L.
Forty-four family-oriented, prime time television program episodes (30 hours) aired in November and December 1982 were selected for content analysis from 12 commercial television series which met selection criteria for Neilsen Television rating, airing time, and theme. Family oriented programming was defined as any series with a primary theme that…
Nonlinear Analysis of Surface EMG Time Series of Back Muscles
NASA Astrophysics Data System (ADS)
Dolton, Donald C.; Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-10-01
A nonlinear analysis of surface electromyography time series from subjects with and without low back pain is presented. The mean-square displacement and the entropy show anomalous diffusive behavior in the intermediate time range 10 ms < t < 1 s. This behavior implies the presence of correlations in the signal. We discuss the shape of the power spectrum of the signal.
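The mean-square-displacement diagnostic used above can be sketched on a synthetic signal; a plain random walk (ordinary diffusion, exponent 1) serves as the reference case, with anomalous diffusion showing up as an exponent different from 1:

```python
import numpy as np

def mean_square_displacement(x, lags):
    """MSD(tau) = <(x(t+tau) - x(t))^2>, averaged over all start times t."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=20000))   # random walk: ordinary diffusion
lags = np.arange(1, 101)
msd = mean_square_displacement(x, lags)

# diffusion exponent: slope of log MSD vs log tau (1 = ordinary, !=1 = anomalous)
exponent = np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

Applied to an EMG record, a slope below 1 (subdiffusion) over an intermediate lag range would indicate the kind of temporal correlations the paper reports.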
A human factors analysis of EVA time requirements
NASA Technical Reports Server (NTRS)
Pate, D. W.
1996-01-01
Human Factors Engineering (HFE), also known as Ergonomics, is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. A human factors motion and time study was initiated with the goal of developing a database of EVA task times and a method of utilizing the database to predict how long an ExtraVehicular Activity (EVA) should take. Initial development relied on the EVA activities performed during the STS-61 mission (Hubble repair). The first step of the analysis was to become familiar with EVAs and with the previous studies and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task-time modifiers was developed. Videotaped footage of the STS-61 EVAs was analyzed using these primitives and task-time modifiers. Data for two entire EVA missions and portions of several others, each with two EVA astronauts, were collected for analysis. Feedback from the analysis of the data will be used to further refine the primitives and task-time modifiers used. Analysis-of-variance techniques for categorical data will be used to determine which factors may, individually or through interactions, affect the primitive times and how large an effect they have.
Time series analysis of air pollutants in Beirut, Lebanon.
Farah, Wehbeh; Nakhlé, Myriam Mrad; Abboud, Maher; Annesi-Maesano, Isabella; Zaarour, Rita; Saliba, Nada; Germanos, Georges; Gerard, Jocelyne
2014-12-01
This study reports for the first time a time series analysis of daily urban air pollutant levels (CO, NO, NO2, O3, PM10, and SO2) in Beirut, Lebanon. The study examines data obtained between September 2005 and July 2006, and their descriptive analysis shows long-term variations of daily levels of air pollution concentrations. Strong persistence of these daily levels is identified in the time series using an autocorrelation function, except for SO2. Time series of standardized residual values (SRVs) are also calculated to compare fluctuations of the time series with different levels. Time series plots of the SRVs indicate that NO and NO2 had similar temporal fluctuations. However, NO2 and O3 had opposite temporal fluctuations, attributable to weather conditions and the accumulation of vehicular emissions. The effects of both desert dust storms and airborne particulate matter resulting from the Lebanon War in July 2006 are also discernible in the SRV plots. PMID:25150052
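The standardized residual values (SRVs) and autocorrelation checks described above can be sketched in a few lines. The abstract does not give the authors' exact formulas, so this sketch assumes the common definitions: the SRV as the value minus the series mean, divided by the standard deviation, and the lag-k sample autocorrelation.

```python
import numpy as np

def standardized_residuals(series):
    """Standardize a series to zero mean and unit variance, so pollutants
    measured on different scales can be compared on one plot."""
    x = np.asarray(series, dtype=float)
    return (x - x.mean()) / x.std()

def autocorrelation(series, lag):
    """Sample autocorrelation at a given lag, used to check the
    day-to-day persistence of pollutant levels."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    return float(np.dot(x[:n - lag], x[lag:]) / np.dot(x, x))
```

A strongly persistent pollutant series yields a lag-1 autocorrelation close to 1, whereas a series like the SO2 record described above would show a value near 0.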
Parameters’ Covariance in Neutron Time of Flight Analysis – Explicit Formulae
Odyniec, M.; Blair, J.
2014-12-01
We present here a method that estimates the parameters’ variance in a parametric model for neutron time of flight (NToF). The analytical formulae for parameter variances, obtained independently of calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
Kittell, David E; Mares, Jesus O; Son, Steven F
2015-04-01
Two time-frequency analysis methods based on the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used to determine time-resolved detonation velocities with microwave interferometry (MI). The results were directly compared to well-established analysis techniques consisting of a peak-picking routine as well as a phase unwrapping method (i.e., quadrature analysis). The comparison is conducted on experimental data consisting of transient detonation phenomena observed in triaminotrinitrobenzene and ammonium nitrate-urea explosives, representing high- and low-quality MI signals, respectively. Time-frequency analysis proved much more capable of extracting useful and highly resolved velocity information from low-quality signals than the phase unwrapping and peak-picking methods. Additionally, control of the time-frequency methods is mainly constrained to a single parameter which allows for a highly unbiased analysis method to extract velocity information. In contrast, the phase unwrapping technique introduces user-based variability while the peak-picking technique does not achieve a highly resolved velocity result. Both STFT and CWT methods are proposed as improved additions to the analysis methods applied to MI detonation experiments, and may be useful in similar applications. PMID:25933878
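The STFT side of this approach can be illustrated with a minimal sketch: track the dominant frequency in each window, with the window length as the single tuning parameter the abstract mentions. The conversion from beat frequency to detonation velocity (which depends on the microwave wavelength in the explosive) is omitted here.

```python
import numpy as np
from scipy.signal import stft

def dominant_frequency_track(signal, fs, nperseg=256):
    """Track the dominant frequency of a signal over time using the
    short-time Fourier transform. Returns window centre times and the
    frequency of the strongest bin in each window."""
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    return t, f[np.argmax(np.abs(Z), axis=0)]
```

For an MI signal the tracked frequency is proportional to the instantaneous detonation velocity, so this single-parameter routine directly yields a time-resolved velocity estimate.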
Real-time linear predictive analysis of speech using multimicroprocessors
Seethardman, S.; Radhakrishnan, T.; Suen, C.Y.
1982-01-01
Many applications of linear predictive coding (often known as LPC) of speech signals require a system capable of performing the complete LPC analysis in real time. This paper describes a pipeline network consisting of several general-purpose microprocessors, primarily suitable for complete LPC analysis of a 10-pole model with a sampling frequency of 10 kHz and a frame rate of 100 Hz in real time. The proposed system is different from previous systems, which either employed special-purpose hardware or produced an analysis at a lower frame rate. 27 references.
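A serial reference version of the per-frame computation that such a pipeline parallelizes is the autocorrelation method with the Levinson-Durbin recursion. This sketch is a generic illustration (not the paper's multiprocessor code); a 10-pole model as in the paper corresponds to order=10.

```python
import numpy as np

def lpc(frame, order):
    """Linear-prediction coefficients for one speech frame via the
    autocorrelation method and the Levinson-Durbin recursion.
    Returns the predictor polynomial [1, a1, ..., ap] and the
    final prediction error."""
    x = np.asarray(frame, dtype=float)
    n = len(x)
    # Autocorrelation lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err                 # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1]
        err *= (1.0 - k * k)
    return a, err
```

At 10 kHz sampling and a 100 Hz frame rate, this recursion must complete within 10 ms per frame, which motivates the pipelined multiprocessor design described above.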
Vector processing enhancements for real-time image analysis.
Shoaf, S.; APS Engineering Support Division
2008-01-01
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
Gul, R; Bernhard, S
2015-11-01
In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST, and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and time-dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of the network-location and temporal variabilities revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. PMID:26367184
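Of the three GSA methods named, the first-order Sobol index is the simplest to sketch. The following Monte Carlo (pick-freeze) estimator is a generic illustration on uniform inputs, not the carotid-bifurcation model itself.

```python
import numpy as np

def first_order_sobol(f, d, i, n=100000, seed=None):
    """Monte Carlo (pick-freeze) estimate of the first-order Sobol index
    S_i for a model f taking an (n, d) array of independent U(0,1)
    inputs. S_i is the fraction of output variance explained by
    input i alone."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    AB = A.copy()
    AB[:, i] = B[:, i]            # resample only input i
    fA, fB, fAB = f(A), f(B), f(AB)
    var = np.var(np.concatenate([fA, fB]))
    return float(np.mean(fB * (fAB - fA)) / var)
```

For a linear model y = 3*x1 + x2 with independent uniform inputs, the analytic first-order index of x1 is 9/10, which the estimator recovers to Monte Carlo accuracy; "factor fixing" as described above would then fix x2 at its nominal value.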
Global convergence analysis of a discrete time nonnegative ICA algorithm.
Ye, Mao
2006-01-01
When the independent sources are known to be nonnegative and well-grounded, which means that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "Nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. Generally, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by using the skew-symmetry property of this discrete-time "Nonnegative PCA" algorithm, if the learning rate satisfies a suitable condition, the global convergence of this discrete-time algorithm can be proven. Simulation results are employed to further illustrate the advantages of this theory. PMID:16526495
Automatic Parametric Testing Of Integrated Circuits
NASA Technical Reports Server (NTRS)
Jennings, Glenn A.; Pina, Cesar A.
1989-01-01
Computer program for parametric testing saves time and effort in research and development of integrated circuits. Software system automatically assembles various types of test structures and lays them out on silicon chip, generates sequences of test instructions, and interprets test data. Employs self-programming software; needs a minimum of human intervention. Adapted to needs of different laboratories and readily accommodates new test structures. Program codes designed to be adaptable to most computers and test equipment now in use. Written in high-level languages to enhance transportability.
Time Series Analysis of Insar Data: Methods and Trends
NASA Technical Reports Server (NTRS)
Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique
2015-01-01
Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
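The wrap-around and its temporal unwrapping can be demonstrated in a few lines. The repeat-pass phase-to-displacement factor 4π/λ is standard; the wavelength and subsidence numbers below are hypothetical.

```python
import numpy as np

def wrap(phase):
    """Wrap phase into [-pi, pi), as interferometric phase is observed."""
    return (phase + np.pi) % (2 * np.pi) - np.pi

# Hypothetical steady subsidence observed at C-band (5.6 cm wavelength).
wavelength = 0.056                        # metres
displacement = np.linspace(0, 0.15, 60)   # metres; exceeds lambda/2 repeatedly
true_phase = 4 * np.pi * displacement / wavelength
observed = wrap(true_phase)               # what the interferograms record
recovered = np.unwrap(observed)           # temporal unwrapping
```

Unwrapping succeeds here because consecutive phase increments stay below π; when deformation between acquisitions is too fast, the ambiguity the abstract describes cannot be resolved from a single pixel's history, which is why the algorithms discussed above also exploit spatial structure.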
BRST Cohomology in Beltrami Parametrization
NASA Astrophysics Data System (ADS)
Tătaru, Liviu; Vancea, Ion V.
We study the BRST cohomology within a local conformal Lagrangian field theory model built on a two-dimensional Riemann surface with no boundary. We deal with the case of the complex structure parametrized by the Beltrami differential and the scalar matter fields. The computation of all elements of the BRST cohomology is given.
Time series analysis and the analysis of aquatic and riparian ecosystems
Milhous, R.T.
2003-01-01
Time series analysis of physical instream habitat and the riparian zone is not done as frequently as would be beneficial in understanding the fisheries aspects of the aquatic ecosystem. This paper presents two case studies showing how time series analysis may be accomplished. Time series analysis is the analysis of the variation of the physical habitat or the hydro-period in the riparian zone (in many situations, the floodplain).
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
NASA Technical Reports Server (NTRS)
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving-average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval
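A minimal modern equivalent of SPA's time-domain pass might look as follows. The goodness-of-fit test here is D'Agostino-Pearson, standing in for whichever normality test SPA implements; this is an illustration, not the SPA code.

```python
import numpy as np
from scipy import stats

def time_domain_summary(x):
    """SPA-style time-domain statistics for a sampled series."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "std": x.std(),
        "mean_square": np.mean(x ** 2),
        "rms": np.sqrt(np.mean(x ** 2)),
        "max": x.max(),
        "min": x.min(),
        "n": x.size,
    }

def normal_fit_pvalue(x):
    """p-value for the fit of a normal curve to the data
    (D'Agostino-Pearson test); small values reject normality."""
    stat, p = stats.normaltest(x)
    return p
```

As the abstract notes, these quantities are estimates of the underlying signal's properties, since the record is finite and contaminated by noise.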
Spectral Analysis of Timing Noise in NANOGrav Pulsars
NASA Astrophysics Data System (ADS)
Perrodin, Delphine; Jenet, F. A.; Lommen, A. N.; Finn, L. S.; Demorest, P. B.
2012-01-01
The NANOGrav collaboration seeks to detect gravitational waves from distant supermassive black hole sources using a pulsar timing array. In order to search for gravitational waves, it is necessary to have a good characterization of the timing noise for each pulsar of the pulsar timing array. Red noise is common in millisecond pulsars, and we need to quantify how much red noise is present for each pulsar. This can be done by looking at the power spectra of the pulsar timing residuals. However because the pulsar data are non-uniformly sampled, one cannot simply do a Fourier analysis. Also, commonly used least-square fitting methods such as the Lomb-Scargle analysis are not adequate for steep red spectra. Instead, we compute the power spectra of NANOGrav pulsar timing residuals using the Cholesky transformation, which eliminates spectral leakage. This is done with the help of the TEMPO2 "SpectralModel" plugin developed by William Coles and George Hobbs.
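The idea of whitening with a Cholesky factor before spectral estimation can be sketched generically. This is a simplified illustration of the approach, not the TEMPO2 SpectralModel code, and it assumes the noise covariance matrix C is already known.

```python
import numpy as np

def whitened_power(t, r, C, freqs):
    """Least-squares power estimates for unevenly sampled residuals r(t).
    Both the residuals and the sin/cos design matrix are whitened with
    the Cholesky factor of the noise covariance C, so steep red noise
    does not leak across frequencies as it would in a plain
    Lomb-Scargle analysis."""
    L = np.linalg.cholesky(C)
    rw = np.linalg.solve(L, r)                 # whitened residuals
    power = []
    for f in freqs:
        M = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        Mw = np.linalg.solve(L, M)             # whitened basis
        coef, *_ = np.linalg.lstsq(Mw, rw, rcond=None)
        power.append(float(coef @ coef))
    return np.array(power)
```

With C equal to the identity this reduces to an ordinary least-squares periodogram on uneven sampling; the benefit appears when C encodes the steep red-noise spectra described above.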
Spectral Analysis of Timing Noise in NANOGrav Pulsars
NASA Astrophysics Data System (ADS)
Perrodin, Delphine
2011-07-01
The NANOGrav collaboration seeks to detect gravitational waves from distant supermassive black hole sources using a pulsar timing array. In order to search for gravitational waves, it is necessary to have a good characterization of the timing noise for each pulsar of the pulsar timing array. Red noise is common in millisecond pulsars, and we need to quantify how much red noise is present for each pulsar. This can be done by looking at the power spectra of the pulsar timing residuals. However because the pulsar data are non-uniformly sampled, one cannot simply do a Fourier analysis. Also, commonly used least-square fitting methods such as the Lomb-Scargle analysis are not adequate for steep red spectra. Instead, we compute the power spectra of NANOGrav pulsar timing residuals using the Cholesky transformation, which eliminates spectral leakage. This is done with the help of the TEMPO2 "SpectralModel" plugin developed by William Coles and George Hobbs.
Parametric State Space Structuring
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Tilgner, Marco
1997-01-01
Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of a continuous-time Markov chain are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
Testing for predator dependence in predator-prey dynamics: a non-parametric approach.
Jost, C; Ellner, S P
2000-01-01
The functional response is a key element in all predator-prey interactions. Although functional responses are traditionally modelled as being a function of prey density only, evidence is accumulating that predator density also has an important effect. However, much of the evidence comes from artificial experimental arenas under conditions not necessarily representative of the natural system, and neglecting the temporal dynamics of the organism (in particular the effects of prey depletion on the estimated functional response). Here we present a method that removes these limitations by reconstructing the functional response non-parametrically from predator-prey time-series data. This method is applied to data on a protozoan predator-prey interaction, and we obtain significant evidence of predator dependence in the functional response. A crucial element in this analysis is to include time-lags in the prey and predator reproduction rates, and we show that these delays improve the fit of the model significantly. Finally, we compare the non-parametrically reconstructed functional response to parametric forms, and suggest that a modified version of the Hassell-Varley predator interference model provides a simple and flexible function for theoretical investigation and applied modelling. PMID:11467423
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Singular spectrum analysis and forecasting of hydrological time series
NASA Astrophysics Data System (ADS)
Marques, C. A. F.; Ferreira, J. A.; Rocha, A.; Castanheira, J. M.; Melo-Gonçalves, P.; Vaz, N.; Dias, J. M.
The singular spectrum analysis (SSA) technique is applied to some hydrological univariate time series to assess its ability to uncover important information from those series, and also its forecast skill. The SSA is carried out on annual precipitation, monthly runoff, and hourly water temperature time series. Information is obtained by extracting important components or, when possible, the whole signal from the time series. The extracted components are then subject to forecast by the SSA algorithm. The ability of the SSA to extract a slowly varying component (i.e. the trend) from the precipitation time series, the trend and oscillatory components from the runoff time series, and the whole signal from the water temperature time series is illustrated. The SSA was also able to accurately forecast the extracted components of these time series.
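Basic SSA, embedding into a trajectory matrix, SVD, grouping of leading components, and diagonal averaging, can be sketched as follows. This is a generic illustration; the window length and number of components retained are analysis choices, not values from the paper.

```python
import numpy as np

def ssa_reconstruct(x, window, k):
    """Reconstruct a series from its k leading SSA components:
    embed into a trajectory matrix, take the SVD, keep the first k
    singular triples, and map back by diagonal (Hankel) averaging."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(m)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(m):           # average along anti-diagonals
        rec[j:j + window] += Xk[:, j]
        counts[j:j + window] += 1
    return rec / counts
```

Keeping only the slowest components extracts a trend (as for the precipitation series above); keeping paired oscillatory components extracts cycles (as for the runoff series).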
NASA Astrophysics Data System (ADS)
Connor, J. N. L.
2013-03-01
Three new contributions to the complex angular momentum (CAM) theory of differential cross sections (DCSs) for chemical reactions are reported. They exploit recent advances in the Padé reconstruction of a scattering (S) matrix in a region surrounding the Re J axis, where J is the total angular momentum quantum variable, starting from the discrete values, J = 0, 1, 2, …. In particular, use is made of Padé continuations obtained by Sokolovski, Castillo, and Tully [Chem. Phys. Lett. 313, 225 (1999), 10.1016/S0009-2614(99)01016-7] for the S matrix of the benchmark F + H2(vi = 0, ji = 0, mi = 0) → FH(vf = 3, jf = 3, mf = 0) + H reaction. Here vi, ji, mi and vf, jf, mf are the initial and final vibrational, rotational, and helicity quantum numbers, respectively. The three contributions are: (1) A new exact decomposition of the partial wave (PW) S matrix is introduced, which is called the QP decomposition. The P part contains information on the Regge poles. The Q part is then constructed exactly by subtracting a rapidly oscillating phase and the PW P matrix from the input PW S matrix. After a simple modification, it is found that the corresponding scattering subamplitudes provide insight into the angular-scattering dynamics using simple partial wave series (PWS) computations. It is shown that the leading n = 0 Regge pole contributes to the small-angle scattering in the centre-of-mass frame. (2) The Q matrix part of the QP decomposition has simpler properties than the input S matrix. This fact is exploited to deduce a parametrized (analytic) formula for the PW S matrix in which all terms have a direct physical interpretation. This is a long sought-after goal in reaction dynamics, and in particular for the state-to-state F + H2 reaction. (3) The first definitive test is reported for the accuracy of a uniform semiclassical (asymptotic) CAM theory for a DCS based on the Watson transformation. The parametrized S matrix obtained in contribution (2) is used in both
Analysis of Three Real-Time Dst Indices
NASA Astrophysics Data System (ADS)
Carranza-Fulmer, T. L.; Gannon, J. L.; Love, J. J.
2010-12-01
The Dst is commonly used to specify geomagnetic disturbance periods and characterize the resulting ring current enhancements from ground-based horizontal magnetic field intensity measurements. Real-time versions of the Dst index are produced for operational purposes, and are of interest to many users, including the US military, airline industry, and power companies. USGS Real time Dst, Kyoto Quicklook Dst, and Space Environment Corporation RDst use preliminary data and use a variety of contributing observatories and processing methods. Both USGS and RDst use a combined time-and-frequency domain method and Kyoto uses a time domain only method in creating the Dst index. We perform an analysis of these three real time Dst indices for the time period of October 1, 2009 to May 31, 2010. The USGS 3, using three observatories instead of the standard four, and the Kyoto Sym-H index, are introduced in the analysis for comparison of observatory location with the three main Dst indices. We present a statistical study of the differences due to algorithm, output time resolution, and location of contributing observatories. Higher time resolution shows higher frequency fluctuations during disturbances and more defined storm features. There were small differences in mid- to low-latitude observatories during quiet to moderate storm time periods. The average impact on the index due to the different algorithms used was approximately 9 nT, and greater for individual storms.
Hyper-efficient model-independent Bayesian method for the analysis of pulsar timing data
NASA Astrophysics Data System (ADS)
Lentati, Lindley; Alexander, P.; Hobson, M. P.; Taylor, S.; Gair, J.; Balan, S. T.; van Haasteren, R.
2013-05-01
A new model-independent method is presented for the analysis of pulsar timing data and the estimation of the spectral properties of an isotropic gravitational wave background (GWB). Taking a Bayesian approach, we show that by rephrasing the likelihood we are able to eliminate the most costly aspects of computation normally associated with this type of data analysis. When applied to the International Pulsar Timing Array Mock Data Challenge data sets this results in speedups of approximately 2-3 orders of magnitude compared to established methods, in the most extreme cases reducing the run time from several hours on the high performance computer “DARWIN” to less than a minute on a normal work station. Because of the versatility of this approach, we present three applications of the new likelihood. In the low signal-to-noise regime we sample directly from the power spectrum coefficients of the GWB signal realization. In the high signal-to-noise regime, where the data can support a large number of coefficients, we sample from the joint probability density of the power spectrum coefficients for the individual pulsars and the GWB signal realization using a “guided Hamiltonian sampler” to sample efficiently from this high-dimensional (˜1000) space. Critically in both these cases we need make no assumptions about the form of the power spectrum of the GWB, or the individual pulsars. Finally, we show that, if desired, a power-law model can still be fitted during sampling. We then apply this method to a more complex data set designed to represent better a future International Pulsar Timing Array or European Pulsar Timing Array data release. We show that even in challenging cases where the data features large jumps of the order 5 years, with observations spanning between 4 and 18 years for different pulsars and including steep red noise processes we are able to parametrize the underlying GWB signal correctly. Finally we present a method for characterizing the spatial
Space Shuttle Main Engine real time stability analysis
NASA Astrophysics Data System (ADS)
Kuo, F. Y.
1993-06-01
The Space Shuttle Main Engine (SSME) is a reusable, high performance, liquid rocket engine with variable thrust. The engine control system continuously monitors the engine parameters and issues propellant valve control signals in accordance with the thrust and mixture ratio commands. A real time engine simulation lab was installed at MSFC to verify flight software and to perform engine dynamic analysis. A real time engine model was developed on the AD100 computer system. This model provides sufficient fidelity on the dynamics of major engine components and yet is simple enough to be executed in real time. The hardware-in-the-loop type simulation and analysis becomes necessary as NASA is continuously improving the SSME technology, some with significant changes in the dynamics of the engine. The many issues of interfaces between new components and the engine can be better understood and be resolved prior to the firing of the engine. In this paper, the SSME real time simulation lab at MSFC, the SSME real time model, and SSME engine and control system stability analysis, both in real time and non-real time, are presented.
Time series analysis of electron flux at geostationary orbit
Szita, S.; Rodgers, D.J.; Johnstone, A.D.
1996-07-01
Time series of energetic (42.9–300 keV) electron flux data from the geostationary satellite Meteosat-3 shows variability over various timescales. Of particular interest are the strong local time dependence of the flux data and the large flux peaks associated with particle injection events which occur over a timescale of a few hours. Fourier analysis has shown that for this energy range, the average electron flux diurnal variation can be approximated by a combination of two sine waves with periods of 12 and 24 hours. The data have been further examined using wavelet analysis, which shows how the diurnal variation changes and where it appears most significant. The injection events have a characteristic appearance but do not occur in phase with one another and therefore do not show up in a Fourier spectrum. Wavelet analysis has been used to look for characteristic time scales for these events. © 1996 American Institute of Physics.
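The two-sine approximation of the diurnal variation amounts to a linear least-squares fit on a sin/cos basis at 24 h and 12 h periods. A generic sketch (not the authors' code):

```python
import numpy as np

def fit_diurnal(hours, flux):
    """Least-squares fit of a constant plus 24 h and 12 h sinusoids,
    approximating the average diurnal variation of the flux.
    Returns the fitted values and the five basis coefficients."""
    h = np.asarray(hours, dtype=float)
    A = np.column_stack([
        np.ones_like(h),
        np.sin(2 * np.pi * h / 24.0), np.cos(2 * np.pi * h / 24.0),
        np.sin(2 * np.pi * h / 12.0), np.cos(2 * np.pi * h / 12.0),
    ])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(flux, dtype=float),
                                 rcond=None)
    return A @ coeffs, coeffs
```

Because the injection events are not phase-locked, they average out of this fit, which is why the abstract turns to wavelet analysis to localize them in time.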
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would be credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm of time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 272 is considered as an example.
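The two model classes discussed, moving-average and autoregressive, can be simulated directly. This is a generic illustration in Python rather than the paper's FORTRAN implementation:

```python
import numpy as np

def simulate_ar(coeffs, noise):
    """Autoregressive process: x[t] = sum_i a_i * x[t-i] + e[t]."""
    x = np.zeros(len(noise))
    for t in range(len(noise)):
        for i, a in enumerate(coeffs, start=1):
            if t - i >= 0:
                x[t] += a * x[t - i]
        x[t] += noise[t]
    return x

def simulate_ma(weights, noise):
    """Moving-average process: x[t] = sum_j b_j * e[t-j],
    i.e. the noise convolved with a finite pulse shape."""
    return np.convolve(noise, np.asarray(weights, dtype=float))[:len(noise)]
```

Feeding a single unit impulse through each model shows the difference in pulse structure the paper investigates: the MA response is finite, while the AR response decays geometrically.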
Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran
2013-01-01
Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143
Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran
2013-01-01
Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ΔT Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ΔT Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143
Parametric excitation in a magnetic tunnel junction-based spin torque oscillator
Dürrenfeld, P.; Iacocca, E.; Åkerman, J.; Muduli, P. K.
2014-02-03
Using microwave current injection at room temperature, we demonstrate parametric excitation of a magnetic tunnel junction (MTJ)-based spin-torque oscillator (STO). Parametric excitation is observed for currents below the auto-oscillation threshold, when the microwave current frequency f_e is twice the STO free-running frequency f_0. Above threshold, the MTJ becomes parametrically synchronized. In the synchronized state, the STO exhibits an integrated power up to 5 times higher and a linewidth reduction of two orders of magnitude, compared to free-running conditions. We also show that the parametric synchronization favors single mode oscillations in the case of multimode excitation.
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance DL(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ², or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
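The proposed family reduces to the standard CPL form for n = 1, which a few lines of code make concrete. The parameter values below are illustrative, not fits from the paper.

```python
import numpy as np

# Sketch of the generalized parametrization proposed in the abstract,
# w(z) = w0 + wa * (z / (1 + z))**n, which reduces to CPL for n = 1.
# The (w0, wa) values here are arbitrary illustrative choices.

def w_gcpl(z, w0=-1.0, wa=0.5, n=1.0):
    z = np.asarray(z, dtype=float)
    return w0 + wa * (z / (1.0 + z)) ** n

z = np.linspace(0.0, 2.0, 5)
print(w_gcpl(z, n=1.0))   # the CPL limit
print(w_gcpl(z, n=3.0))   # flatter at low z, steepening later
print(w_gcpl(0.0))        # w(0) = w0 for any n > 0
```

Increasing n suppresses the evolution of w(z) at low redshift while keeping the same asymptotic value, which is what lets the family interpolate between thawing-like and freezing-like behavior.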
Structure in gamma ray burst time profiles: Statistical Analysis 1
NASA Technical Reports Server (NTRS)
Lestrade, John Patrick
1992-01-01
Since its launch on April 5, 1991, the Burst And Transient Source Experiment (BATSE) has observed and recorded over 500 gamma-ray bursts (GRB). The analysis of the time profiles of these bursts has proven to be difficult. Attempts to find periodicities through Fourier analysis have been fruitless except in one celebrated case. Our goal is to be able to quantify the structure of the observed time profiles. Before applying this formalism to bursts, we have tested it on profiles composed of random Poissonian noise. This paper is a report of those preliminary results.
Surface parametrization and shape description
NASA Astrophysics Data System (ADS)
Brechbuehler, Christian; Gerig, Guido; Kuebler, Olaf
1992-09-01
Procedures for the parameterization and description of the surface of simply connected 3-D objects are presented. Critical issues for shape-based categorization and comparison of 3-D objects are addressed, which are generality with respect to object complexity, invariance to standard transformations, and descriptive power in terms of object geometry. Starting from segmented volume data, a relational data structure describing the adjacency of local surface elements is generated. The representation is used to parametrize the surface by defining a continuous, one-to-one mapping from the surface of the original object to the surface of a unit sphere. The mapping is constrained by two requirements, minimization of distortions and preservation of area. The former is formulated as the goal function of a nonlinear optimization problem and the latter as its constraints. Practicable starting values are obtained by an initial mapping based on a heat conduction model. In contrast to earlier approaches, the novel parameterization method provides a mapping of arbitrarily shaped simply connected objects, i.e., it performs an unfolding of convoluted surface structures. This global parameterization allows the systematic scanning of the object surface by the variation of two parameters. As one possible approach to shape analysis, it enables us to expand the object surface into a series of spherical harmonic functions, extending the concept of elliptical Fourier descriptors for 2-D closed curves. The novel parameterization overcomes the traditional limitations of expressing an object surface in polar coordinates, which restricts such descriptions to star-shaped objects. The numerical coefficients in the Fourier series form an object-centered, surface-oriented descriptor of the object's form. Rotating the coefficients in parameter space and object space puts the object into a standard position and yields a spherical harmonic descriptor which is invariant to translations, rotations
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
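The critical-time-step trade-off that motivates mixed implicit-explicit schemes can be illustrated on a much simpler problem than the paper's quadrilateral elements. The sketch below uses a uniform 1-D conduction rod purely as an analogue: the explicit forward-Euler update is only stable for dt ≤ dx²/(2α), while the implicit backward-Euler update is unconditionally stable at the same time step.

```python
import numpy as np

# 1-D analogue of the implicit-explicit stability trade-off discussed above.
# The rod, grid, and material values are illustrative, not from the paper.

def step_explicit(T, alpha, dx, dt):
    """Forward Euler on interior nodes; stable only for dt <= dx**2/(2*alpha)."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    return Tn

def step_implicit(T, alpha, dx, dt):
    """Backward Euler: unconditionally stable tridiagonal system."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.eye(n) + r * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0        # fixed-temperature boundary rows
    return np.linalg.solve(A, T)

alpha, dx = 1.0, 0.1
dt_crit = dx**2 / (2 * alpha)
T0 = np.zeros(11); T0[5] = 100.0     # hot spot in the middle of the rod

T_exp, T_imp = T0, T0
for _ in range(50):
    T_exp = step_explicit(T_exp, alpha, dx, 1.5 * dt_crit)  # violates bound
    T_imp = step_implicit(T_imp, alpha, dx, 1.5 * dt_crit)  # same dt

print(np.max(np.abs(T_exp)) > 1e6)   # explicit scheme has blown up
print(np.max(np.abs(T_imp)) <= 100)  # implicit stays bounded
```

A mixed scheme applies the cheap explicit update in regions where the mesh permits a large stable step and the implicit update where fine elements would otherwise force a tiny global time step.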
Stability over time: Is behavior analysis a trait psychology?
Vyse, Stuart
2004-01-01
Historically, behavior analysis and trait psychology have had little in common; however, recent developments in behavior analysis bring it closer to one of the core assumptions of the trait approach: the stability of behavior over time and, to a lesser extent, environments. The introduction of the concept of behavioral momentum and, in particular, the development of molar theories have produced some common features and concerns. Behavior-analytic theories of stability provide improved explanations of many everyday phenomena and make possible the expansion of behavior analysis into areas that have been inadequately addressed. ImagesFigure 1 PMID:22478416
Sumatriptan and lost productivity time: a time series analysis of diary data.
Miller, D W; Martin, B C; Loo, C M
1996-01-01
Two previously conducted clinical studies assessed lost nonworkplace activity time and lost workplace productivity time due to migraine symptoms in subjects using sumatriptan for 6 months to treat their migraines after a 12- to 18-week period of using their usual therapy without sumatriptan. Although statistically significant differences in lost nonworkplace activity time and lost workplace productivity time between the usual therapy and sumatriptan treatment periods were detected using the Wilcoxon signed-rank test, this test could not determine whether differences were attributable to inherent trends in the data. This current study employed time series analysis, which detects and controls for preexisting trends in data, to further explore the possibility that the observed reductions in lost time in the two clinical studies were related to management of the subjects with sumatriptan. The intercepts and slopes of the computed linear models suggest that the initiation of sumatriptan therapy produced savings of 0.8 hours of nonworkplace activity time and 0.5 hours of workplace productivity time per patient per week. These savings were sustained throughout the sumatriptan treatment period. Preexisting trends in the data were not detected in the models. Thus the productivity gains are not associated with either time effects or the statistical phenomenon of regression to the mean, whereby variables that are extreme in initial measurements tend to be closer to the center of the distribution in subsequent measurements. This strengthens the hypothesis that management of migraine with sumatriptan is associated with reductions in lost productivity time. PMID:9001842
State space analysis of timing: exploiting task redundancy to reduce sensitivity to timing
Cohen, Rajal G.
2012-01-01
Timing is central to many coordinated actions, and the temporal accuracy of central nervous system commands presents an important limit to skilled performance. Using target-oriented throwing in a virtual environment as an example task, this study presents a novel analysis that quantifies contributions of timing accuracy and shaping of hand trajectories to performance. Task analysis reveals that the result of a throw is fully determined by the projectile position and velocity at release; zero error can be achieved by a manifold of position and velocity combinations (solution manifold). Four predictions were tested. 1) Performers learn to release the projectile closer to the optimal moment for a given arm trajectory, achieving timing accuracy levels similar to those reported in other timing tasks (∼10 ms). 2) Performers develop a hand trajectory that follows the solution manifold such that zero error can be achieved without perfect timing. 3) Skilled performers exploit both routes to improvement more than unskilled performers. 4) Long-term improvement in skilled performance relies on continued optimization of the arm trajectory as timing limits are reached. Average and skilled subjects practiced for 6 and 15 days, respectively. In 6 days, both timing and trajectory alignment improved for all subjects, and skilled subjects showed an advantage in timing. With extended practice, performance continued to improve due to continued shaping of the trajectory, whereas timing accuracy reached an asymptote at 9 ms. We conclude that skilled subjects first maximize timing accuracy and then optimize trajectory shaping to compensate for intrinsic limitations of timing accuracy. PMID:22031769
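The solution-manifold idea is easy to make concrete with a toy ballistic model: many combinations of release state land the projectile exactly on the target, so zero error does not pin down a unique release. The model and numbers below are illustrative assumptions, not the study's virtual throwing task.

```python
import numpy as np

# Toy "solution manifold" for a throwing task: in a simple ballistic model,
# every (release height, release speed) pair below lands on the same target,
# so perfect timing of a single unique release state is not required.

G = 9.81

def landing_x(h, v):
    """Range of a projectile released horizontally at height h with speed v."""
    return v * np.sqrt(2 * h / G)

# Solve for the speed that hits x_target from each height: a 1-D manifold
# of equivalent release states.
x_target = 2.0
heights = np.linspace(0.5, 2.0, 4)
speeds = x_target / np.sqrt(2 * heights / G)

for h, v in zip(heights, speeds):
    print(round(landing_x(h, v), 6))   # every pair lands on 2.0
```

A trajectory that runs along such a manifold near release is exactly the "trajectory shaping" route to improvement: small timing errors move the release state along the manifold rather than off it, so they cost little accuracy.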
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Ramona Stefanescu, Elena; Sandri, Laura; Patra, Abani; Marzocchi, Warner; Sulpizio, Roberto
2014-05-01
Probabilistic hazard assessments of Pyroclastic Density Currents (PDCs) are of great interest for decision-making purposes. However, there is a limited number of published works available on this topic. Recent advances in computation and statistical methods are offering new opportunities beyond classical Monte Carlo (MC) sampling, which is simple and robust but usually slow and computationally intractable. In this work, the Titan2D numerical simulator has been coupled to Polynomial Chaos Quadrature (PCQ) to propagate the simulator parametric uncertainty and compute VEI-based probabilistic hazard maps of dense PDCs formed as a result of column collapse at Vesuvius volcano, Italy. Due to the lack of knowledge about the exact conditions under which these PDCs will form, Probability Distribution Functions (PDFs) are assigned to the simulator input parameters (Bed Friction Angle and Volume) according to three VEI sizes. Uniform distributions were used for both parameters since there is insufficient information to assume that any value in the range is more likely than any other value. Reasonable (and compatible) ranges for both variables were constrained according to past eruptions at the Vesuvius volcanic system. On the basis of the reasoning above, a number of quadrature points were taken within those ranges, which resulted in one execution of the Titan2D code at each quadrature point. With a computational cost several orders of magnitude smaller than MC, exceedance probabilities for a given threshold of flow depth (and conditional on the occurrence of VEI3, VEI4 and VEI5 eruptions) were calculated using PCQ. Moreover, PCQ can be run at different threshold values of the same output variable (flow depth, speed, kinetic energy, …) and, therefore, it can serve to compute Exceedance Probability curves (aka hazard curves) at singular points inside the hazard domain, representing the most important and useful scientific input to quantitative risk
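The cost advantage of quadrature over Monte Carlo for smooth simulator outputs can be sketched with a toy stand-in. Everything here is an illustrative assumption: the "simulator" is an arbitrary smooth function, not Titan2D, and the parameter ranges are not the Vesuvius values.

```python
import numpy as np

# Toy illustration of the quadrature-vs-Monte-Carlo trade-off behind PCQ:
# propagate two uniformly distributed inputs through a stand-in simulator
# and estimate the mean output. 16 quadrature runs vs 20000 MC runs.

def simulator(volume, friction_angle):
    # Arbitrary smooth stand-in for Titan2D output (e.g. a flow-depth proxy).
    return volume**0.5 * np.cos(np.radians(friction_angle))

v_lo, v_hi = 1.0, 9.0            # volume range (illustrative)
a_lo, a_hi = 10.0, 30.0          # bed friction angle range, degrees

# 4x4 Gauss-Legendre tensor grid: only 16 simulator evaluations.
nodes, weights = np.polynomial.legendre.leggauss(4)
v = 0.5 * (v_hi - v_lo) * nodes + 0.5 * (v_hi + v_lo)
a = 0.5 * (a_hi - a_lo) * nodes + 0.5 * (a_hi + a_lo)
W = np.outer(weights, weights) / 4.0       # normalized to give a mean
mean_quad = np.sum(W * simulator(v[:, None], a[None, :]))

# Plain Monte Carlo needs far more runs for comparable accuracy.
rng = np.random.default_rng(1)
mean_mc = simulator(rng.uniform(v_lo, v_hi, 20000),
                    rng.uniform(a_lo, a_hi, 20000)).mean()

print(mean_quad, mean_mc)   # the 16-run quadrature matches the 20000-run MC
```

For exceedance probabilities, PCQ works with the polynomial surrogate built from these same quadrature runs rather than with the raw indicator function, which is what keeps the run count per threshold so low.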
Trend analysis of long-term temperature time series in the Greater Toronto Area (GTA)
NASA Astrophysics Data System (ADS)
Mohsin, Tanzina; Gough, William A.
2010-08-01
As the majority of the world’s population is living in urban environments, there is growing interest in studying local urban climates. In this paper, for the first time, the long-term trends (31-162 years) of temperature change have been analyzed for the Greater Toronto Area (GTA). Annual and seasonal time series for a number of urban, suburban, and rural weather stations are considered. Non-parametric statistical techniques such as the Mann-Kendall test and Theil-Sen slope estimation are used primarily to assess the significance of trends and to detect them, and the sequential Mann test is used to detect any abrupt climate change. Statistically significant trends for annual mean and minimum temperatures are detected for almost all stations in the GTA. Winter is found to be the most coherent season contributing substantially to the increase in annual minimum temperature. The analyses of the abrupt changes in temperature suggest that the beginning of the increasing trend in Toronto started after the 1920s and then continued to increase to the 1960s. For all stations, there is a significant increase of annual and seasonal (particularly winter) temperatures after the 1980s. In terms of the linkage between urbanization and spatiotemporal thermal patterns, significant linear trends in annual mean and minimum temperature are detected for the period of 1878-1978 for the urban station, Toronto, while for the rural counterparts, the trends are not significant. Also, for all stations in the GTA that are situated in all directions except south of Toronto, substantial temperature change is detected for the periods of 1970-2000 and 1989-2000. It is concluded that the urbanization in the GTA has significantly contributed to the increase of the annual mean temperatures during the past three decades. In addition to urbanization, the influence of local climate, topography, and larger scale warming are incorporated in the analysis of the trends.
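The two non-parametric tools named in the abstract are short enough to sketch directly: the Mann-Kendall S statistic measures trend direction and significance from pairwise comparisons, and the Theil-Sen estimator takes the median of all pairwise slopes. The temperature series below is synthetic, and the sketch omits the tie and seasonality corrections a full analysis would use.

```python
import numpy as np
from itertools import combinations

# Minimal sketches of the Mann-Kendall S statistic (trend detection) and
# the Theil-Sen slope (trend magnitude), without tie/variance corrections.

def mann_kendall_s(x):
    """S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    return sum(np.sign(x[j] - x[i]) for i, j in combinations(range(len(x)), 2))

def theil_sen_slope(t, x):
    """Median of all pairwise slopes: robust to outliers."""
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)]
    return float(np.median(slopes))

# Synthetic annual temperatures with a 0.05 deg/yr warming trend plus noise.
years = np.arange(1970, 1980)
temps = 10.0 + 0.05 * (years - 1970) + np.array(
    [0.0, -0.1, 0.1, 0.0, -0.05, 0.05, 0.0, 0.1, -0.1, 0.0])

print(mann_kendall_s(temps) > 0)                # warming trend detected
print(round(theil_sen_slope(years, temps), 3))  # recovers ~0.05 deg/yr
```

Because both statistics depend only on ranks and medians of pairwise differences, a few anomalous years do not drag the estimated trend the way they would in ordinary least squares, which is why these tests are standard for station temperature records.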
Time-invariant measurement of time-varying bioimpedance using vector impedance analysis.
Sanchez, B; Louarroudi, E; Pintelon, R
2015-03-01
When stepped-sine impedance spectroscopy measurements are carried out on (periodically) time-varying bio-systems, the inherent time-variant (time-periodic) parts are traditionally ignored or mitigated by filtering. The latter, however, lacks theoretical foundation and, in this paper, it is shown that it only works under certain specific conditions. Besides, we propose an alternative method, based on multisine signals, that exploits the non-stationary nature in time-varying bio-systems with a dominant periodic character, such as cardiovascular and respiratory systems, or measurements interfered with by their physiological activities. The novel method extracts the best—in a mean square sense—linear time-invariant (BLTI) impedance approximation ZBLTI(jω) of a periodically time-varying (PTV) impedance ZPTV(jω, t) as well as its time-periodic part. Relying on the geometrical interpretation of the BLTI concept, a new impedance analysis tool, called vector impedance analysis (VIA), is also presented. The theoretical and practical aspects are validated through measurements performed on a PTV dummy circuit and on an in vivo myocardial tissue. PMID:25700023
Time-resolved scanning electron microscopy with polarization analysis
NASA Astrophysics Data System (ADS)
Frömter, Robert; Kloodt, Fabian; Rößler, Stefan; Frauen, Axel; Staeck, Philipp; Cavicchia, Demetrio R.; Bocklage, Lars; Röbisch, Volker; Quandt, Eckhard; Oepen, Hans Peter
2016-04-01
We demonstrate the feasibility of investigating periodically driven magnetization dynamics in a scanning electron microscope with polarization analysis based on spin-polarized low-energy electron diffraction. With the present setup, analyzing the time structure of the scattering events, we obtain a temporal resolution of 700 ps, which is demonstrated by means of imaging the field-driven 100 MHz gyration of the vortex in a soft-magnetic FeCoSiB square. Owing to the efficient intrinsic timing scheme, high-quality movies, giving two components of the magnetization simultaneously, can be recorded on the time scale of hours.
Michalopoulou, Zoi-Heleni; Pole, Andrew
2016-07-01
The dispersion pattern of a received signal is critical for understanding physical properties of the propagation medium. The objective of this work is to estimate accurately sediment sound speed using modal arrival times obtained from dispersion curves extracted via time-frequency analysis of acoustic signals. A particle filter is used that estimates probability density functions of modal frequencies arriving at specific times. Employing this information, probability density functions of arrival times for modal frequencies are constructed. Samples of arrival time differences are then obtained and are propagated backwards through an inverse acoustic model. As a result, probability density functions of sediment sound speed are estimated. Maximum a posteriori estimates indicate that inversion is successful. It is also demonstrated that multiple frequency processing offers an advantage over inversion at a single frequency, producing results with reduced variance. PMID:27475202
NASA Technical Reports Server (NTRS)
Coverse, G. L.
1984-01-01
A turbine modeling technique has been developed which will enable the user to obtain consistent and rapid off-design performance from design point input. This technique is applicable to both axial and radial flow turbines with flow sizes ranging from about one pound per second to several hundred pounds per second. The axial flow turbines may or may not include variable geometry in the first stage nozzle. A user-specified option will also permit the calculation of design point cooling flow levels and corresponding changes in efficiency for the axial flow turbines. The modeling technique has been incorporated into a time-sharing program in order to facilitate its use. Because this report contains a description of the input/output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the second of a three volume set. The titles of the three volumes are as follows: (1) Volume 1 CMGEN USER's Manual (Parametric Compressor Generator); (2) Volume 2 PART USER's Manual (Parametric Turbine); (3) Volume 3 MODFAN USER's Manual (Parametric Modulation Flow Fan).
Parametric Modeling for Fluid Systems
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Martinez, Jonathan
2013-01-01
Fluid Systems involves different projects that require parametric modeling, which is a model that maintains consistent relationships between elements as is manipulated. One of these projects is the Neo Liquid Propellant Testbed, which is part of Rocket U. As part of Rocket U (Rocket University), engineers at NASA's Kennedy Space Center in Florida have the opportunity to develop critical flight skills as they design, build and launch high-powered rockets. To build the Neo testbed; hardware from the Space Shuttle Program was repurposed. Modeling for Neo, included: fittings, valves, frames and tubing, between others. These models help in the review process, to make sure regulations are being followed. Another fluid systems project that required modeling is Plant Habitat's TCUI test project. Plant Habitat is a plan to develop a large growth chamber to learn the effects of long-duration microgravity exposure to plants in space. Work for this project included the design and modeling of a duct vent for flow test. Parametric Modeling for these projects was done using Creo Parametric 2.0.
Analysis of Complex Intervention Effects in Time-Series Experiments.
ERIC Educational Resources Information Center
Bower, Cathleen
An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
ADAPTIVE DATA ANALYSIS OF COMPLEX FLUCTUATIONS IN PHYSIOLOGIC TIME SERIES
PENG, C.-K.; COSTA, MADALENA; GOLDBERGER, ARY L.
2009-01-01
We introduce a generic framework of dynamical complexity to understand and quantify fluctuations of physiologic time series. In particular, we discuss the importance of applying adaptive data analysis techniques, such as the empirical mode decomposition algorithm, to address the challenges of nonlinearity and nonstationarity that are typically exhibited in biological fluctuations. PMID:20041035
Real-time chemical analysis of aerosol particles
Yang, M.; Whitten, W.B.; Ramsey, J.M.
1995-04-01
An important aspect of environmental atmospheric monitoring requires the characterization of airborne microparticles and aerosols. Unfortunately, traditional sample collection and handling techniques are prone to contamination and interference effects that can render an analysis invalid. These problems can be avoided by using real-time atmospheric sampling techniques followed by immediate mass spectrometric analysis. The former is achieved in these experiments via a two-stage differential pumping scheme that is attached directly to a commercially available quadrupole ion trap mass spectrometer. Particles produced by an external particle generator enter the apparatus and immediately pass through two cw laser/fiberoptic based detectors positioned two centimeters apart. Timing electronics measure the time between detection events, estimate the particle's arrival in the center of the ion trap, and control the firing of a YAG laser. Ions produced when the UV laser light ablates the particle's surface are stored by the ion trap for mass analysis. Ion trap mass spectrometers have several advantages over conventional time-of-flight instruments. First, they are capable of MS/MS analysis by the collisional dissociation of a stored species. This permits complete chemical characterization of airborne samples. Second, ion traps are small and lend themselves to portable, field-oriented applications.
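The timing scheme reduces to simple extrapolation: two detection events a known distance apart give the particle's velocity, from which the electronics predict its arrival at the trap center and schedule the ablation laser. The 2 cm detector spacing comes from the abstract; the detector-to-trap-center distance and the event times below are illustrative assumptions.

```python
# Back-of-envelope sketch of the two-detector timing scheme described above.
# The 2 cm detector gap is from the abstract; the 3 cm gap-to-center distance
# and the example timestamps are illustrative assumptions.

def schedule_laser(t1, t2, detector_gap_cm=2.0, gap_to_center_cm=3.0):
    """Return (velocity in cm/s, absolute firing time in s) from two events."""
    velocity = detector_gap_cm / (t2 - t1)       # speed between the detectors
    fire_at = t2 + gap_to_center_cm / velocity   # extrapolate to trap center
    return velocity, fire_at

v, t_fire = schedule_laser(t1=0.000, t2=0.004)   # 2 cm in 4 ms
print(v)        # 500.0 cm/s
print(t_fire)   # ≈ 0.01 s
```

The scheme assumes the particle's velocity is constant over the short flight from the second detector to the trap center, which is reasonable for the few-centimeter distances involved.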
Identification of human operator performance models utilizing time series analysis
NASA Technical Reports Server (NTRS)
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
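The core of such an identification procedure can be sketched with a least-squares fit of a discrete-time input/output model. The first-order ARX structure, the synthetic tracking data, and the parameter values below are illustrative assumptions; the paper's actual model order and estimator may differ.

```python
import numpy as np

# Sketch of identifying an operator-style transfer function from input/output
# tracking records, using a first-order ARX model
#   y[k] = a*y[k-1] + b*u[k-1] + e[k]
# fitted by ordinary least squares. Data and model order are illustrative.

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(500)                 # tracking input signal
y = np.zeros(500)
for k in range(1, 500):
    y[k] = a_true * y[k-1] + b_true * u[k-1] + 0.01 * rng.standard_normal()

# Regression: stack [y[k-1], u[k-1]] rows and solve for (a, b).
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 2))                    # close to [0.8, 0.5]
```

Refitting the same model on segments recorded under different stress levels would show up as changes in the estimated (a, b), which is the kind of transfer-function variation the study set out to track.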