NASA Astrophysics Data System (ADS)
Frank Pai, P.
2013-04-01
This paper points out the differences between linear and nonlinear system identification tasks, shows that time-frequency analysis is most appropriate for nonlinearity identification, and presents advanced signal processing techniques that combine time-frequency decomposition and perturbation methods for parametric and non-parametric identification of nonlinear dynamical systems. The Hilbert-Huang transform (HHT) is a recent data-driven adaptive time-frequency analysis technique that combines empirical mode decomposition (EMD) and the Hilbert transform (HT). Because EMD does not use predetermined basis functions and function orthogonality for component extraction, HHT provides more concise component decomposition and more accurate time-frequency analysis than the short-time Fourier transform and wavelet transform for extraction of system characteristics and nonlinearities. However, HHT's accuracy suffers seriously from the end effect caused by the discontinuity-induced Gibbs' phenomenon. Moreover, because HHT requires a long set of data obtained by high-frequency sampling, it is not appropriate for online frequency tracking. This paper presents a conjugate-pair decomposition (CPD) method that requires only a few recent data points sampled at a low frequency for sliding-window point-by-point adaptive time-frequency analysis and can be used for online frequency tracking. To improve adaptive time-frequency analysis, a methodology is developed by combining EMD and CPD for noise filtering in the time domain, reducing the end effect, and resolving other mathematical and numerical problems in time-frequency analysis. For parametric identification of a nonlinear system, the methodology processes one steady-state response and/or one free damped transient response and uses amplitude-dependent dynamic characteristics derived from perturbation analysis to determine the type and order of nonlinearity and system parameters. For non-parametric identification, the methodology
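The sliding-window, few-sample idea behind online frequency tracking can be illustrated with a three-point estimator. This is a simplified stand-in for the paper's conjugate-pair decomposition, not the authors' actual algorithm, and the signal parameters below are made up: for any sampled sinusoid, x[n-1] + x[n+1] = 2 cos(omega*dt) x[n], so the instantaneous frequency follows from just three recent samples.

```python
import math

def three_point_freq(x_prev, x_curr, x_next, dt):
    """Estimate angular frequency of a sampled sinusoid from three
    consecutive samples, using x[n-1] + x[n+1] = 2*cos(omega*dt)*x[n]."""
    c = (x_prev + x_next) / (2.0 * x_curr)
    c = max(-1.0, min(1.0, c))  # guard against round-off outside [-1, 1]
    return math.acos(c) / dt

# Track the frequency of a 2 Hz sine sampled at 40 Hz (assumed values).
f_true, fs = 2.0, 40.0
dt = 1.0 / fs
samples = [math.sin(2 * math.pi * f_true * n * dt + 0.3) for n in range(60)]

estimates = []
for n in range(1, len(samples) - 1):
    if abs(samples[n]) > 0.5:   # skip near-zero samples (division blows up)
        omega = three_point_freq(samples[n - 1], samples[n], samples[n + 1], dt)
        estimates.append(omega / (2 * math.pi))

f_est = sorted(estimates)[len(estimates) // 2]  # median is robust to outliers
print(f"estimated frequency: {f_est:.4f} Hz")
```

On a clean sinusoid the identity is exact, so every window recovers the true frequency; with noisy data the median over the sliding window is what keeps the point-by-point estimate usable.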
Parametric time-series analysis of daily air pollutants of city of Shumen, Bulgaria
NASA Astrophysics Data System (ADS)
Ivanov, A.; Voynikova, D.; Gocheva-Ilieva, S.; Boyadzhiev, D.
2012-10-01
Urban air pollution is one of the main factors determining ambient air quality, affecting both human health and the environment. In this paper, parametric time-series models are obtained for studying the distribution over time of the primary pollutants sulphur and nitrogen oxides and particulate matter, and of a secondary pollutant, ground-level ozone, in the town of Shumen, Bulgaria. The methods of factor analysis and ARIMA are used to carry out the time-series analysis based on hourly average data from 2011 and the first quarter of 2012. The constructed models are applied to short-term air pollution forecasting. The results are evaluated against national and European regulatory indices. The sources of pollutants in the region and their harmful effects on human health are also discussed.
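As a minimal sketch of the ARIMA-style modeling mentioned above, the autoregressive core can be reduced to a bare AR(1) fit on a synthetic hourly series; this is an illustration of the model family, not the study's fitted models, and all numbers are assumed.

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + noise."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Synthetic hourly "pollutant" series with known persistence phi = 0.8.
rng = random.Random(1)
phi_true = 0.8
x = [0.0]
for _ in range(500):
    x.append(phi_true * x[-1] + rng.gauss(0, 1.0))

phi_hat = fit_ar1(x)
forecast = phi_hat * x[-1]      # one-step-ahead (next hour) prediction
print(f"phi_hat = {phi_hat:.3f}, next-hour forecast = {forecast:.2f}")
```

A full ARIMA model adds differencing and moving-average terms on top of this autoregressive step, but the estimate-then-forecast pattern is the same.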
NASA Astrophysics Data System (ADS)
Jiang, Shifeng; Treps, Nicolas; Fabre, Claude
2012-04-01
We present in this paper a general model for determining the quantum properties of the light generated by a synchronously pumped optical parametric oscillator (SPOPO) operating below threshold. This model considers time and frequency on an equal footing, which allows us to find new quantum properties, related for example to the carrier envelope offset (CEO) phase, and to consider situations that are close to real experiments. We show that, in addition to multimode squeezing in the so-called ‘supermodes’, the system exhibits quadrature entanglement between frequency combs of opposite CEO phases. We have also determined the quantum properties of the individual pulses and their quantum correlations with the neighboring pulses. Finally, we determine the quantum Cramer-Rao limit for an ultra-short time delay measurement using a given number of pulses generated by the SPOPO.
Luo, Sheng
2013-01-01
Impairment caused by Parkinson’s disease (PD) is multidimensional (e.g., sensoria, functions, and cognition) and progressive. Its multidimensional nature precludes a single outcome to measure disease progression. Clinical trials of PD use multiple categorical and continuous longitudinal outcomes to assess the treatment effects on overall improvement. A terminal event such as death or dropout can stop the follow-up process. Moreover, the time to the terminal event may be dependent on the multivariate longitudinal measurements. In this article, we consider a joint random-effects model for the correlated outcomes. A multilevel item response theory model is used for the multivariate longitudinal outcomes, and a parametric accelerated failure time model is used for the failure time because of the violation of the proportional hazards assumption. These two models are linked via random effects. The Bayesian inference via MCMC is implemented in the ‘BUGS’ language. Our proposed method is evaluated by a simulation study and is applied to the DATATOP study, a motivating clinical trial to determine if deprenyl slows the progression of PD. PMID:24009073
Luo, Sheng
2014-02-20
Impairment caused by Parkinson's disease (PD) is multidimensional (e.g., sensoria, functions, and cognition) and progressive. Its multidimensional nature precludes a single outcome to measure disease progression. Clinical trials of PD use multiple categorical and continuous longitudinal outcomes to assess the treatment effects on overall improvement. A terminal event such as death or dropout can stop the follow-up process. Moreover, the time to the terminal event may be dependent on the multivariate longitudinal measurements. In this article, we consider a joint random-effects model for the correlated outcomes. A multilevel item response theory model is used for the multivariate longitudinal outcomes, and a parametric accelerated failure time model is used for the failure time because of the violation of the proportional hazards assumption. These two models are linked via random effects. The Bayesian inference via MCMC is implemented in the 'BUGS' language. Our proposed method is evaluated by a simulation study and is applied to the DATATOP study, a motivating clinical trial to determine if deprenyl slows the progression of PD. PMID:24009073
NASA Astrophysics Data System (ADS)
Choi, Jongseong
The performance of a hypersonic flight vehicle will depend on existing materials and fuels; this work presents the performance of the ideal scramjet engine for three different combustion chamber materials and three different candidate fuels. Engine performance is explored by parametric cycle analysis for the ideal scramjet as a function of the material's maximum service temperature and the lower heating value of the fuel. The thermodynamic analysis is based on the Brayton cycle, as similarly employed in describing the performance of the ideal ramjet, turbojet, and turbofan engines. The objective of this work is to explore combustion chamber material operating temperatures and candidate fuels for a scramjet propulsion system and to show how they relate to seven scramjet engine performance parameters: specific thrust, fuel-to-air ratio, thrust-specific fuel consumption, thermal efficiency, propulsive efficiency, overall efficiency, and thrust flux. To the author's knowledge, such an analysis has not previously appeared in the scientific literature. This work yields simple algebraic equations for scramjet performance similar to those for the ideal ramjet, turbojet, and turbofan engines.
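The kind of Brayton-cycle parametric relations referred to above can be sketched with ideal ramjet-style formulas in which the combustor temperature limit stands in for the material's maximum service temperature. The scramjet-specific relations in this work may differ, and every number below (flight condition, material limit, heating value) is an assumption for illustration.

```python
import math

# Illustrative ideal Brayton-cycle analysis (ramjet-style relations used
# as a stand-in for the scramjet case). All numbers are assumed.
gamma, cp, R = 1.4, 1004.0, 287.0   # air properties, J/(kg*K)
T0, M0 = 220.0, 6.0                 # freestream temperature (K) and Mach
Tmax = 2200.0                       # combustor limit set by the material (K)
h_pr = 42.8e6                       # lower heating value of the fuel (J/kg)

a0 = math.sqrt(gamma * R * T0)          # freestream speed of sound
tau_r = 1 + 0.5 * (gamma - 1) * M0**2   # ram temperature ratio
tau_lam = Tmax / T0                     # enthalpy limit ratio

specific_thrust = a0 * M0 * (math.sqrt(tau_lam / tau_r) - 1)  # N per kg/s
f = cp * T0 * (tau_lam - tau_r) / h_pr                        # fuel-air ratio
tsfc = f / specific_thrust                                    # kg/(N*s)
eta_thermal = 1 - 1 / tau_r

print(f"specific thrust = {specific_thrust:.1f} N/(kg/s)")
print(f"fuel-air ratio  = {f:.4f}")
print(f"thermal eff.    = {eta_thermal:.3f}")
```

Sweeping `Tmax` over candidate materials and `h_pr` over candidate fuels turns these few lines into the kind of parametric study the abstract describes.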
Parametric analysis of ATT configurations.
NASA Technical Reports Server (NTRS)
Lange, R. H.
1972-01-01
This paper describes the results of a Lockheed parametric analysis of the performance, environmental factors, and economics of an advanced commercial transport envisioned for operation in the post-1985 time period. The design parameters investigated include cruise speeds from Mach 0.85 to Mach 1.0, passenger capacities from 200 to 500, ranges of 2800 to 5500 nautical miles, and noise level criteria. NASA high performance configurations and alternate configurations are operated over domestic and international route structures. Indirect and direct costs and return on investment are determined for approximately 40 candidate aircraft configurations. The candidate configurations are input to an aircraft sizing and performance program which includes a subroutine for noise criteria. Comparisons are made between preferred configurations on the basis of maximum return on investment as a function of payload, range, and design cruise speed.
Parametric Transformation Analysis
NASA Technical Reports Server (NTRS)
Gary, G. Allan
2003-01-01
Because twisted coronal features are important proxies for predicting solar eruptive events, yet are not clearly understood, we present new results to resolve the complex, non-potential magnetic field configurations of active regions. This research uses free-form deformation mathematics to generate the associated coronal magnetic field. We use a parametric representation of the magnetic field lines such that the field lines can be manipulated to match the structure of EUV and SXR coronal loops. The objective is to derive sigmoidal magnetic field solutions that allow regions with plasma beta greater than 1 to be included, aligned and non-aligned electric currents to be calculated, and the Lorentz force to be determined. The advantage of our technique is that the solution is independent of the unknown upper and side boundary conditions, allows non-vanishing magnetic forces, and provides a global magnetic field solution that contains high- and low-beta regimes and is consistent with all the coronal images of the region. We show that the mathematical description is unique and physical.
Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.
2012-01-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
NASA Astrophysics Data System (ADS)
Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Borucki, William J.; Bryson, Stephen T.; Buchhave, Lars A.; Burke, Christopher J.; Caldwell, Douglas A.; Charbonneau, David; Clarke, Bruce D.; Cochran, William D.; Désert, Jean-Michel; Endl, Michael; Everett, Mark E.; Fischer, Debra A.; Gautier, Thomas N., III; Gilliland, Ron L.; Jenkins, Jon M.; Haas, Michael R.; Horch, Elliott; Howell, Steve B.; Ibrahim, Khadeejah A.; Isaacson, Howard; Koch, David G.; Latham, David W.; Li, Jie; Lucas, Philip; MacQueen, Phillip J.; Marcy, Geoffrey W.; McCauliff, Sean; Mullally, Fergal R.; Quinn, Samuel N.; Quintana, Elisa; Shporer, Avi; Still, Martin; Tenenbaum, Peter; Thompson, Susan E.; Torres, Guillermo; Twicken, Joseph D.; Wohler, Bill; Kepler Science Team
2012-05-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Buchhave, Lars A.; Collaboration: Kepler Science Team; and others
2012-05-10
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
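The non-parametric significance test described in these abstracts can be sketched as a permutation test on the correlation of two TTV series: shuffle one series many times and ask how often a correlation as strong as the observed one arises by chance. The data below are synthetic (an assumed anti-correlated pair, as expected for interacting planets); real Kepler transit times are not reproduced here.

```python
import math
import random

def pearson_r(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def permutation_pvalue(ttv1, ttv2, n_perm=2000, seed=42):
    """Fraction of random orderings whose |correlation| is at least as
    large as the observed one (distribution-free significance)."""
    rng = random.Random(seed)
    observed = abs(pearson_r(ttv1, ttv2))
    shuffled = list(ttv2)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(pearson_r(ttv1, shuffled)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical anti-correlated TTVs (minutes) for an interacting pair.
rng = random.Random(7)
signal = [math.sin(0.3 * k) for k in range(40)]
ttv_inner = [s + rng.gauss(0, 0.2) for s in signal]
ttv_outer = [-2 * s + rng.gauss(0, 0.2) for s in signal]

p = permutation_pvalue(ttv_inner, ttv_outer)
print(f"permutation p-value: {p:.4f}")
```

A small p-value supports the claim that the two candidates share one physical system; the published method's exact statistic may differ, but the permutation logic is the standard non-parametric route.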
Parametric instabilities in picosecond time scales
Baldis, H.A.; Rozmus, W.; Labaune, C.; Mounaix, Ph.; Pesme, D.; Baton, S.; Tikhonchuk, V.T.
1993-03-01
The coupling of intense laser light with plasmas is a rich field of plasma physics, with many applications. Among these are inertial confinement fusion (ICF), x-ray lasers, particle acceleration, and x-ray sources. Parametric instabilities have been studied for many years because of their importance to ICF; with laser pulses of approximately a nanosecond duration and laser intensities in the range 10^14 to 10^15 W/cm^2, these instabilities are of crucial concern because of a number of detrimental effects. Although the laser pulse durations of interest for these studies are relatively long, it has become evident in recent years that understanding these instabilities requires their characterization and analysis on picosecond time scales. At the laser intensities of interest, the growth time for stimulated Brillouin scattering (SBS) is of the order of picoseconds, and an order of magnitude shorter for stimulated Raman scattering (SRS). In this paper the authors discuss SBS and SRS in the context of their evolution on picosecond time scales. They describe the fundamental concepts associated with their growth and saturation, and recent work on the nonlinear treatment required for modeling these instabilities at high laser intensities.
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; Ostien, Jakob T.; Lai, Zhengshou
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
Analysis of parametric transformer with rectifier load
Ichinokura, O.; Jinzenji, T.; Tajima, K.
1993-03-01
This paper describes a push-pull parametric transformer constructed using a pair of orthogonal-cores. The operating characteristics of the parametric transformer with a rectifier load were analyzed based on SPICE simulations. The analysis results show good agreement with experiment. It was found that the input surge current of the full-wave rectifier circuit with a smoothing capacitor can be compensated by the parametric transformer. Use of the parametric transformer as a power stabilizer is anticipated owing to its various functions such as for voltage regulation and overload protection.
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-09-01
The radio sources within the most recent celestial reference frame (CRF) catalog, ICRF2, are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet such statistics are not applicable unconditionally, and they are also ambiguous. Ignoring systematics in the positions of the datum sources inevitably degrades the quality of the frame and, therefore, of derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data autonomously. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This also allows us to introduce special handling sources into the datum definition, leading on average to 30% more sources in the datum. We find that not only can the celestial pole offsets (CPO) be improved by more than 10% due to the improved geometry, but the station positions, especially in the early years of VLBI, can also benefit greatly.
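The knot-finding idea behind multivariate adaptive regression splines can be illustrated in miniature with a single hinge basis function and an exhaustive knot search. This is a one-step sketch, not the full recursive-partitioning MARS algorithm, and the position series below is synthetic.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_hinge(xs, ys, knot):
    """Least-squares fit of y ~ b0 + b1*x + b2*max(0, x - knot)."""
    basis = [[1.0, x, max(0.0, x - knot)] for x in xs]
    A = [[sum(bi[r] * bi[c] for bi in basis) for c in range(3)] for r in range(3)]
    b = [sum(bi[r] * y for bi, y in zip(basis, ys)) for r in range(3)]
    beta = solve3(A, b)
    sse = sum((y - sum(br * v for br, v in zip(bi, beta))) ** 2
              for bi, y in zip(basis, ys))
    return beta, sse

# Synthetic source-position series with a slope change at t = 5.
ts = [0.25 * k for k in range(41)]            # t = 0.0 .. 10.0
pos = [2.0 + max(0.0, t - 5.0) for t in ts]   # flat, then drifting

best_knot, best_sse = None, float("inf")
for cand in [0.5 * k for k in range(2, 19)]:  # interior candidate knots
    _, sse = fit_hinge(ts, pos, cand)
    if sse < best_sse:
        best_knot, best_sse = cand, sse

print(f"selected knot: {best_knot}")
```

MARS iterates this search, adding hinge pairs greedily and then pruning, which is what lets it place knots in source-position time series autonomously.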
Adaptive time-frequency parametrization of epileptic spikes
NASA Astrophysics Data System (ADS)
Durka, Piotr J.
2004-05-01
Adaptive time-frequency approximations of signals have proven to be a valuable tool in electroencephalogram (EEG) analysis and research, where it is believed that oscillatory phenomena play a crucial role in the brain’s information processing. This paper extends this paradigm to the nonoscillating structures such as the epileptic EEG spikes, and presents the advantages of their parametrization in general terms such as amplitude and half-width. A simple detector of epileptic spikes in the space of these parameters, tested on a limited data set, gives very promising results. It also provides a direct distinction between randomly occurring spikes or spike/wave complexes and rhythmic discharges.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
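A minimal CER fit can be sketched as a power-law regression in log-log space, the most common functional form for mapping a metric like mass into cost. The metric, data, and coefficients below are hypothetical, generated from an exact power law for illustration.

```python
import math

def fit_power_law_cer(metrics, costs):
    """Fit cost = a * metric^b by ordinary least squares in log-log space."""
    lx = [math.log(m) for m in metrics]
    ly = [math.log(c) for c in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical analogy data: subsystem mass (kg) vs. cost (M$).
masses = [50, 120, 300, 800, 2000]
costs = [2.0 * m ** 0.7 for m in masses]

a, b = fit_power_law_cer(masses, costs)
estimate = a * 500 ** b      # CER applied to a new 500 kg design
print(f"CER: cost = {a:.2f} * mass^{b:.2f}; 500 kg -> {estimate:.1f} M$")
```

With real analogy data the fit would not be exact, and the residual scatter is itself a useful measure of how strong a cost driver the chosen metric is.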
Parametric systems analysis for tandem mirror hybrids
Lee, J.D.; Chapin, D.L.; Chi, J.W.H.
1980-09-01
Fusion-fission systems, consisting of fissile-producing fusion hybrids that combine a tandem mirror fusion driver with various blanket types, together with net fissile-consuming LWRs, have been modeled and analyzed parametrically. Analysis to date indicates that hybrids can be competitive with mined uranium when the U3O8 cost is about $100/lb, adding less than 25% to the present-day cost of power from LWRs. Of the three blanket types considered, uranium fast fission (UFF), thorium fast fission (ThFF), and thorium fission-suppressed (ThFS), the ThFS blanket has a modest economic advantage under most conditions and has higher support ratios and potential safety advantages under all conditions.
Meta-Analysis of Candidate Gene Effects Using Bayesian Parametric and Non-Parametric Approaches
Wu, Xiao-Lin; Gianola, Daniel; Rosa, Guilherme J. M.; Weigel, Kent A.
2014-01-01
Candidate gene (CG) approaches provide a strategy for identification and characterization of major genes underlying complex phenotypes such as production traits and susceptibility to diseases, but the conclusions tend to be inconsistent across individual studies. Meta-analysis approaches can deal with these situations, e.g., by pooling effect-size estimates or combining P values from multiple studies. In this paper, we evaluated the performance of two types of statistical models, parametric and non-parametric, for meta-analysis of CG effects using simulated data. Both models estimated a “central” effect size while taking into account heterogeneity over individual studies. The empirical distribution of study-specific CG effects was multi-modal. The parametric model assumed a normal distribution for the study-specific CG effects whereas the non-parametric model relaxed this assumption by posing a more general distribution with a Dirichlet process prior (DPP). Results indicated that the meta-analysis approaches could reduce false positive or false negative rates by pooling strengths from multiple studies, as compared to individual studies. In addition, the non-parametric, DPP model captured the variation of the “data” better than its parametric counterpart. PMID:25057320
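For contrast with the Bayesian parametric and DPP models above, the classical pooling of effect-size estimates mentioned in the abstract can be sketched as fixed-effect inverse-variance weighting. The study estimates below are hypothetical, and this simple pooling ignores the between-study heterogeneity that the paper's models are designed to capture.

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Inverse-variance weighted 'central' effect size across studies."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical CG effect estimates (and standard errors) from 5 studies.
effects = [0.30, 0.45, 0.10, 0.38, 0.25]
ses = [0.10, 0.15, 0.20, 0.12, 0.10]

pooled, pooled_se = pool_fixed_effect(effects, ses)
print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

Precise studies dominate the pooled estimate; a random-effects or DPP model would instead let the study-specific effects vary around (or cluster within) the central value.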
A review of parametric modelling techniques for EEG analysis.
Pardey, J; Roberts, S; Tarassenko, L
1996-01-01
This review provides an introduction to the use of parametric modelling techniques for time series analysis, and in particular the application of autoregressive modelling to the analysis of physiological signals such as the human electroencephalogram. The concept of signal stationarity is considered and, in the light of this, both adaptive models, and non-adaptive models employing fixed or adaptive segmentation, are discussed. For non-adaptive autoregressive models, the Yule-Walker equations are derived and the popular Levinson-Durbin and Burg algorithms are introduced. The interpretation of an autoregressive model as a recursive digital filter and its use in spectral estimation are considered, and the important issues of model stability and model complexity are discussed.
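The Levinson-Durbin recursion mentioned above can be written out directly. Here it recovers the coefficients of a known AR(2) process from its theoretical autocorrelations, which follow from the Yule-Walker equations; the coefficient values are chosen for illustration.

```python
def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for AR coefficients a[1..order]
    given autocorrelations r[0..order], via the Levinson-Durbin recursion.
    Convention: x[n] = sum_k a[k] * x[n-k] + e[n]."""
    a = [0.0] * (order + 1)
    err = r[0]                      # prediction error (innovation variance)
    for m in range(1, order + 1):
        k = (r[m] - sum(a[j] * r[m - j] for j in range(1, m))) / err
        new_a = a[:]
        new_a[m] = k                # reflection coefficient becomes a[m]
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        err *= (1.0 - k * k)
    return a[1:], err

# AR(2) with a1 = 0.5, a2 = 0.3: the Yule-Walker equations give
# r1 = a1 / (1 - a2) and r2 = a1*r1 + a2 (with r0 normalized to 1).
r0, r1 = 1.0, 0.5 / (1 - 0.3)
r2 = 0.5 * r1 + 0.3
coeffs, noise_var = levinson_durbin([r0, r1, r2], 2)
print(f"recovered AR coefficients: {coeffs[0]:.3f}, {coeffs[1]:.3f}")
```

In EEG practice the autocorrelations come from the (possibly segmented) signal itself, and the reflection coefficients computed along the way also provide the stability check the review discusses.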
Lottery spending: a non-parametric analysis.
Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody
2015-01-01
We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales.
Lottery Spending: A Non-Parametric Analysis
Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody
2015-01-01
We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699
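A distribution-free comparison of spending between two groups, in the spirit of the non-parametric methods above, can be sketched with the Mann-Whitney U statistic. The spending figures below are hypothetical, not the survey data analyzed in the paper.

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: count of pairs (a, b) with a > b,
    counting ties as 1/2. A distribution-free two-group comparison."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical monthly lottery spending ($) for two demographic groups.
group1 = [0, 0, 5, 10, 20, 40]
group2 = [0, 5, 5, 15, 25, 60]

u1 = mann_whitney_u(group1, group2)
u2 = mann_whitney_u(group2, group1)
print(f"U1 = {u1}, U2 = {u2}, U1 + U2 = {u1 + u2}")  # sums to n1*n2
```

Because U depends only on the ordering of observations, it is insensitive to the heavy right tail typical of spending data, which is exactly why non-parametric methods suit this application.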
Soil Analysis using the semi-parametric NAA technique
Zamboni, C. B.; Silveira, M. A. G.; Medina, N. H.
2007-10-26
The semi-parametric Neutron Activation Analysis technique, using Au as a flux monitor, was applied to measure element concentrations of Br, Ca, Cl, K, Mn and Na for soil characterization. The results were compared with those using the Instrumental Neutron Activation Analysis technique and they found to be compatible. The viability, advantages, and limitations of using these two analytic methodologies are discussed.
Parametric analysis of a magnetized cylindrical plasma
Ahedo, Eduardo
2009-11-15
The relevant macroscopic model, the spatial structure, and the parametric regimes of a low-pressure plasma confined by a cylinder and an axial magnetic field are discussed in the small-Debye-length limit, making use of asymptotic techniques. The plasma response is fully characterized by three dimensionless parameters, related to the electron gyroradius and the electron and ion collision mean free paths. Three regimes are identified: the unmagnetized regime, the main magnetized regime, and, for a low electron-collisionality plasma, an intermediate-magnetization regime. In the magnetized regimes, electron azimuthal inertia is shown to be a dominant phenomenon in part of the quasineutral plasma region and to set in before ion radial inertia. In the main magnetized regime, the plasma structure consists of a bulk diffusive region, a thin layer governed by electron inertia, a thinner sublayer controlled by ion inertia, and the non-neutral Debye sheath. The solution of the main inertial layer yields that the electron azimuthal energy near the wall is larger than the electron thermal energy, making electron resistivity effects non-negligible. The electron Boltzmann relation is satisfied only in the very vicinity of the Debye sheath edge. Ion collisionality effects are irrelevant in the magnetized regimes. Simple scaling laws for plasma production and for the particle and energy fluxes to the wall are derived.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
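The two-step simulation process described above can be sketched as nested loops: parametric uncertainty is drawn once per replication in the outer loop, and temporal variance is drawn each year in the inner loop. All demographic rates below are assumed values for illustration, not piping plover estimates.

```python
import math
import random

def extinction_probability(n_reps=500, n_years=50, with_param_unc=True, seed=3):
    """Two-loop PVA simulation. Outer (replication) loop: draw a growth
    rate from its sampling distribution (parametric uncertainty).
    Inner (yearly) loop: add temporal variance around that draw."""
    rng = random.Random(seed)
    mean_r, se_r, sd_temporal = 0.0, 0.05, 0.10   # assumed values
    extinct = 0
    for _ in range(n_reps):
        r_rep = rng.gauss(mean_r, se_r) if with_param_unc else mean_r
        n = 100.0                                  # initial abundance
        for _ in range(n_years):
            n *= math.exp(rng.gauss(r_rep, sd_temporal))
            if n < 2.0:                            # quasi-extinction threshold
                extinct += 1
                break
    return extinct / n_reps

p_with = extinction_probability(with_param_unc=True)
p_without = extinction_probability(with_param_unc=False)
print(f"extinction risk with parametric uncertainty:    {p_with:.3f}")
print(f"extinction risk without parametric uncertainty: {p_without:.3f}")
```

Even in this toy setting, the outer-loop draw fattens the distribution of long-run growth rates, so the estimated extinction risk is higher when parametric uncertainty is included, mirroring the paper's piping plover result.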
Fu, Chuanyun; Zhang, Yaping; Bie, Yiming; Hu, Liwei
2016-10-01
Countdown timers display the time left in the current signal phase, making drivers more ready to react to the phase change. However, previous studies have rarely explored the effect of countdown timers on drivers' brake perception-reaction time (BPRT) to the yellow light. The goal of this study was therefore to characterize and model drivers' BPRT to the yellow signal at signalized intersections with and without countdown timers. BPRT data for "first-to-stop" vehicles after yellow onset within the transitional zone were collected through on-site observation at six signalized intersections in Harbin, China. Statistical analysis showed that the observed 15th, 50th, and 85th percentile BPRTs without a countdown timer were 0.52, 0.84, and 1.26 s, respectively; with a countdown timer they were 0.32, 1.20, and 2.52 s, respectively. A log-logistic distribution appeared to best fit the BPRT without a countdown timer, while a Weibull distribution appeared to best fit the BPRT with a countdown timer. Accordingly, a log-logistic accelerated failure time (AFT) duration model was developed for drivers' BPRT without a countdown timer, whereas a Weibull AFT duration model was established for drivers' BPRT with a countdown timer. Three significant factors affecting BPRT were identified in both AFT models: yellow-onset distance from the stop line, yellow-onset approach speed, and deceleration rate. Regardless of the presence of a countdown timer, BPRT increased as yellow-onset distance to the stop line or deceleration rate increased, but decreased as yellow-onset speed increased. The impairment of drivers' BPRT due to the countdown timer appeared to increase with yellow-onset distance to the stop line or deceleration rate, but decrease with yellow-onset speed. An increase in drivers' BPRT because of a countdown timer may induce risky driving behaviors (e.g., stopping abruptly, or even violating the traffic signal), revealing a weakness of
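The distribution-fitting step behind the AFT models can be illustrated with a small sketch: fit Weibull and log-logistic candidates to a reaction-time sample and compare log-likelihoods. The sample below is simulated (the study's field data are not reproduced here), and `scipy.stats.fisk` is SciPy's name for the log-logistic distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-in for BPRT observations (seconds) with a countdown
# timer; the study reports a Weibull shape for this condition.
bprt = stats.weibull_min.rvs(c=1.5, scale=1.3, size=500, random_state=rng)

# Fit both candidates with the location fixed at zero (reaction times > 0).
weib = stats.weibull_min.fit(bprt, floc=0)
loglog = stats.fisk.fit(bprt, floc=0)   # fisk = log-logistic in SciPy

# Compare fits by total log-likelihood (higher is better).
ll_weib = stats.weibull_min.logpdf(bprt, *weib).sum()
ll_loglog = stats.fisk.logpdf(bprt, *loglog).sum()

# Percentiles analogous to the reported 15th/50th/85th BPRT summaries.
p15, p50, p85 = np.percentile(bprt, [15, 50, 85])
```

An AFT model then regresses the log of the fitted scale on covariates such as yellow-onset distance, approach speed, and deceleration rate.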
Parametric identification of a servo-hydraulic actuator for real-time hybrid simulation
NASA Astrophysics Data System (ADS)
Qian, Yili; Ou, Ge; Maghareh, Amin; Dyke, Shirley J.
2014-10-01
In a typical real-time hybrid simulation (RTHS) setup, servo-hydraulic actuators serve as interfaces between the computational and physical substructures. The time delay introduced by actuator dynamics and the complex interaction between the actuators and the specimen have detrimental effects on the stability and accuracy of RTHS. Therefore, a good understanding of servo-hydraulic actuator dynamics is a prerequisite for controller design and computational simulation of RTHS. This paper presents an easy-to-use parametric identification procedure for RTHS users to obtain reusable actuator parameters for a range of payloads. The critical parameters of a linearized servo-hydraulic actuator model are obtained optimally by a genetic algorithm (GA) from experimental data collected for various specimen mass/stiffness combinations loaded on the target actuator. The actuator parameters demonstrate a convincing convergence trend in the GA. A key feature of this parametric modeling procedure is its reusability under different testing scenarios, including different specimen mechanical properties and actuator inner-loop control gains. The models match experimental results well. The benefit of the proposed parametric identification procedure is demonstrated by (1) designing an H∞ controller with the identified system parameters that significantly improves RTHS performance; and (2) establishing an analysis and computational simulation of a servo-hydraulic system that helps researchers interpret system instability and improve the design of experiments.
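The identification loop can be sketched with an evolutionary optimizer fitting a simple actuator model to measured data. The first-order lag model, the signal shapes, and all numerical values below are illustrative assumptions; SciPy's `differential_evolution` stands in for the GA used in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import lfilter

rng = np.random.default_rng(1)
dt = 0.001

def simulate(K, tau, u):
    """Discretized first-order actuator lag x' = (K*u - x)/tau."""
    a = 1.0 - dt / tau
    b = dt * K / tau
    return lfilter([0.0, b], [1.0, -a], u)

# Synthetic 'experiment': true gain 2.0 and time constant 0.05 s play
# the role of the unknown servo-hydraulic parameters.
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t)                      # command signal
measured = simulate(2.0, 0.05, u) + rng.normal(0.0, 0.01, t.size)

def cost(p):
    # Mean squared error between model response and measurement.
    return np.mean((simulate(p[0], p[1], u) - measured) ** 2)

# Evolutionary search (a GA relative) over bounded (K, tau).
result = differential_evolution(cost, bounds=[(0.1, 10.0), (0.001, 0.5)],
                                seed=2, tol=1e-8, maxiter=300)
K_hat, tau_hat = result.x
```

Rerunning the search against data from different payloads would yield the per-payload parameter sets whose convergence the paper examines.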
Enhanced optical squeezing from a degenerate parametric amplifier via time-delayed coherent feedback
NASA Astrophysics Data System (ADS)
Német, Nikolett; Parkins, Scott
2016-08-01
A particularly simple setup is introduced to study the influence of time-delayed coherent feedback on the optical squeezing properties of the degenerate parametric amplifier. The possibility for significantly enhanced squeezing is demonstrated both on resonance and in sidebands, at a reduced pump power compared to the case without feedback. We study a broad range of operating parameters and their influence on the characteristic squeezing of the system. A classical analysis of the system dynamics reveals the connection between the feedback-modified landscape of stability and enhanced squeezing.
Two parametric voice source models and their asymptotic analysis
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2014-05-01
The paper studies the asymptotic behavior of the glottal area function near the moments of glottal opening and closing for two mathematical voice source models. It is shown that in the first model, the asymptotics of the area function obey a power law with an exponent of no less than 1. Detailed analysis makes it possible to refine these limits depending on the relative sizes of the intervals of a closed and open glottis. This work also studies another parametric model of the glottal area, which is based on a simplified physical-geometrical representation of vocal-fold vibration. This model is a special variant of the well-known two-mass model and contains five parameters: the period of the fundamental tone, the equivalent masses of the lower and upper edges of the vocal folds, the coefficient of elastic resistance of the lower vocal fold, and the delay time between the openings of the upper and lower folds. It is established that the asymptotics of the resulting glottal area function obey a power law with an exponent of 1 both at opening and at closing.
Parametric analysis of transient skin heating induced by terahertz radiation.
Zilberti, Luca; Arduino, Alessandro; Bottauscio, Oriano; Chiampi, Mario
2014-07-01
This paper investigates the effect of relevant physical parameters on transient temperature elevation induced in human tissues by electromagnetic waves in the terahertz (THz) band. The problem is defined by assuming a plane wave, which, during a limited time interval, normally impinges on the surface of a 3-layer model of the human body, causing a thermal transient. The electromagnetic equations are solved analytically, while the thermal ones are handled according to the finite element method. A parametric analysis is performed with the aim of identifying the contribution of each parameter, showing that the properties of the first skin layer (except blood flow) play a major role in the computation of the maximum temperature rise for the considered exposure situation. Final results, obtained by combining all relevant parameters together, show that the deviation from the reference solution of the maximum temperature elevation in skin is included in the coverage intervals from -30% to +10% at 0.1 THz and from -33% to +18% at 1 THz (with 95% confidence level). These data allow bounding the possible temperature increase against the spread of tissue properties that could be reasonably used for dosimetric simulations. PMID:24510310
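Only the thermal half of the problem lends itself to a compact sketch. The snippet below advances a one-dimensional explicit finite-difference model of near-surface heating under a constant absorbed surface flux; the single homogeneous layer, the absence of blood perfusion, and all property values are simplifying assumptions relative to the paper's analytical-EM plus finite-element treatment of a 3-layer body model:

```python
import numpy as np

# Illustrative skin-like properties (not the paper's dosimetric values).
k, rho, cp = 0.37, 1100.0, 3400.0   # conductivity, density, heat capacity
alpha = k / (rho * cp)              # thermal diffusivity (m^2/s)
dx, dt = 2e-5, 1e-4                 # grid spacing (m), time step (s)
assert alpha * dt / dx**2 < 0.5     # explicit-scheme stability limit

n = 200
T = np.zeros(n)                     # temperature rise above baseline (K)
q = 100.0                           # absorbed surface flux (W/m^2)
for _ in range(int(1.0 / dt)):      # 1 s of exposure
    Tn = T.copy()
    # Interior diffusion update.
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q * dx / k      # constant-flux surface boundary
    Tn[-1] = 0.0                    # deep tissue held at baseline
    T = Tn
surface_rise = T[0]
```

A parametric analysis in this spirit varies the tissue properties feeding into alpha and k and records the change in the peak surface temperature rise.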
Parametric-time coherent states for the generalized MIC-Kepler system
Uenal, Nuri
2006-12-15
In this study, we construct the parametric-time coherent states for the negative-energy states of the generalized MIC-Kepler system, in which a charged particle moves in a monopole vector potential, a Coulomb potential, and an Aharonov-Bohm potential. We transform the system into four isotropic harmonic oscillators and construct the parametric-time coherent states for these oscillators. Finally, we compactify these states into the physical-time coherent states for the generalized MIC-Kepler system.
160-Gb/s optical time division multiplexing and multicasting in parametric amplifiers.
Brès, Camille-Sophie; Wiberg, Andreas O J; Coles, James; Radic, Stojan
2008-10-13
We report the generation of an optical time-division-multiplexed single data channel at 160 Gb/s using a one-pump fiber-optic parametric amplifier, and its subsequent multicasting. A two-pump fiber-optic parametric amplifier was used to perform all-optical multicasting of the 160 Gb/s channel to four data streams. The new processing scheme combined an increase in signal extinction ratio with low-impairment multicasting using continuous-wave parametric pumps. Selective conjugation of the 160 Gb/s channel was demonstrated for the first time.
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed which studied the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal-derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal-fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier while Reference Plant 3 used a "state-of-the-art" atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the reference plants. Parametric variations included the type of coal (Montana Rosebud or Illinois No. 6), cleanup systems (hot or cold gas cleanup), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In the first task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. The second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency, and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
Deriving the Coronal Magnetic Field Using Parametric Transformation Analysis
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Rose, M. Franklin (Technical Monitor)
2001-01-01
When the plasma beta is greater than 1, the gas pressure dominates the magnetic pressure. Along a coronal magnetic field line this ratio varies from beta greater than 1 in the photosphere at the base of the field line, to beta much less than 1 in the mid-corona, to beta greater than 1 in the upper corona. Almost all magnetic field extrapolations do not, or cannot, take into account the full range of beta. They essentially assume beta much less than 1, since the full boundary conditions do not exist in the beta greater than 1 regions. We use a basic parametric representation of the magnetic field lines such that the field lines can be manipulated to match linear features in the EUV and SXR coronal images in a least-squares sense. This research employs free-form deformation mathematics to generate the associated coronal magnetic field. Our research program describes the complex magnetic field topology using Parametric Transformation Analysis (PTA), a new and innovative method that we are developing. In this technique the field lines can be viewed as being embedded in a plastic medium, in the spirit of the frozen-in-field-line concept. As the medium is deformed, the field lines are similarly deformed. The advantage of the PTA method, however, is that the field line movement represents a transformation of one magnetic field solution into another magnetic field solution. When fully implemented, this method will allow the resulting magnetic field solution to fully match the magnetic field lines with EUV/SXR coronal loops by minimizing the differences in direction and dispersion between a collection of PTA magnetic field lines and observed field lines. The derived magnetic field will then allow beta greater than 1 regions to be included, the electric currents to be calculated, and the Lorentz force to be determined. The advantage of this technique is that the solution: (1) is independent of the upper and side boundary conditions, (2) allows non
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This study presents results for an L-shaped beam for which an analytical solution was already available. Various methods to measure vibrational power flow are compared to assess their advantages and disadvantages.
Parametric and nonparametric linkage analysis: A unified multipoint approach
Kruglyak, L.; Daly, M.J.; Reeve-Daly, M.P.; Lander, E.S.
1996-06-01
In complex disease studies, it is crucial to perform multipoint linkage analysis with many markers and to use robust nonparametric methods that take account of all pedigree information. Currently available methods fall short in both regards. In this paper, we describe how to extract complete multipoint inheritance information from general pedigrees of moderate size. This information is captured in the multipoint inheritance distribution, which provides a framework for a unified approach to both parametric and nonparametric methods of linkage analysis. Specifically, the approach includes the following: (1) Rapid exact computation of multipoint LOD scores involving dozens of highly polymorphic markers, even in the presence of loops and missing data. (2) Nonparametric linkage (NPL) analysis, a powerful new approach to pedigree analysis. We show that NPL is robust to uncertainty about mode of inheritance, is much more powerful than commonly used nonparametric methods, and loses little power relative to parametric linkage analysis. NPL thus appears to be the method of choice for pedigree studies of complex traits. (3) Information-content mapping, which measures the fraction of the total inheritance information extracted by the available marker data and points out the regions in which typing additional markers is most useful. (4) Maximum-likelihood reconstruction of many-marker haplotypes, even in pedigrees with missing data. We have implemented NPL analysis, LOD-score computation, information-content mapping, and haplotype reconstruction in a new computer package, GENEHUNTER. The package allows efficient multipoint analysis of pedigree data to be performed rapidly in a single user-friendly environment. 34 refs., 9 figs., 2 tabs.
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are obtained only for predefined areas, and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Desiccant Enhanced Evaporative Air Conditioning: Parametric Analysis and Design; Preprint
Woods, J.; Kozubal, E.
2012-10-01
This paper presents a parametric analysis using a numerical model of a new concept in desiccant and evaporative air conditioning. The concept consists of two stages: a liquid desiccant dehumidifier and a dew-point evaporative cooler. Each stage consists of stacked air channel pairs separated by a plastic sheet. In the first stage, a liquid desiccant film removes moisture from the process (supply-side) air through a membrane. An evaporatively-cooled exhaust airstream on the other side of the plastic sheet cools the desiccant. The second-stage indirect evaporative cooler sensibly cools the dried process air. We analyze the tradeoff between device size and energy efficiency. This tradeoff depends strongly on process air channel thicknesses, the ratio of first-stage to second-stage area, and the second-stage exhaust air flow rate. A sensitivity analysis reiterates the importance of the process air boundary layers and suggests a need for increasing airside heat and mass transfer enhancements.
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines whether the prediction of one series can be improved by incorporating information from the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, while remaining suitable for evaluating causality. The usefulness of this measure of causality is shown in two applications. In the first, physiological, application we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
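The prediction-error comparison at the heart of Granger causality can be sketched with plain linear least squares; the paper's contribution is the nonlinear radial-basis-function extension, which is not reproduced here, and the coupled system below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two coupled series: y is driven by past x, so x should Granger-cause y.
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t-1] + 0.8 * x[t-1] + 0.1 * rng.normal()

def prediction_error(target, predictors):
    """One-step least-squares prediction error of target from lagged predictors."""
    X = np.column_stack(predictors)
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return np.mean(resid ** 2)

# Restricted model: y predicted from its own past only.
err_restricted = prediction_error(y[1:], [y[:-1]])
# Full model: y predicted from its own past and the past of x.
err_full = prediction_error(y[1:], [y[:-1], x[:-1]])
# Causality index: positive when adding x improves the prediction of y.
gc_index = np.log(err_restricted / err_full)

# Reverse direction: past y should not improve the prediction of x.
err_x_restricted = prediction_error(x[1:], [x[:-1]])
err_x_full = prediction_error(x[1:], [x[:-1], y[:-1]])
gc_reverse = np.log(err_x_restricted / err_x_full)
```

A large index for x to y and a near-zero index for the reverse direction recovers the built-in coupling; the RBF variant replaces the linear regression with a kernel expansion of the lagged variables.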
Quantum analysis of optical parametric fluorescence in the optical parametric amplification process
NASA Astrophysics Data System (ADS)
Wang, Bopeng; Zou, Xubo; Jing, Feng
2015-07-01
The temporal contrast of an ultra-intense laser pulse can be degraded by optical parametric fluorescence (OPF) in high-gain, pulse-pumped optical parametric amplification (OPA). However, to the best of our knowledge, no theory has been proposed to describe the quantum noise under such conditions from first principles. In this paper, we propose a theory based on the quantization of energy flux and a linearization method for investigating both the laser and the OPF in high-gain, pulse-pumped OPA. Under this theory, the amplification of the laser is consistent with classical nonlinear optics. Moreover, an analytical formula for the OPF energy is obtained under undepleted and quasi-monochromatic pump conditions. A simplified formula is also obtained under the assumption of a small spectral width and acceptance solid angle. Furthermore, a formula for the OPF duration is obtained for a Gaussian pump pulse. Excellent consistency is achieved between previous experiments and our theory.
Differential Mitochondrial Toxicity Screening and Multi-Parametric Data Analysis
Tsiper, Maria V.; Sturgis, Jennifer; Avramova, Larisa V.; Parakh, Shilpa; Fatig, Raymond; Juan-García, Ana; Li, Nianyu; Rajwa, Bartek; Narayanan, Padma; Qualls, C. W.; Robinson, J. Paul; Davisson, V. Jo
2012-01-01
Early evaluation of new drug entities for their potential to cause mitochondrial dysfunction is becoming an important task in drug development. Multi-parametric high-content screening (mp-HCS) of mitochondrial toxicity holds promise as a leading in vitro strategy for drug testing and safety evaluations. In this study, we have developed an mp-HCS and multi-parametric data analysis scheme for assessing cell responses to induced mitochondrial perturbation. The mp-HCS measurements are shown to be robust enough to allow quantitative comparison of biological systems with different metabolic pathways, simulated by alteration of growth media. Substitution of medium glucose with galactose sensitized cells to drug action and revealed novel response parameters. Each compound was quantitatively characterized according to induced phenotypic changes in cell morphology and functionality, measured by fluorescent biomarkers for mitochondrial activity, plasma membrane permeability, and nuclear morphology. Descriptors of drug effects were established by generation of a SCRIT (Specialized-Cell-Response-to-Induced-Toxicity) vector, consisting of normalized statistical measures of each parameter at each dose and growth condition. The dimensionality of SCRIT vectors depends on the number of parameters chosen, which in turn depends on the hypothesis being tested. Specifically, incorporation of three response parameters into SCRIT vectors enabled clustering of 84 training compounds with known pharmacological and toxicological activities according to the degree of toxicity and mitochondrial involvement. Inclusion of 6 parameters enabled the resolution of more subtle differences between compounds within a common therapeutic class; scoring enabled a ranking of statins in direct agreement with clinical outcomes. Comparison of drug-induced changes required variations in glucose for separation of mitochondrial dysfunction from other types of cytotoxicity. These results also demonstrate that the
Syndrome Surveillance Using Parametric Space-Time Clustering
KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.
2002-11-01
As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect patient privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a 'health index' of the people living in that area. As a surrogate for bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic 21 to 28 days earlier than a conventional periodic regression technique. We have also tested the approach using simulated anthrax attack data superimposed on a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
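The SPRT decision rule itself is compact enough to sketch. The example below runs Wald's test on simulated daily case counts (Poisson baseline rate versus elevated rate); the rates, error levels, and aggregation are illustrative assumptions, not the flu or anthrax data used in the report:

```python
import numpy as np
from scipy.stats import poisson

def sprt(counts, lam0=5.0, lam1=10.0, alpha=0.01, beta=0.01):
    """Wald SPRT on aggregated daily counts: H0 baseline rate lam0
    versus H1 elevated (outbreak) rate lam1."""
    upper = np.log((1 - beta) / alpha)   # cross above: accept H1
    lower = np.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for day, c in enumerate(counts, start=1):
        llr += poisson.logpmf(c, lam1) - poisson.logpmf(c, lam0)
        if llr >= upper:
            return "signal", day
        if llr <= lower:
            return "no signal", day
    return "undecided", len(counts)

rng = np.random.default_rng(4)
outbreak_counts = rng.poisson(12.0, size=30)   # true rate above lam1
decision, day = sprt(outbreak_counts)
```

Because evidence accumulates sequentially, a strong outbreak typically triggers the upper boundary within a few days, which is the property exploited for early epidemic detection; the spatial-clustering layer decides where such tests are run.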
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
Having defined and developed a structural power flow approach for the analysis of structure-borne transmission of structural vibrations, the technique is used to perform an analysis of the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the type of results obtained by the two methods. Additionally, to demonstrate the advantages of using the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each method.
2014-01-01
Background Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation and used auto-correlation of rates, local clocks, or the so called “uncorrelated relaxed clock” where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior could be strongly influenced by the prior. Results We develop a maximum likelihood method – Physher – that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the real topology contains two long branches below the root node, even when evolution is strongly clock-like. Conclusions These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/. PMID:25055743
Interactive flutter analysis and parametric study for conceptual wing design
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1995-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed on MathCad (trademark) platform for Macintosh, with integrated documentation, graphics, database and symbolic mathematics. The analysis method was based on nondimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The plots were compiled in a Vaught Corporation report from a vast database of past experiments and wind tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended Wing Body concept, proposed by McDonnell Douglas Corporation. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.
NASA Astrophysics Data System (ADS)
Wei, Sha; Han, Qinkai; Peng, Zhike; Chu, Fulei
2016-05-01
Some system parameters in mechanical systems are inherently uncertain due to uncertainties in geometric and material properties, lubrication condition, and wear. For a more reasonable dynamic analysis of a parametrically excited system, the effect of uncertain parameters should be taken into account. This paper presents a new non-probabilistic analysis method for solving the dynamic responses of parametrically excited systems under uncertainties and multi-frequency excitations. By combining the multi-dimensional harmonic balance method (MHBM) and the Chebyshev inclusion function (CIF), an interval multi-dimensional harmonic balance method (IMHBM) is obtained. To illustrate the accuracy of the proposed method, a time-varying geared system of a wind turbine with different kinds of uncertainties is demonstrated. Comparison with the results of the scanning method shows that the presented method is valid and effective for parametrically excited systems with uncertainties and multi-frequency excitations. The effects of uncertain system parameters, including uncertain mesh stiffnesses and uncertain bearing stiffnesses, on the frequency responses of the system are also discussed in detail. It is shown that the dynamic responses of the system are insensitive to the uncertain mesh stiffness and bearing stiffnesses of the planetary gear stage, whereas the uncertain bearing stiffnesses of the intermediate- and high-speed stages lead to relatively large uncertainties in the dynamic responses around resonant regions. These results provide valuable guidance for the optimal design and condition monitoring of wind turbine gearboxes.
NASA Astrophysics Data System (ADS)
Andronov, I. L.; Chinarova, L. L.
Numerical comparison of methods for periodogram analysis is carried out for the parametric modifications of the Fourier transform by Deeming T.J. (1975, Ap. Space Sci., 36, 137), Lomb N.R. (1976, Ap. Space Sci., 39, 447), and Andronov I.L. (1994, Odessa Astron. Publ., 7, 49); parametric modifications based on spline approximations of different order n and defect k by Jurkevich I. (1971, Ap. Space Sci., 13, 154; n = 0, k = 1), Marraco H.G., Muzzio J.C. (1980, P.A.S.P., 92, 700; n = 1, k = 2), and Andronov I.L. (1987, Contrib. Astron. Inst. Czechoslovak. 20, 161; n = 3, k = 1); and non-parametric modifications by Lafler J. and Kinman T.D. (1965, Ap.J.Suppl., 11, 216), Burke E.W., Rolland W.W. and Boy W.R. (1970, J.R.A.S.Canada, 64, 353), Deeming T.J. (1970, M.N.R.A.S., 147, 365), Renson P. (1978, As. Ap., 63, 125), and Dworetsky M.M. (1983, M.N.R.A.S., 203, 917). For some numerical models the values of the mean, variance, asymmetry, and excess of the test functions are determined, and the correlations between them are discussed. Analytic estimates are given of the mathematical expectation of the test function for the different methods, and of the dispersion of the test function of Lafler and Kinman (1965) and of the parametric test functions. The statistical distribution of the test functions computed for fixed data and various frequencies is significantly different from that computed for various data realizations. The histogram of the non-parametric test functions is nearly symmetric for normally distributed uncorrelated data and shows a distinctly negative asymmetry for noisy data with periodic components. The non-parametric test functions may be subdivided into two groups: those similar to that of Lafler and Kinman (1965) and those similar to that of Deeming (1970). The correlation coefficients between the test functions within each group are close to unity for a large number of data points. Conditions for significant influence of the phase difference between the data onto the test functions are
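As a concrete example of the non-parametric family compared above, a Lafler-Kinman style test function can be sketched: phase-fold the data at each trial period and sum squared differences between phase-adjacent magnitudes, normalized by the total variance, so that true periods appear as minima. The light curve below is synthetic:

```python
import numpy as np

def lafler_kinman(t, m, periods):
    """Lafler-Kinman (1965) test statistic over a grid of trial periods:
    small theta means phase-neighbouring points have similar magnitudes,
    i.e. the folded light curve is smooth."""
    thetas = []
    for p in periods:
        phase = (t / p) % 1.0
        ms = m[np.argsort(phase)]          # magnitudes in phase order
        num = np.sum(np.diff(ms) ** 2) + (ms[0] - ms[-1]) ** 2
        den = np.sum((ms - ms.mean()) ** 2)
        thetas.append(num / den)
    return np.array(thetas)

# Synthetic unevenly sampled light curve with a 3.7-unit period.
rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 100.0, 300))
true_period = 3.7
m = np.sin(2 * np.pi * t / true_period) + 0.1 * rng.normal(size=300)

periods = np.linspace(2.0, 6.0, 2000)
theta = lafler_kinman(t, m, periods)
best = periods[np.argmin(theta)]
```

The parametric methods in the comparison replace this phase-smoothness statistic with least-squares fits of trigonometric or spline models at each trial frequency.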
Parametric conditions for stability of reduced-order linear time-varying control systems
NASA Technical Reports Server (NTRS)
Ma, C. C. H.; Vidyasagar, M.
1987-01-01
Using a single framework, parametric conditions are derived which encompass those for both local and global BIBO stability of a linear multivariable discrete-time reduced-order time-varying control system. These conditions indicate that the system will be BIBO stable if the norm of the system-parameter error matrix is bounded by an l^1 function superimposed on an l^∞ function.
Parametric Analysis of a Hypersonic Inlet using Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Oliden, Daniel
For CFD validation, hypersonic flow fields are simulated and compared with experimental data specifically designed to recreate conditions encountered by hypersonic vehicles. Simulated flow fields on a cone-ogive with flare at Mach 7.2 are compared with experimental data from the NASA Ames Research Center 3.5-foot hypersonic wind tunnel. A parametric study of turbulence models is presented and concludes that the k-kl-omega transition and SST transition turbulence models give the best correlation. Downstream of the flare's shock wave, good correlation is found for all boundary layer profiles, with slight discrepancies in the static temperature near the surface. Simulated flow fields on a blunt cone with flare above Mach 10 are compared with experimental data from the CUBRC LENS hypervelocity shock tunnel. The lack of vibrational non-equilibrium calculations causes discrepancies in heat flux near the leading edge. Temperature profiles, where non-equilibrium effects are dominant, are compared with the dissociation of molecules to show the effects of dissociation on static temperature. Following the validation studies is a parametric analysis of a hypersonic inlet from Mach 6 to 20. Compressor performance is investigated for numerous cowl leading edge locations up to speeds of Mach 10. The variable cowl study showed positive trends in compressor performance parameters over a range of Mach numbers, arising from maximizing the intake of compressed flow. An interesting phenomenon due to the change in shock wave formation at different Mach numbers developed inside the cowl and had a negative influence on the total pressure recovery. Investigation of the hypersonic inlet at different altitudes is performed to study the effects of Reynolds number and, consequently, of turbulent viscous effects on compressor performance. Turbulent boundary layer separation was noted as the cause of the change in compressor performance parameters with Reynolds number. This effect would not be
Parametric analysis of a passive cyclic control device for helicopters
NASA Technical Reports Server (NTRS)
Kumagai, H.
1984-01-01
A parametric study of a passive device which provides a cyclic longitudinal control moment for a helicopter rotor was performed. The device utilizes a rotor blade tip which is structurally decoupled from the blade inboard section; this rotor configuration is generally called the Free-Tip Rotor. A two-dimensional numerical model was used to review the Constant Lift Tip Rotor, a predecessor of the current configuration, and the same model was then applied to the Passive Cyclic Control Device. The Constant Lift Tip was shown to suppress the vibratory lift loading on the tip around the azimuth and to eliminate a significant negative lift peak on the advancing tip. The Passive Cyclic Control Device showed a once-per-revolution lift oscillation with a large amplitude while minimizing the higher harmonic terms of the lift oscillation. This once-per-revolution oscillation provides the cyclic moment needed to trim the rotor longitudinally. A rotor performance analysis was performed with a three-dimensional numerical model. It indicated that the vortices shed from the junction between the tip and the inboard section have a strong influence on the tip and may severely limit the tip performance. It was also shown that the Free-Tip allows the inboard section to have a larger twist, which results in better performance.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, cost, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed is presented.
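The core relation such a simple radiator model evaluates is radiative heat rejection. The sketch below is a first-cut area sizing under the Stefan-Boltzmann law; every number (rejected power, panel and sink temperatures, emissivity, fin efficiency) is an illustrative assumption, not a Space Station Freedom design value:

```python
# Hedged sketch: back-of-envelope radiator area sizing of the kind a
# simple parametric radiator model performs. All parameter values are
# illustrative assumptions.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area(q_reject_w, t_panel_k, t_sink_k,
                  emissivity=0.85, fin_efficiency=0.9):
    """Single-sided panel area needed to reject q_reject_w by radiation."""
    flux = (fin_efficiency * emissivity * SIGMA
            * (t_panel_k**4 - t_sink_k**4))    # net W/m^2 to space
    return q_reject_w / flux

area = radiator_area(60e3, 340.0, 250.0)       # 60 kW waste heat, assumed
print(f"required radiator area ≈ {area:.0f} m^2")
```

A parametric sweep would simply call `radiator_area` over grids of panel temperature, sink temperature, and surface properties, which is why a lightweight model beats a full thermal code for preliminary trades.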
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the issue of the valuation of water by the agricultural sector in particular. To this end, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across the area under changes in public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; these are, in order of influence: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) rate of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters that can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
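The One-Factor-At-A-Time screening described above can be sketched in a few lines: perturb each parameter in turn, hold the others at baseline, and rank the relative output changes. The model function and parameter names below are illustrative stand-ins, not the actual agro-economic model:

```python
# Hedged sketch of One-Factor-At-A-Time (OFAT) sensitivity screening.
# gross_margin and the baseline values are toy stand-ins for the
# disaggregated agro-economic model of the abstract.
def gross_margin(p):
    # toy surrogate: margin driven by yield response, water supply, rainfall
    return (p["yield_response"] * p["water_supply"]
            + 0.5 * p["precipitation"]) * p["max_yield"]

baseline = {"yield_response": 1.2, "water_supply": 100.0,
            "precipitation": 40.0, "max_yield": 3.0}

def ofat_sensitivity(model, params, delta=0.10):
    """Normalized output change for a +10% change in each parameter alone."""
    f0 = model(params)
    indices = {}
    for name in params:
        perturbed = dict(params, **{name: params[name] * (1.0 + delta)})
        indices[name] = abs(model(perturbed) - f0) / (abs(f0) * delta)
    return indices

s = ofat_sensitivity(gross_margin, baseline)
for name, idx in sorted(s.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {idx:.2f}")
```

Ranking these indices is exactly how the abstract's 6 influential parameters out of 10 were isolated; a full Screening Designs study would additionally vary factors jointly to catch interactions OFAT misses.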
Energy harvesting using parametric resonant system due to time-varying damping
NASA Astrophysics Data System (ADS)
Scapolan, Matteo; Tehrani, Maryam Ghandchi; Bonisoli, Elvio
2016-10-01
In this paper, the problem of energy harvesting is considered using an electromechanical oscillator. The energy harvester is modelled as a spring-mass-damper, in which the energy dissipated in the damper can be stored rather than wasted. Previous research provided the optimum damping parameter to harvest the maximum amount of energy, taking into account the stroke limit of the device. However, the maximum harvested energy is limited to the single frequency to which the device is tuned. Active and semi-active strategies have been suggested, which increase the performance of the harvester. Recently, nonlinear damping in the form of cubic damping has been proposed to extend the dynamic range of the harvester. In this paper, a periodic time-varying damper is introduced, which results in a parametrically excited system. When the frequency of the periodic time-varying damper is twice the excitation frequency, the system's internal energy increases in proportion to the energy already stored in the system. Thus, for certain parametric damping values, the system can become unstable, and this phenomenon can be exploited for energy harvesting. The transition curves which separate the stable and unstable dynamics are derived both analytically, using the harmonic balance method, and numerically, using time simulations. The harvester is designed so that its response is close to the transition curves of the Floquet diagram, leading to a stable but resonant system. The performance of the parametric harvester is compared with the non-parametric one. It is demonstrated that both the performance and the frequency bandwidth over which energy can be harvested can be increased using time-varying damping.
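The numerical side of locating such transition curves can be sketched with Floquet theory: integrate the fundamental matrix of the periodically damped oscillator over one modulation period and inspect the Floquet multipliers. The equation and parameter values below are a generic illustration, not the paper's harvester model:

```python
# Hedged sketch: numerical Floquet stability check for an oscillator with
# periodically time-varying damping, x'' + (c0 + cp*cos(2t))*x' + x = 0,
# with the modulation at twice the natural frequency. Values illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_spectral_radius(c0, cp):
    """Largest Floquet multiplier magnitude over one period (< 1 => stable)."""
    def rhs(t, y):
        x, v = y
        return [v, -(c0 + cp * np.cos(2.0 * t)) * v - x]
    T = np.pi                                  # period of cos(2t)
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):        # columns of the identity
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)                  # monodromy matrix
    return max(abs(np.linalg.eigvals(M)))

sr0 = monodromy_spectral_radius(0.1, 0.0)      # constant damping: stable
print(f"cp = 0.0: max |multiplier| = {sr0:.4f}")
# sweeping cp and recording where the radius crosses 1 traces the
# transition curves of the Floquet diagram numerically
print(f"cp = 2.0: max |multiplier| = {monodromy_spectral_radius(0.1, 2.0):.4f}")
```

For constant damping the multiplier magnitude is exactly exp(-c0·T/2); sweeping `cp` against the unit circle reproduces, point by point, the stable/unstable boundary that the harmonic balance method gives analytically.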
SAT-Based (Parametric) Reachability for a Class of Distributed Time Petri Nets
NASA Astrophysics Data System (ADS)
Penczek, Wojciech; Półrola, Agata; Zbrzezny, Andrzej
Formal methods - among them model checking techniques - play an important role in the design and production of both systems and software. In this paper we deal with an adaptation of bounded model checking methods for timed systems, developed for timed automata, to the case of time Petri nets. We consider distributed time Petri nets and parametric reachability checking, but the approach can easily be adapted to the verification of other kinds of properties for which bounded model checking methods exist. The theoretical description is supported by experimental results generated using an extension of the model checker VerICS.
NASA Astrophysics Data System (ADS)
Kraft, Manuel; Hein, Sven M.; Lehnert, Judith; Schöll, Eckehard; Hughes, Stephen; Knorr, Andreas
2016-08-01
Quantum coherent feedback control is a measurement-free control method fully preserving quantum coherence. In this paper we show how time-delayed quantum coherent feedback can be used to control the degree of squeezing in the output field of a cavity containing a degenerate parametric oscillator. We focus on the specific situation of Pyragas-type feedback control where time-delayed signals are fed back directly into the quantum system. Our results show how time-delayed feedback can enhance or decrease the degree of squeezing as a function of time delay and feedback strength.
NASA Astrophysics Data System (ADS)
Wu, Zhishen; Xu, Bin
2003-07-01
A structural parametric identification strategy is developed, based on neural network algorithms and using time-domain dynamic macro-strain measurements from long-gage strain sensors realized with fiber optic sensing techniques such as Fiber Bragg Grating (FBG) sensors. An array of long-gage sensors is bonded to the structure to measure macro-strains reliably and accurately. With the proposed methodology, the structural stiffness parameters can be identified without any eigenvalue analysis or optimization computation. A beam model with known mass distribution is considered as the object structure. First, an emulator neural network is presented to identify the beam structure in its current state. Free-vibration macro-strain responses of the beam structure are used to train the emulator neural network. The trained emulator neural network can forecast the free-vibration macro-strain response of the beam structure with sufficient precision and quantify the difference between the free-vibration macro-strain responses of assumed structures with different structural parameters and those of the original beam structure. The root mean square (RMS) error vector is introduced to evaluate this difference, and for each assumed structure with different structural parameters the corresponding RMS error vector can be calculated. Using a training data set composed of the structural parameters and RMS error vectors, a parametric evaluation neural network is trained. Treating a beam structure as an existing structure, its stiffness can then be forecast with the trained parametric evaluation neural network. It is shown that the parametric identification strategy using macro-strain measurements from long-gage sensors has the potential to be a practical tool in health monitoring methodologies for civil engineering structures.
Rosenberg, D; Marino, R; Herbert, C; Pouquet, A
2016-01-01
We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, in a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024^3 points, with no modeling of the small scales, no forcing, and with large-scale random initial conditions for the velocity field only; altogether 65 runs are analyzed in this paper. The buoyancy Reynolds number, defined as R_B = Re Fr^2, varies from negligible values to ≈ 10^5, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes, as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seems to play a major role, in the absence of forcing, in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of the Richardson number Ri. It reaches a plateau - on which the time scales become comparable, γ ≈ 0.6 - when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards isotropization of the flow.
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The variables parameterized include operating characteristics for four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. The annual fuel consumption for selected parametric combinations is presented, and a breakeven economic analysis is given for one parametric combination. The results show that fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous inferential parametric statistical measures are calculated from…
Ressler, Johann; Dirscherl, Andreas; Grothe, Helmut; Wolf, Bernhard
2007-02-01
In many cases of bioanalytical measurement, the processing of large amounts of data, the analysis of complex signal waveforms, or sheer signal speed can overwhelm the performance of microcontrollers, analog electronic circuits or even PCs. One method to obtain results in real time is to apply a digital signal processor (DSP) to the analysis or processing of measurement data. In this paper we show how DSP-supported multiply-and-accumulate (MAC) operations, such as time/frequency transformation, pattern recognition by correlation, convolution or filter algorithms, can optimize the processing of bioanalytical data. Discrete integral calculations are applied to the acquisition of impedance values as part of multi-parametric sensor chips, to pH monitoring using light-addressable potentiometric sensors (LAPS), and to the analysis of rapidly changing signal shapes, such as action potentials of cultured neuronal networks, as examples of DSP capability.
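The MAC kernel behind these operations can be illustrated with pattern recognition by correlation: each output lag is a chain of multiply-and-accumulate steps. The spike template and noise level below are synthetic illustrations, not measured action-potential data:

```python
# Hedged sketch: pattern recognition by cross-correlation - the
# multiply-and-accumulate (MAC) workload a DSP accelerates.
# Template shape and noise level are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(1)
template = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)  # spike shape
signal = 0.2 * rng.standard_normal(200)
signal[120:140] += template          # embed the pattern at sample 120

# each lag of np.correlate is one MAC chain: sum(signal[k+i] * template[i])
scores = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(scores))
print(f"pattern detected at sample {detected}")
```

On a DSP the inner products run in hardware MAC units at one tap per cycle, which is what makes real-time detection of fast waveforms such as neuronal action potentials feasible.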
Theoretical analysis of terahertz parametric oscillator using KTiOPO4 crystal
NASA Astrophysics Data System (ADS)
Li, Zhongyang; Bing, Pibin; Yuan, Sheng
2016-08-01
A terahertz parametric oscillator (TPO) using a KTiOPO4 (KTP) crystal with a noncollinear phase-matching scheme is investigated. Frequency tuning characteristics of the terahertz wave (THz-wave), obtained by varying the phase-matching angle and the pump wavelength, are analyzed. The expression for the effective parametric gain length under the noncollinear phase-matching condition is deduced. Parametric gain and absorption characteristics of the THz-wave in KTP are theoretically simulated for the first time. The characteristics of KTP for TPO operation are compared with those of MgO:LiNbO3. The analyses indicate that KTP is more suitable than MgO:LiNbO3 for TPO.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
A process is described which enables the generation of 35 time-dependent viscous solutions for a YAV-8B Harrier in ground effect in one week. Overset grids are used to model the complex geometry of the Harrier aircraft and the interaction of its jets with the ground plane and the low-speed ambient flow. The time required to complete this parametric study is drastically reduced through the use of process automation, modern computational platforms, and parallel computing. Moreover, a dual-time-stepping algorithm is described which improves solution robustness. Unsteady flow visualization and a frequency-domain analysis are also used to identify and correlate key flow structures with the time variation of lift.
Analysis of surface parametrizations for modern photometric stereo modeling
NASA Astrophysics Data System (ADS)
Mecca, Roberto; Rodolà, Emanuele; Cremers, Daniel
2015-04-01
Three-dimensional shape recovery based on Photometric Stereo (PS) recently received a strong improvement due to new mathematical models based on ratios of partial differential irradiance equations [1]. This modern approach to PS accounts for more realistic physical effects, among them light attenuation and radial light propagation from a point light source. Since the approximation of the surface is performed with a single-step method, accurate reconstruction is prevented by sensitivity to noise. In this paper we analyse a well-known parametrization [2] of the three-dimensional surface, extending it to arbitrary auxiliary convex projection functions. Experiments on synthetic data show preliminary results where more accurate reconstruction can be achieved using a more suitable parametrization, especially in the case of noisy input images.
NASA Astrophysics Data System (ADS)
Cunningham, Robert K.; Waxman, Allen M.
1991-06-01
This is the first Annual Technical Summary of the MIT Lincoln Laboratory effort on the parametric study of diffusion-enhancement networks for spatiotemporal grouping in real-time artificial vision. Spatiotemporal grouping phenomena are examined in the context of static and time-varying imagery. Dynamics that exhibit static feature grouping on multiple scales as a function of time, and long-range apparent motion between time-varying inputs, are developed for a biologically plausible diffusion-enhancement bilayer. The architecture consists of a diffusion layer and a contrast-enhancement layer coupled by feedforward and feedback connections; input is provided by a separate feature-extracting layer. The model is cast as an analog circuit realizable in VLSI, the parameters of which are selected to satisfy a psychophysical database on apparent motion. Specific topics include: neural networks, astrocyte glial networks, diffusion enhancement, long-range apparent motion, spatiotemporal grouping dynamics, and interference suppression.
Parametric analysis of a thermionic space nuclear power system
NASA Technical Reports Server (NTRS)
Strohmayer, W. H.; Van Hagan, T. H.
1987-01-01
Key parameters in the design of a thermionic space nuclear power system are identified and analyzed in various system tradeoffs. The results are referenced to the thermionic system currently being studied for the SP-100 program. The SP-100 requirements provide definitive guidelines with respect to system optimization, the primary ones being the system mass limit of 3000 kg, the system volume constraint of one-third of the Space Shuttle cargo bay, and the system lifetime of seven years. Many parametric influences are described, and the methods used to optimize the system design in the context of these requirements are indicated. Considerable design flexibility is demonstrated.
Efficient parametric analysis of the chemical master equation through model order reduction
2012-01-01
Background: Stochastic biochemical reaction networks are commonly modelled by the chemical master equation, and can be simulated as first-order linear differential equations through a finite state projection. Due to the very high state-space dimension of these equations, numerical simulations are computationally expensive. This is a particular problem for analysis tasks requiring repeated simulations for different parameter values. Such tasks are computationally expensive to the point of infeasibility with the chemical master equation. Results: In this article, we apply parametric model order reduction techniques in order to construct accurate low-dimensional parametric models of the chemical master equation. These surrogate models can be used in various parametric analysis tasks such as identifiability analysis, parameter estimation, or sensitivity analysis. As biological examples, we consider two models for gene regulation networks, a bistable switch and a network displaying stochastic oscillations. Conclusions: The results show that parametric model reduction yields efficient models of stochastic biochemical reaction networks, and that these models can be useful for systems biology applications involving parametric analysis problems such as parameter exploration, optimization, estimation or sensitivity analysis. PMID:22748204
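The mechanism of projection-based parametric model order reduction can be sketched on a generic linear system dx/dt = A(θ)x of master-equation type. The tridiagonal generator below is an illustrative birth-death-style stand-in, not one of the paper's gene-network models, and the POD/Galerkin projection is one common reduction technique rather than necessarily the one the authors used:

```python
# Hedged sketch: POD/Galerkin parametric model order reduction for
# dx/dt = A(theta) x. The generator A is a toy birth-death chain standing
# in for a finite state projection of the chemical master equation.
import numpy as np
from scipy.linalg import expm

n = 60
def A(theta):
    # tridiagonal generator: birth rate theta, unit death rate;
    # boundary entries chosen so every column sums to zero
    M = np.diag(-(theta + 1.0) * np.ones(n))
    M += np.diag(theta * np.ones(n - 1), -1) + np.diag(np.ones(n - 1), 1)
    M[0, 0] = -theta
    M[-1, -1] = -1.0
    return M

x0 = np.zeros(n); x0[0] = 1.0                  # start in the empty state
times = np.linspace(0.0, 5.0, 21)

# snapshots over a training grid of parameter values
snaps = np.column_stack([expm(A(th) * t) @ x0
                         for th in (0.5, 1.0, 2.0) for t in times])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1 - 1e-10)) + 1
V = U[:, :r]                                   # POD basis

def reduced_solution(theta, t):
    Ar = V.T @ A(theta) @ V                    # r x r reduced operator
    return V @ (expm(Ar * t) @ (V.T @ x0))

x_full = expm(A(1.0) * 4.0) @ x0
x_red = reduced_solution(1.0, 4.0)
err = np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full)
print(f"reduced dimension r = {r} (of {n}), relative error = {err:.2e}")
```

The payoff for parametric analysis is that each new parameter value only requires assembling and exponentiating the small r×r operator, so parameter sweeps, estimation, and sensitivity loops become cheap.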
Inverse synthetic aperture radar processing using parametric time-frequency estimators Phase I
Candy, J.V., LLNL
1997-12-31
This report summarizes the work performed for the Office of the Chief of Naval Research (ONR) during the period of 1 September 1997 through 31 December 1997. The primary objective of this research was to develop an alternative, recursive-in-time time-frequency approach to be applied to the Inverse Synthetic Aperture Radar (ISAR) imaging problem discussed subsequently. Our short term (Phase I) goals were to: 1. Develop an ISAR stepped-frequency waveform (SFWF) radar simulator based on a point-scatterer vehicular target model incorporating both translational and rotational motion; 2. Develop a parametric, recursive-in-time approach to the ISAR target imaging problem; 3. Apply the standard time-frequency short-term Fourier transform (STFT) estimator, initially to a synthesized data set; and 4. Initiate the development of the recursive algorithm. We achieved all of these goals during Phase I of the project and plan to complete the overall development, application and comparison of the parametric approach to other time-frequency estimators (STFT, etc.) on our synthesized vehicular data sets during the next phase of funding. It should also be noted that we developed a batch minimum-variance translational motion compensation (TMC) algorithm to estimate the radial components of target motion (see Section IV). This algorithm is easily extended to a recursive solution and will probably become part of the overall recursive processing approach to solve the ISAR imaging problem. Our goals for the continued effort are to: 1. Develop and extend a complex, recursive-in-time, time-frequency parameter estimator based on the recursive prediction error method (RPEM) using the underlying Gauss-Newton algorithms. 2. Apply the complex RPEM algorithm to synthesized ISAR data using the above simulator. 3. Compare the performance of the proposed algorithm to standard time-frequency estimators applied to the same data sets.
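The baseline STFT estimator named in goal 3 can be sketched as follows, with a synthetic linear chirp standing in for the simulated ISAR returns (the sample rate, sweep, and window length are illustrative choices, not the report's radar parameters):

```python
# Hedged sketch: the short-term Fourier transform (STFT) baseline
# time-frequency estimator, applied to a synthetic chirp standing in
# for simulated ISAR data. All signal parameters are illustrative.
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
# linear chirp: instantaneous frequency sweeps 50 Hz -> 200 Hz over 2 s
x = np.sin(2 * np.pi * (50.0 * t + (150.0 / (2 * 2.0)) * t**2))

f, tau, Z = stft(x, fs=fs, nperseg=256)
ridge = f[np.abs(Z).argmax(axis=0)]      # dominant frequency per time slice

# skip the zero-padded edge slices when reading off the ridge
print(f"early ridge ≈ {ridge[1]:.0f} Hz, late ridge ≈ {ridge[-2]:.0f} Hz")
```

The ridge of the STFT magnitude tracks the chirp's rising instantaneous frequency; a recursive-in-time parametric estimator such as the RPEM aims to produce the same track sample by sample, without the block latency and resolution trade-off of the windowed transform.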
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images and then performing Patlak analysis on the time activity curves (TACs) pixel by pixel. In direct reconstruction, Patlak parametric images are estimated directly from the raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating lesion detectability have been derived and applied to the selection of the regularization parameter value that maximizes detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreement between the theoretical predictions and the Monte Carlo results is observed. Both the theoretical predictions and the Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with conventional static PET reconstruction.
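The pixel-by-pixel Patlak analysis used in the indirect route is a linear fit of normalized tissue activity against "Patlak time", whose slope is the net influx rate Ki. The input function and tissue parameters below are synthetic illustrations, not values from the paper:

```python
# Hedged sketch: Patlak graphical analysis of a single time activity curve,
# the per-pixel step in the "indirect" route described above. The plasma
# input Cp and parameters Ki, Vb are synthetic illustrations.
import numpy as np

t = np.linspace(0.1, 60.0, 120)                 # minutes
Cp = t * np.exp(-t / 4.0) + 0.05                # synthetic plasma input
int_Cp = np.concatenate(
    ([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))

Ki_true, Vb = 0.03, 0.15
Ct = Ki_true * int_Cp + Vb * Cp                 # irreversible Patlak model

x = int_Cp / Cp                                 # "Patlak time"
y = Ct / Cp
late = t > 20.0                                 # fit the linear tail only
Ki_est, intercept = np.polyfit(x[late], y[late], 1)
print(f"Ki true = {Ki_true}, estimated = {Ki_est:.4f}")
```

Direct reconstruction embeds this same linear model in the sinogram-domain likelihood instead of fitting reconstructed TACs, which is why its noise properties, and hence the CHO detectability, differ from the indirect route.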
Parametric Studies of Square Solar Sails Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Sleight, David W.; Muheim, Danniella M.
2004-01-01
Parametric studies are performed on two generic square solar sail designs to identify parameters of interest. The studies are performed on systems-level models of full-scale solar sails, include geometric nonlinearity and inertia relief, and use a Newton-Raphson scheme to apply sail pre-tensioning and solar pressure. Computational strategies and difficulties encountered during the analyses are also addressed. The purpose of this paper is not to compare the benefits of one sail design over the other. Instead, the results of the parametric studies may be used to identify general response trends, and areas of potential nonlinear structural interactions for future studies. The effects of sail size, sail membrane pre-stress, sail membrane thickness, and boom stiffness on the sail membrane and boom deformations, boom loads, and vibration frequencies are studied. Over the range of parameters studied, the maximum sail deflection and boom deformations are a nonlinear function of the sail properties. In general, the vibration frequencies and modes are closely spaced. For some vibration mode shapes, local deformation patterns that dominate the response are identified. These localized patterns are attributed to the presence of negative stresses in the sail membrane that are artifacts of the assumption of ignoring the effects of wrinkling in the modeling process, and are not believed to be physically meaningful. Over the range of parameters studied, several regions of potential nonlinear modal interaction are identified.
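The Newton-Raphson scheme mentioned above for applying pre-tension and pressure loads can be illustrated on a toy one-degree-of-freedom nonlinear stiffness equation; the hardening-spring coefficients are illustrative stand-ins, not values from the sail finite element models:

```python
# Hedged sketch of the Newton-Raphson iteration used to solve nonlinear
# static equilibrium, here on a toy 1-DOF hardening spring rather than
# the actual sail finite element model. Coefficients are illustrative.
def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Solve residual(u) = 0 by Newton-Raphson iteration."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u
        u -= r / jacobian(u)            # tangent-stiffness update
    raise RuntimeError("Newton-Raphson did not converge")

# membrane-like hardening spring under load f: k*u + k3*u**3 = f
k, k3, f = 2.0, 5.0, 1.0
u = newton_raphson(lambda u: k * u + k3 * u**3 - f,
                   lambda u: k + 3.0 * k3 * u**2, u0=0.5)
print(f"equilibrium displacement u = {u:.6f}")
```

In the full finite element setting the scalar Jacobian becomes the tangent stiffness matrix, and the load f is stepped up incrementally so each Newton solve starts from the previous converged state.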
Non-parametric estimation of seasonal variations in GNSS-derived time series
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Bogusz, Janusz; Klos, Anna
2015-04-01
The seasonal variations in a GNSS station's position may arise from geophysical excitations, thermal changes combined with hydrodynamics, or various errors which, when superimposed, produce seasonal oscillations not entirely of real geodynamical origin but which still have to be included in time series modelling. These variations, with different periods in the frequency band from the Chandler period up to quarter-annual ones, all affect the reliability of a permanent station's velocity, which in turn strictly influences the quality of kinematic reference frames. As shown before by a number of authors, the annual (dominant) sine curve has an amplitude and phase that both change in time for different reasons. In this research we focused on the determination of annual changes in GNSS-derived time series of the North, East and Up components. We used the daily position changes from the PPP (Precise Point Positioning) solution obtained by JPL (Jet Propulsion Laboratory), processed in the GIPSY-OASIS software. We analyzed more than 140 globally distributed IGS stations with a minimum data length of 3 years; the longest time series were up to 17 years long (1996-2014). Each of the topocentric time series (North, East and Up) was divided into years (from January to December), then the observations gathered on the same days of the year were stacked and weighted medians obtained for all of them, such that each time series is represented by a matrix of size 365×n, where n is the data length. In this way we obtained the median annual signal for each of the analyzed stations, which was then decomposed into different frequency bands using wavelet decomposition with the Meyer wavelet. We assumed 7 levels of decomposition, with the annual curve as the last approximation. The signal approximations allowed us to obtain the seasonal peaks that prevail in the North, East and Up data for globally distributed stations. The analysis of annual curves, by means of non-parametric estimation
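The day-of-year stacking step can be sketched directly: reshape the daily series into one row per year and take a median down each column. The synthetic series below (annual sine plus noise, with 365-day years for simplicity) stands in for a real GNSS Up component, and a plain median replaces the paper's weighted median:

```python
# Hedged sketch: stacking daily positions by day of year and taking a
# median across years to extract the annual signal. The series is a
# synthetic stand-in; a plain median replaces the weighted median.
import numpy as np

rng = np.random.default_rng(2)
n_years, doy = 8, np.arange(365)
annual_true = 3.0 * np.sin(2 * np.pi * doy / 365.0)   # mm, true annual term

series = (np.tile(annual_true, n_years)
          + 1.5 * rng.standard_normal(365 * n_years))  # daily Up residuals

stacked = series.reshape(n_years, 365)       # one row per calendar year
median_annual = np.median(stacked, axis=0)   # robust stacked annual signal

rms_single = np.std(series[:365] - annual_true)
rms_stacked = np.std(median_annual - annual_true)
print(f"residual RMS: single year {rms_single:.2f} mm"
      f" -> stacked {rms_stacked:.2f} mm")
```

The stacked median is what would then be fed to the Meyer-wavelet decomposition; its robustness to outliers is the reason a median, rather than a mean, is used across years.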
Real-time tuning of a double quantum dot using a Josephson parametric amplifier
NASA Astrophysics Data System (ADS)
Stehlik, J.; Liu, Y.-Y.; Quintana, C. M.; Eichler, C.; Hartke, T. R.; Petta, J. R.
Josephson parametric amplifiers (JPAs) have enabled advances in readout of quantum systems. Here we demonstrate JPA-assisted readout of a cavity-coupled double quantum dot (DQD). Utilizing a JPA we improve the signal-to-noise ratio (SNR) by a factor of 2000 compared to the situation with the parametric amplifier turned off. At an interdot charge transition we achieve a SNR of 76 (19 dB) with an integration time τ = 400 ns, which is limited by the linewidth of our cavity. By measuring the SNR as a function of τ we extract an equivalent charge sensitivity of 8 ×10-5 e /√{ Hz} . We develop a dual-gate-voltage rastering scheme that allows us to acquire a DQD charge stability diagram in just 20 ms. Such rapid data acquisition rates enable device tuning in live ``video-mode,'' where the results of parameter changes are immediately displayed. Live tuning allows the DQD confinement potential to be rapidly tuned, a capability that will become increasingly important as semiconductor spin qubits are scaled to a larger number of dots. Research is supported by the Packard Foundation, ARO Grant No. W911NF-15-1-0149, DARPA QuEST Grant No. HR0011-09-1-0007, and the NSF (Grants No. DMR-1409556 and DMR-1420541).
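As a quick sanity check of the quoted figures (a sketch only; the linear scaling of power SNR with integration time is a textbook white-noise assumption, not a claim taken from this paper):

```python
import math

# The abstract quotes "a SNR of 76 (19 dB)", i.e. a power ratio in decibels.
snr = 76.0
snr_db = 10 * math.log10(snr)        # ~18.8 dB, rounded to 19 dB in the text

# Under white noise, power SNR grows linearly with integration time tau,
# so the SNR expected at a shorter integration time can be extrapolated.
tau = 400e-9                          # 400 ns, as quoted
snr_at_100ns = snr * (100e-9 / tau)   # quarter the integration time
```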
Parametric analysis of EMP induced overvoltages on power lines
Millard, D.P.; Meliopoulos, A.P. ); Cokkinides, G.J. )
1988-07-01
This paper presents parametric results of EMP induced overvoltages on overhead transmission lines. The results have been obtained with an analytical model of an overhead transmission line which accounts for (1) frequency dependent characteristics of lines, (2) EMP coupling to the line conductors, (3) EMP coupling to tower structures, and (4) the grounding structures of the transmission towers. The results indicate that shield or neutral conductors and transmission tower grounding drastically reduce the EMP induced overvoltages on transmission towers. For distribution overhead circuits, the EMP induced overvoltages may be above the basic insulation level. For transmission circuits, it is possible to select design parameters such as to assure that EMP induced overvoltages are below the insulation level of the line.
NASA Astrophysics Data System (ADS)
Prasad, Narasimha Srikantaiah
parametric interaction is analytically formulated and experimentally demonstrated using Nd:MgO:LiNbO_3. The results obtained form a sound basis for subsequent analysis of parametric interaction driven by a pump radiation that is generated internally in the same crystal. Using laser theory and the principles of optical parametric interaction, the theory of self-pumped optical parametric interaction is formulated. This encompasses the requirements of an interaction medium, laser pump generation, Q-switching, cavity analysis, and conditions for parametric interaction. Driven by an internally generated laser pump, the specific processes of optical parametric amplification, optical parametric oscillation, and frequency up-conversion are explored. In this study, novel tuning techniques are considered and the spectral performance characteristics of these devices are presented. The design architectures of self-pumped OPO, OPA, and frequency up-converter devices using Nd:MgO:LiNbO_3 crystals are described. It is envisaged that self-pumped parametric devices can outperform present-day intra-cavity devices, which are bulky and expensive.
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
NASA Astrophysics Data System (ADS)
Ruiz Dominguez, C.; Kachenoura, N.; DeCesare, A.; Delouche, A.; Lim, P.; Gérard, O.; Herment, A.; Diebold, B.; Frouin, F.
2005-07-01
The computerized study of the regional contraction of the left ventricle has undergone numerous developments, particularly in relation to echocardiography. A new method, parametric analysis of main motion (PAMM), is proposed in order to synthesize the information contained in a cine loop of images in parametric images. PAMM determines, for the intensity variation time curves (IVTC) observed in each pixel, two amplitude coefficients characterizing the continuous component and the alternating component; the variable component is generated from a mother curve by introducing a time shift coefficient and a scale coefficient. Two approaches, a PAMM data driven and a PAMM model driven (simpler and faster), are proposed. On the basis of the four coefficients, an amplitude image and an image of mean contraction time are synthesized and interpreted by a cardiologist. In all cases, both PAMM methods allow better IVTC adjustment than the other methods of parametric imaging used in echocardiography. A preliminary database comprising 70 segments is scored and compared with the visual analysis, taken from a consensus of two expert interpreters. The levels of absolute and relative concordance are 79% and 97%. PAMM model driven is a promising method for the rapid detection of abnormalities in left ventricle contraction.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
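A common form of such a non-cost-parameter model is a power-law cost-estimating relationship (CER), cost = a * x^b, fitted in log space. The sketch below is illustrative only: the driver (dry mass), coefficients, and data are made up, not taken from the paper.

```python
import numpy as np

def fit_cer(x, cost):
    """Fit a power-law CER, cost = a * x**b, by least squares in log space."""
    b, log_a = np.polyfit(np.log(x), np.log(cost), 1)
    return np.exp(log_a), b

# Toy data: cost driven by dry mass with exponent 0.7 (hypothetical values).
mass = np.array([100.0, 250.0, 500.0, 1000.0])
cost = 3.0 * mass ** 0.7
a, b = fit_cer(mass, cost)
```

An exponent b below 1 encodes the economy of scale that makes such CERs useful early in design, before detailed cost data exist.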
NASA Astrophysics Data System (ADS)
Wei, Dang; Qing, Liao; Peng-Cheng, Mao; Hong-Bing, Fu; Yu-Xiang, Weng
2016-05-01
Femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) is a versatile technique with advantages of high sensitivity, broad detection bandwidth, and intrinsic spectrum correction function. These advantages should benefit the study of coherent emission, such as measurement of lasing dynamics. In this letter, the FNOPAS was used to trace the lasing process in Rhodamine 6G (R6G) solution and organic semiconductor nano-wires. High-quality transient emission spectra and lasing dynamic traces were acquired, which demonstrates the applicability of FNOPAS in the study of lasing dynamics. Our work extends the application scope of the FNOPAS technique. Project supported by the National Natural Science Foundation of China (Grant Nos. 20925313 and 21503066), the Innovation Program of Chinese Academy of Sciences (Grant No. KJCX2-YW-W25), the Postdoctoral Project of Hebei University, China, and the Project of Science and Technology Bureau of Baoding City, China (Grant No. 15ZG029).
Network of time-multiplexed optical parametric oscillators as a coherent Ising machine
NASA Astrophysics Data System (ADS)
Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa
2014-12-01
Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems—Ising machines—capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented with above-threshold binary phases of the OPOs and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small non-deterministic polynomial time-hard problem on a 4-OPO Ising machine and in 1,000 runs no computational error was detected.
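For a problem as small as the 4-spin instance mentioned above, the ground state the OPO network must find can be checked by exhaustive enumeration; the couplings below are an illustrative frustrated example, not the instance from the paper. The exponential cost of this brute-force search is exactly what motivates physical Ising machines.

```python
import itertools

def ising_ground_state(J, n):
    """Exhaustively minimize H(s) = -sum_{i<j} J[i][j]*s_i*s_j, s_i in {-1,+1}."""
    best_s, best_e = None, float("inf")
    for s in itertools.product((-1, 1), repeat=n):
        e = -sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# 4 spins with uniform antiferromagnetic couplings: frustrated, since not
# every pair can be anti-aligned; the ground states are the balanced ones.
n = 4
J = [[-1.0] * n for _ in range(n)]
state, energy = ising_ground_state(J, n)
```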
Lunar lander configuration study and parametric performance analysis
NASA Technical Reports Server (NTRS)
Donahue, Benjamin B.; Fowler, C. R.
1993-01-01
Future Lunar exploration plans will call for delivery of significant amounts of cargo to provide for crew habitation, surface transportation, and scientific exploration activities. Minimization of costly surface-based infrastructure is in large part directly related to the design of the cargo delivery/landing craft. This study focused on evaluating Lunar lander concepts from a logistics-oriented perspective; it outlines the approach used in the development of a preferred configuration, sets forth the benefits derived from its utilization, and describes the missions and systems considered. Results indicate that only direct-to-surface downloading of payloads provides for unassisted cargo removal operations imperative to efficient and low-risk site buildup, including the emplacement of Space Station-derivative surface habitat modules; immediate cargo jettison for both descent abort and emergency surface ascent, essential to piloted missions carrying cargo; and short habitat egress/ingress paths necessary for productive surface work tours by crew members carrying hand-held experiments, tools and other bulky articles. By accommodating cargo in a position underneath the vehicle's structural frame, the landing craft described herein eliminate altogether the necessity for dedicated surface-based off-loading vehicles, the operations and maintenance associated with their use, and the precipitous ladder climbs to and from the surface that are inherent to traditional designs. Parametric evaluations illustrate performance and mass variation with respect to mission requirements.
NASA Technical Reports Server (NTRS)
Marston, C. H.; Alyea, F. N.; Bender, D. J.; Davis, L. K.; Dellinger, T. C.; Hnat, J. G.; Komito, E. H.; Peterson, C. A.; Rogers, D. A.; Roman, A. J.
1980-01-01
The performance and cost of moderate technology coal-fired open cycle MHD/steam power plant designs, which can be expected to require a shorter development time and have a lower development cost than previously considered mature OCMHD/steam plants, were determined. Three base cases were considered: an indirectly-fired high temperature air heater (HTAH) subsystem delivering air at 2700 F, fired by a state-of-the-art atmospheric pressure gasifier; and a case in which the HTAH subsystem was deleted and oxygen enrichment was used to obtain the requisite MHD combustion temperature. Coal pile to bus bar efficiencies in base case 1 ranged from 41.4% to 42.9%, and its cost of electricity (COE) was the highest of the three base cases. For base case 2 the efficiency range was 42.0% to 45.6%, and COE was lowest. For base case 3 the efficiency range was 42.9% to 44.4%, and COE was intermediate. The best parametric cases in base cases 2 and 3 are recommended for conceptual design. The eventual choice between these approaches depends on further evaluation of the tradeoffs among HTAH development risk, O2 plant integration, and further refinements of comparative costs.
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time-specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when measurement errors are present, and in this article we show that its use may, in certain cases, lead to referral patterns that vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment of all subjects.
Reluctance network analysis of an orthogonal-core type parametric induction motor
Tajima, Katsubumi; Sato, Kohei; Komukai, Toshihiko; Ichinokura, Osamu
1999-09-01
In this paper, an analytical method of an orthogonal-core type parametric induction motor is proposed, based on a reluctance network model of the stator. The model is derived by a similar technique applied to an orthogonal-core transformer. Using this model the parametric oscillation characteristic of the motor, without a rotor, is computed. The simulation results agree well with the experiments. It is obvious that the analytical model of the stator presented here is proper for analysis of the motor and that, by use of this model and suitable analytical model of the rotor, the motor characteristics can be analyzed.
Analysis of 808nm centered optical parametric chirped pulse amplifier based on DKDP crystals
NASA Astrophysics Data System (ADS)
Sun, Meizhi; Cui, Zijian; Kang, Jun; Zhang, Yanli; Zhang, Junyong; Cui, Ying; Xie, Xinglong; Liu, Cheng; Liu, Dean; Zhu, Jianqiang; Lin, Zunqi
2015-08-01
The non-collinear phase-matching in a Potassium Dideuterium Phosphate (DKDP) crystal is analyzed in detail for a signal pulse centered at 808 nm and a pump pulse at 526.5 nm. By numerical analysis, parametric bandwidths are presented for DKDP crystals of different deuteration levels. In particular, for DKDP crystals of 95% deuteration level, the optimal non-collinear angles, phase-matching angles, parametric bandwidths, walk-off angles, acceptance angles, efficiency coefficients, gain and gain bandwidths are provided. An optical parametric chirped pulse amplifier based on a DKDP crystal is designed and its output characteristics are simulated with the OPA coupled-wave equations for further discussion. It is concluded that DKDP crystals with deuteration levels higher than 90% can be utilized in ultra-short high power laser systems with compressed pulses broader than 30 fs. The disadvantage is that the acceptance angles are small, increasing the difficulty of engineering adjustment.
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
A Comparison of Parametric and Nonparametric Approaches to Item Analysis for Multiple-Choice Tests
ERIC Educational Resources Information Center
Lei, Pui-Wa; Dunbar, Stephen B.; Kolen, Michael J.
2004-01-01
This study compares the parametric multiple-choice model and the nonparametric kernel smoothing approach to estimating option characteristic functions (OCCs) using an empirical criterion, the stability of curve estimates over occasions that represents random error. The potential utility of graphical OCCs in item analysis was illustrated with…
Augustine, C.
2013-10-01
Parametric analysis of the factors controlling the costs of sedimentary geothermal systems was carried out using a modified version of the Geothermal Electricity Technology Evaluation Model (GETEM). The sedimentary system modeled assumed production from and injection into a single sedimentary formation.
A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2003-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely the Regier number and the Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.
Feedback interventions and driving speed: A parametric and comparative analysis
Houten, Ron Van; Nau, Paul A.
1983-01-01
Five experiments were conducted to assess the effects of several variables on the efficacy of feedback in reducing driving speed. Experiment 1 systematically varied the criterion used to define speeding, and results showed that the use of a lenient criterion (20 km/hr over the speed limit), which allowed for the posting of high percentages of drivers not speeding, was more effective in reducing speeding than the use of a stringent criterion (10 km/hr over the speed limit). In Experiment 2 an analysis revealed that posting feedback reduced speeding on a limited access highway and the effects persisted to some degree up to 6 km. Experiments 3 and 4 compared the effectiveness of an unmanned parked police vehicle (Experiment 3) and a police air patrol speeding program (Experiment 4) with the feedback sign and determined whether the presence of either of these enforcement variables could potentiate the efficacy of the sign. The results of both experiments demonstrated that although the two enforcement programs initially produced larger effects than the feedback sign, the magnitude of their effect attenuated over time. Experiment 5 compared the effectiveness of a traditional enforcement program with a warning program which included handing out a flier providing feedback on the number and types of accidents occurring on the road during the past year. This experiment demonstrated that the warning program produced a marked reduction in speeding and the traditional enforcement program did not. Furthermore, the warning program and a feedback sign together produced an even greater reduction in speeding than either alone. PMID:16795666
A parametric study of nonlinear seismic response analysis of transmission line structures.
Tian, Li; Wang, Yanming; Yi, Zhenhua; Qian, Hui
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled by cable elements which account for the nonlinearity of the cable, based on a real project. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time history analysis method. The results show that multicomponent seismic excitations should be considered, but the correlations among multicomponent ground motions can be neglected. The wave passage effect has a significant influence on the responses of the cables. A change in the degree of coherency loss has little influence on the response of the cables, but the responses of the cables are affected significantly by the presence of coherency loss itself. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should therefore be considered in the seismic design of transmission line structures.
Haque, Md Mazharul; Washington, Simon
2014-01-01
The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q advanced driving simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator under three phone conditions: baseline (no phone conversation), hands-free, and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21-26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated-measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best fitting model and identified four significant variables influencing the reaction times: phone condition, driver's age, license type (provisional license holder or not), and self-reported frequency of handheld phone use while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open license holders. A reduction in the ability to detect traffic events in the periphery whilst distracted
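In an AFT model, covariates multiply survival (here, reaction) time directly, so "more than 40% longer" corresponds to an acceleration factor of about 1.4. The simulation below is a sketch of that mechanism with made-up coefficients, not the fitted model from the study (the gamma frailty term is omitted for simplicity).

```python
import numpy as np

rng = np.random.default_rng(1)

# Weibull AFT: log T = b0 + b1*distracted + sigma*G, with G standard
# Gumbel(min), so being distracted multiplies reaction time by exp(b1).
n = 20000
distracted = rng.integers(0, 2, n)            # 0 = baseline, 1 = on phone
b0, b1, sigma = np.log(1.0), np.log(1.4), 0.3  # illustrative values only
g = np.log(-np.log(rng.uniform(size=n)))       # standard Gumbel(min) draws
t = np.exp(b0 + b1 * distracted + sigma * g)   # simulated reaction times (s)

# Ratio of median reaction times recovers the acceleration factor exp(b1).
ratio = np.median(t[distracted == 1]) / np.median(t[distracted == 0])
```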
NASA Astrophysics Data System (ADS)
David, William I. F.; Evans, John S. O.
The rapidity with which powder diffraction data may be collected, not only at neutron and X-ray synchrotron facilities but also in the laboratory, means that the collection of a single diffraction pattern is now the exception rather than the rule. Many experiments involve the collection of hundreds and perhaps many thousands of datasets where a parameter such as temperature or pressure is varied or where time is the variable and life-cycle, synthesis or decomposition processes are monitored or three-dimensional space is scanned and the three-dimensional internal structure of an object is elucidated. In this paper, the origins of parametric diffraction are discussed and the techniques and challenges of parametric powder diffraction analysis are presented. The first parametric measurements were performed around 50 years ago with the development of a modified Guinier camera but it was the automation afforded by neutron diffraction combined with increases in computer speed and memory that established parametric diffraction on a strong footing initially at the ILL, Grenoble in France. The theoretical parameterisation of quantities such as lattice constants and atomic displacement parameters will be discussed and selected examples of parametric diffraction over the past 20 years will be reviewed that highlight the power of the technique.
NASA Astrophysics Data System (ADS)
Zhang, Lin; Zhang, Weiping
2016-10-01
A variety of dynamics in nature and society can be approximately treated as a driven and damped parametric oscillator. An intensive investigation of this time-dependent model from an algebraic point of view provides a consistent method to resolve the classical dynamics and the quantum evolution in order to understand the time-dependent phenomena that occur not only in the macroscopic classical scale for the synchronized behaviors but also in the microscopic quantum scale for a coherent state evolution. By using a Floquet U-transformation on a general time-dependent quadratic Hamiltonian, we exactly solve the dynamic behaviors of a driven and damped parametric oscillator to obtain the optimal solutions by means of invariant parameters of Ks to combine with Lewis-Riesenfeld invariant method. This approach can discriminate the external dynamics from the internal evolution of a wave packet by producing independent parametric equations that dramatically facilitate the parametric control on the quantum state evolution in a dissipative system. In order to show the advantages of this method, several time-dependent models proposed in the quantum control field are analyzed in detail.
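The classical side of such a driven and damped parametric oscillator is easy to reproduce numerically; the sketch below integrates the standard equation x'' + γx' + ω0²(1 + ε cos ωp t)x = F cos ωd t with a fixed-step RK4 scheme. All parameter values are illustrative (ε is chosen below the parametric-instability threshold), and this is not the algebraic method of the paper.

```python
import numpy as np

def rk4(f, y0, t):
    """Fixed-step classical Runge-Kutta integration of y' = f(t, y)."""
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        k1 = f(t[k], y[k])
        k2 = f(t[k] + h / 2, y[k] + h / 2 * k1)
        k3 = f(t[k] + h / 2, y[k] + h / 2 * k2)
        k4 = f(t[k] + h, y[k] + h * k3)
        y[k + 1] = y[k] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Illustrative parameters: pump at twice the natural frequency, weak pumping.
gamma, w0, eps, wp, F, wd = 0.1, 1.0, 0.1, 2.0, 0.5, 1.0

def rhs(t, y):
    x, v = y
    a = F * np.cos(wd * t) - gamma * v - w0**2 * (1 + eps * np.cos(wp * t)) * x
    return np.array([v, a])

t = np.linspace(0.0, 100.0, 20001)
sol = rk4(rhs, np.array([0.0, 0.0]), t)   # columns: x(t), x'(t)
```

Because the damping exceeds the parametric gain here, the response settles to a bounded driven oscillation rather than growing exponentially.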
Crash risk analysis for Shanghai urban expressways: A Bayesian semi-parametric modeling approach.
Yu, Rongjie; Wang, Xuesong; Yang, Kui; Abdel-Aty, Mohamed
2016-10-01
Urban expressway systems have developed rapidly in recent years in China; they have become a key part of city roadway networks, carrying large traffic volumes and providing high travel speeds. Along with the increase of traffic volume, traffic safety has become a major issue for Chinese urban expressways due to frequent crash occurrence and the non-recurrent congestion caused by it. For the purpose of unveiling crash occurrence mechanisms and further developing Active Traffic Management (ATM) control strategies to improve traffic safety, this study developed disaggregate crash risk analysis models with loop detector traffic data and historical crash data. Bayesian random effects logistic regression models were utilized as they can account for the unobserved heterogeneity among crashes. However, previous crash risk analysis studies formulated random effects distributions in a parametric approach, assigning them to follow normal distributions. Given the limited information known about random effects distributions, a subjective parametric setting may be incorrect. In order to construct more flexible and robust random effects to capture the unobserved heterogeneity, the Bayesian semi-parametric inference technique was introduced to crash risk analysis in this study. Models with both inference techniques were developed for total crashes; the semi-parametric models were shown to provide substantially better model goodness-of-fit, while the two models shared consistent coefficient estimations. Bayesian semi-parametric random effects logistic regression models were then developed for weekday peak hour crashes, weekday non-peak hour crashes, and weekend non-peak hour crashes to investigate different crash occurrence scenarios. Significant factors that affect crash risk have been revealed and crash mechanisms have been summarized. PMID:26847949
Parametric analysis of synthetic aperture radar data for the study of forest stand characteristics
NASA Technical Reports Server (NTRS)
Wu, Shih-Tseng
1988-01-01
A parametric analysis of a Gulf Coast forest stand was performed using multipolarization, multipath airborne SAR data, and forest plot properties. Allometric equations were used to compute the biomass and basal area for the test plots. A multiple regression analysis with stepwise selection of independent variables was performed. It is found that forest stand characteristics such as biomass, basal area, and average tree height are correlated with SAR data.
NASA Astrophysics Data System (ADS)
Hoa Nguyen, Dinh; Banjerdpongchai, David
2010-12-01
This article presents a novel robust iterative learning control (ILC) algorithm for linear systems in the presence of multiple time-invariant parametric uncertainties. The robust design problem is formulated as a min-max problem with a quadratic performance criterion subject to constraints on the iterative control input update. We then propose a new methodology to find a sub-optimal solution of the min-max problem. By finding an upper bound on the worst-case performance, the min-max problem is relaxed to a minimisation problem. Applying Lagrangian duality to this minimisation problem leads to a dual problem that can be reformulated as a convex optimisation problem over linear matrix inequalities (LMIs). An LMI-based ILC algorithm is then given, and convergence of the control input as well as of the system error is proved. Finally, we apply the proposed ILC to a generic example and to a distillation column. The numerical results reveal the effectiveness of the LMI-based algorithm.
NASA Astrophysics Data System (ADS)
Wei, Dang; Qing, Liao; Peng-Cheng, Mao; Hong-Bing, Fu; Yu-Xiang, Weng
2016-05-01
Femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) is a versatile technique with advantages of high sensitivity, broad detection bandwidth, and intrinsic spectrum correction function. These advantages should benefit the study of coherent emission, such as measurement of lasing dynamics. In this letter, the FNOPAS was used to trace the lasing process in Rhodamine 6G (R6G) solution and organic semiconductor nano-wires. High-quality transient emission spectra and lasing dynamic traces were acquired, which demonstrates the applicability of FNOPAS in the study of lasing dynamics. Our work extends the application scope of the FNOPAS technique. Project supported by the National Natural Science Foundation of China (Grant Nos. 20925313 and 21503066), the Innovation Program of Chinese Academy of Sciences (Grant No. KJCX2-YW-W25), the Postdoctoral Project of Hebei University, China, and the Project of Science and Technology Bureau of Baoding City, China (Grant No. 15ZG029).
Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory J.
2014-01-01
PURPOSE: The identification of tumor pathologic characteristics is an important part of breast cancer diagnosis, prognosis, and treatment planning, but currently requires biopsy as its standard. Here, we investigated a noninvasive quantitative ultrasound method for the characterization of breast tumors in terms of their histologic grade, which can be used with clinical diagnostic ultrasound data. METHODS: Tumors of 57 locally advanced breast cancer patients were analyzed as part of this study. Seven quantitative ultrasound parameters were determined from each tumor region from the radiofrequency data, including mid-band fit, spectral slope, 0-MHz intercept, scatterer spacing, attenuation coefficient estimate, average scatterer diameter, and average acoustic concentration. Parametric maps were generated corresponding to the region of interest, from which four textural features, including contrast, energy, homogeneity, and correlation, were determined as further tumor characterization parameters. Data were examined on the basis of tumor subtype according to histologic grade (grade I versus grades II to III). RESULTS: Linear discriminant analysis of the means of the parametric maps resulted in a classification accuracy of 79%. In comparison, a linear combination of the texture features of the parametric maps resulted in a classification accuracy of 82%. Finally, when both the means and textures of the parametric maps were combined, the best classification accuracy was obtained (86%). CONCLUSIONS: Textural characteristics of quantitative ultrasound spectral parametric maps provided discriminant information about different types of breast tumors. The use of texture features significantly improved the results of ultrasonic tumor characterization compared to conventional mean values. Thus, this study suggests that texture-based quantitative ultrasound analysis of in vivo breast tumors can provide complementary diagnostic information about tumor histologic characteristics.
Multilevel Latent Class Analysis: Parametric and Nonparametric Models
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2014-01-01
Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…
Parametric analysis of lunar resources for space energy systems
NASA Astrophysics Data System (ADS)
Woodcock, Gordon R.
The possible use of lunar resources in the construction of solar power satellites (SPS) to provide energy for use on earth is discussed. The space transportation and operational aspects of the SPS program are compared to other energy concepts. Cost/benefit analyses are used to study the advantages of using lunar oxygen for the SPS program and of producing helium-3 on the moon. Options for lunar surface power are considered, and the economic benefits of using lunar resources are examined.
Bifurcation analysis of parametrically excited bipolar disorder model
NASA Astrophysics Data System (ADS)
Nana, Laurent
2009-02-01
Bipolar II disorder is characterized by alternating hypomanic and major depressive episodes. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administered to the patient are modeled via a forcing function that is capable of stabilizing the mood variations and of varying their amplitude. Using a perturbation method, we analyze analytically the amplitude and stability of limit cycles and check this analysis with numerical simulations.
Parametric analysis of a shape memory alloy actuated arm
NASA Astrophysics Data System (ADS)
Wright, Cody; Bilgen, Onur
2016-04-01
Using a pair of antagonistic Shape Memory Alloy (SMA) wires, it may be possible to produce a mechanism that replicates human musculoskeletal movement. The movement of interest is the articulation of the elbow joint actuated by the biceps brachii muscle. In an effort to understand the bio-mechanics of the arm, a single degree of freedom crank-slider mechanism is used to model the movement of the arm induced by the biceps brachii muscle. First, a purely kinematic analysis is performed on a rigid body crank-slider. Force analysis is also performed, modeling the muscle as a simple linear spring. Torque, rocking angle, and energy are calculated for a range of crank-slider geometries. The SMA wire characteristics are experimentally determined for the detwinned martensite and full austenite phases. Using the experimental data, an idealized actuator characteristic curve is produced for the SMA wire. Kinematic and force analyses are performed on the nonlinear wire characteristic curve and on a linearized wire curve; both cases are applied to the crank-slider mechanism. Performance metrics for both cases are compared, followed by discussion.
Femtosecond optical parametric oscillators toward real-time dual-comb spectroscopy
NASA Astrophysics Data System (ADS)
Jin, Yuwei; Cristescu, Simona M.; Harren, Frans J. M.; Mandon, Julien
2015-04-01
We demonstrate mid-infrared dual-comb spectroscopy with an optical parametric oscillator (OPO) toward real-time field measurement. A singly resonant OPO based on a MgO-doped periodically poled lithium niobate (PPLN) crystal is demonstrated. Chirped mirrors are used to compensate the dispersion caused by the optical cavity and the crystal. A low threshold of 17 mW has been achieved. The OPO source generates a tunable idler frequency comb between 2.7 and 4.7 μm. Dual-comb spectroscopy is achieved by coupling two identical Yb-fiber mode-locked lasers to this OPO with slightly different repetition frequencies. A measured absorption spectrum of methane is presented with a spectral bandwidth of , giving an instrumental resolution of . In addition, a second OPO containing two MgO-doped PPLN crystals in a singly resonant ring cavity is demonstrated. As such, this OPO generates two idler combs (average power up to 220 mW), covering a wavelength range between 2.7 and 4.2 μm, from which a mid-infrared dual-comb Fourier transform spectrometer is constructed. By detecting the heterodyned signal between the two idler combs, broadband spectra of molecular gases can be observed over a spectral bandwidth of more than . This special cavity design allows the spectral resolution to be improved to without locking the OPO cavity, indicating that this OPO represents an ideal high-power broadband mid-infrared source for real-time gas sensing.
Aerodynamic canard/wing parametric analysis for general aviation applications
NASA Technical Reports Server (NTRS)
Keith, M. W.; Selberg, B. P.
1984-01-01
Vortex panel and vortex lattice methods have been utilized in an analytic study to determine the two- and three-dimensional aerodynamic behavior of canard and wing configurations. The purpose was to generate data useful for the design of general aviation canard aircraft. Essentially no two-dimensional coupling was encountered and the vertical distance between the lifting surfaces was found to be the main contributor to interference effects of the three-dimensional analysis. All canard configurations were less efficient than a forward wing with an aft horizontal tail, but were less sensitive to off-optimum division of total lift between the two surfaces, such that trim drag could be less for canard configurations. For designing a general aviation canard aircraft, results point toward large horizontal and vertical distance between the canard and wing, a large wing-to-canard area ratio, and with the canard at a low incidence angle relative to the wing.
Parametric analysis of thermal stratification during the Monju turbine trip test
Sofu, T.
2012-07-01
CFD-based simulation techniques are evaluated using a simplified symmetric Monju model to study multi-dimensional mixing and heat transfer in the upper plenum during a turbine trip test. When the test starts and core outlet temperatures drop due to reactor shutdown, the cooler sodium is trapped near the bottom of the vessel, and the hotter (less dense) primary sodium at higher elevations stays largely stagnant for an extended period of time, inhibiting natural circulation. However, the secondary flow through a set of holes on the inner barrel bypasses the thermally stratified region as a shorter path to the intermediate heat exchanger and improves the natural circulation flow rate to cool the core. The calculations with strict adherence to the benchmark specifications predict a much shorter duration of thermal stratification in the upper plenum than the experimental data indicate. In this paper, the results of a parametric analysis are presented to address this discrepancy. Specifically, the role of the holes on the inner barrel is reassessed in terms of their ability to provide a larger by-pass flow. Assuming inner barrel holes with rounded edges produces results more consistent with the experiments. (authors)
Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics
Tsourtis, Anastasios; Pantazis, Yannis; Katsoulakis, Markos A.; Harmandaris, Vagelis
2015-07-07
In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous time and continuous space Markov processes represented by stochastic differential equations. Particularly, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method to two different molecular stochastic systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with parameter sensitivities on three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
Thermal hydraulic limits analysis using statistical propagation of parametric uncertainties
Chiang, K. Y.; Hu, L. W.; Forget, B.
2012-07-01
The MIT Research Reactor (MITR) is evaluating conversion from highly enriched uranium (HEU) to low enrichment uranium (LEU) fuel. In addition to the fuel element re-design, a reactor power upgrade from 6 MW to 7 MW is proposed in order to maintain the same reactor performance as the HEU core. The previous approach to analyzing the impact of engineering uncertainties on thermal hydraulic limits, via engineering hot channel factors (EHCFs), was unable to explicitly quantify the uncertainty and confidence level in reactor parameters. The objective of this study is to develop a methodology for MITR thermal hydraulic limits analysis by statistically combining engineering uncertainties, with the aim of eliminating unnecessary conservatism inherent in traditional analyses. This method was employed to analyze the Limiting Safety System Settings (LSSS) for the MITR, defined by avoidance of the onset of nucleate boiling (ONB). Key parameters, such as coolant channel tolerances and heat transfer coefficients, were treated as normal distributions, using Oracle Crystal Ball to calculate ONB. The LSSS power is determined at the 99.7% confidence level. The LSSS power calculated using this new methodology is 9.1 MW, based on a core outlet coolant temperature of 60 deg. C and a primary coolant flow rate of 1800 gpm, compared to 8.3 MW obtained from the analytical method using the EHCFs under the same operating conditions. The same methodology was also used to calculate the safety limit (SL) for the MITR, conservatively determined using onset of flow instability (OFI) as the criterion, to verify that an adequate safety margin exists between the LSSS and SL. The calculated SL is 10.6 MW, which is 1.5 MW higher than the LSSS. (authors)
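The statistical propagation step can be sketched generically. The paper used Oracle Crystal Ball; the input distributions and the monotone ONB model below are purely illustrative assumptions, not MITR values:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Hypothetical uncertain inputs (means and sigmas are illustrative only):
gap = rng.normal(2.0, 0.05, N)    # coolant channel gap, mm
htc = rng.normal(30.0, 1.5, N)    # heat transfer coefficient, kW/m^2-K

# Hypothetical monotone model for the ONB-limited power (MW): wider channels
# and better heat transfer both raise the power at which ONB would occur.
p_onb = 8.0 * (gap / 2.0) * (htc / 30.0) ** 0.5

# LSSS power at 99.7% confidence: the 0.3rd percentile of the distribution,
# i.e. the power below which ONB is avoided in 99.7% of sampled cases.
p_lsss = float(np.quantile(p_onb, 0.003))
```

The statistical limit sits below the mean of the sampled distribution but, unlike stacked hot channel factors, carries an explicit confidence level.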
Fuel processors for automotive fuel cell systems : a parametric analysis.
Doss, E. D.; Kumar, R.; Ahluwalia, R. K.; Krumpelt, M.
2001-12-15
An autothermally-reformed, gasoline-fueled automotive polymer electrolyte fuel cell (PEFC) system has been modeled and analyzed for fuel processor and total system performance. The purpose of the study is to identify the influence of various operating parameters on system performance and to investigate related tradeoff scenarios. Results of steady-state analyses at the design rated power level are presented and discussed. The effects of the following parameters are included in the analysis: operating pressure (3 and 1 atm), reforming temperature (1000-1300 K), water-to-fuel and air-to-fuel reactant feed ratios, electrochemical fuel utilization, and thermal integration of the fuel processor and fuel cell stack subsystems. The analyses are also used to evaluate the impact of those parameters on the concentrations of methane and carbon monoxide in the processed reformate. Both of these gases can be reduced to low levels with adequate water-to-carbon ratios in the fuel processor. Since these two species represent corresponding amounts of hydrogen that would not be available for electrochemical oxidation in the fuel cell stack, it is important to maintain them at low levels. Subject to the assumptions used in the analyses, particularly that of thermodynamic equilibrium, it was determined that a reforming temperature of 1100 K, a water-to-carbon mole ratio of 1.5-2.5, and the use of fuel cell exhaust energy in the fuel processor subsystem can yield fuel processor efficiencies of 82-84% and total system efficiencies of 40-42%. For the atmospheric pressure system, if the exhaust energy is not used in the fuel processor subsystem, the fuel processor efficiency drops to 75-82% and the total system efficiency drops below 40%. At higher reforming temperatures, say 1300 K, the fuel processor efficiency decreases to 78%, and the total system efficiency drops below 39%, even with the use of the fuel cell stack exhaust energy.
Parametric analysis of the end face engagement worm gear
NASA Astrophysics Data System (ADS)
Deng, Xingqiao; Wang, Jueling; Wang, Jinge; Chen, Shouan; Yang, Jie
2015-11-01
A novel type of worm drive, the end face engagement worm gear (EFEWD), is presented to minimize or eliminate gear backlash. Different factors, including the three typical meshing types, contact curves, tooth profile, lubrication angle, and induced normal curvature, are taken into account to investigate the meshing characteristics and to create the profile of this worm drive through mathematical models and theoretical analysis. The tooth of the worm wheel is distinctive: a sine-shaped tooth located at the alveolus of the worm. The tooth profile of the worm is generated by the meshing movement of the worm wheel with the sine-shaped tooth, but only the end face of the worm (with three typical meshing types) participates in meshing; therefore an unconventional manufacturing method is used to generate the profile of the end face engagement worm. The results indicate that the bearing contacts of the generated conjugate hourglass worm gear set are line contacts, with the advantages of no backlash, high precision, and high operating efficiency over other gears and gear systems; in addition, the end face engagement worm gear drive may improve bearing contact, reduce transmission errors, and lessen sensitivity to alignment errors. The end face engagement worm can also be easily made, with superior meshing and lubrication performance compared with conventional techniques. In particular, the meshing and lubrication performance of the end face engagement worm gear can be increased by over 10% and 7%, respectively, by using the end face for meshing. This investigation is expected to provide new insight into the design of future no-backlash worm drives for industry.
'nparACT' package for R: A free software tool for the non-parametric analysis of actigraphy data.
Blume, Christine; Santhi, Nayantara; Schabus, Manuel
2016-01-01
For many studies, participants' sleep-wake patterns are monitored and recorded prior to, during and following an experimental or clinical intervention using actigraphy, i.e. the recording of data generated by movements. Often, these data are merely inspected visually, without computation of descriptive parameters, in part due to the lack of user-friendly software. To address this deficit, we developed a package for R [6] that allows computing several non-parametric measures from actigraphy data. Specifically, it computes the interdaily stability (IS), intradaily variability (IV) and relative amplitude (RA) of activity, and gives the start times and average activity values of M10 (i.e. the ten hours with maximal activity) and L5 (i.e. the five hours with least activity). Two functions compute these 'classical' parameters and handle either single or multiple files. Two further functions additionally allow computing an L-value (i.e. the least activity value) for a user-defined time span, termed the 'Lflex' value. A plotting option is included in all functions. The package can be downloaded from the Comprehensive R Archive Network (CRAN). •The package 'nparACT' for R serves the non-parametric analysis of actigraphy data.•Computed parameters include interdaily stability (IS), intradaily variability (IV) and relative amplitude (RA), as well as start times and average activity during the 10 h with maximal and the 5 h with minimal activity (i.e. M10 and L5).
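The three 'classical' measures have standard closed-form definitions, which can be sketched directly. This is an independent illustration of those formulas, assuming an evenly sampled series covering whole days; it is not the nparACT API itself:

```python
import numpy as np

def nonparametric_measures(activity, samples_per_hour=60):
    """IS, IV and RA from an evenly sampled activity series.

    IS compares the variance of the average 24 h profile to the total
    hourly variance; IV measures hour-to-hour fragmentation; RA contrasts
    the 10 most active (M10) with the 5 least active (L5) hours of the
    average day. Assumes the series spans a whole number of days.
    """
    x = np.asarray(activity, dtype=float)
    n_hours = len(x) // samples_per_hour
    hourly = x[:n_hours * samples_per_hour] \
        .reshape(n_hours, samples_per_hour).mean(axis=1)
    profile = np.array([hourly[h::24].mean() for h in range(24)])
    mean = hourly.mean()
    IS = (len(hourly) * np.sum((profile - mean) ** 2)) / \
         (24 * np.sum((hourly - mean) ** 2))
    IV = (len(hourly) * np.sum(np.diff(hourly) ** 2)) / \
         ((len(hourly) - 1) * np.sum((hourly - mean) ** 2))
    day = np.concatenate([profile, profile])  # wrap around midnight
    m10 = max(day[i:i + 10].mean() for i in range(24))
    l5 = min(day[i:i + 5].mean() for i in range(24))
    RA = (m10 - l5) / (m10 + l5)
    return IS, IV, RA

# Three synthetic days of hourly counts: 8 h rest, 16 h activity per day
activity = np.tile(np.concatenate([np.zeros(8), 100.0 * np.ones(16)]), 3)
IS, IV, RA = nonparametric_measures(activity, samples_per_hour=1)
```

A perfectly day-periodic signal like the synthetic one yields IS = 1 and RA = 1, which is a convenient check on the implementation.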
NASA Astrophysics Data System (ADS)
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with sizes larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than the design level for flood control. Traditional parametric frequency analysis of extreme values includes the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years; overestimation examples are also demonstrated. Bootstrap resampling can provide bias correction for the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM thus avoids various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as new parameter values combined with the NPM. An idea for how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider worst-case disasters in their disaster management planning and practices.
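The core of the NPM, an empirical quantile estimate with a bootstrap standard error, can be sketched as follows. The data are synthetic, and the simple plotting-position estimator is an assumption for illustration, not necessarily the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def t_year_quantile(sample, t_years):
    # Non-parametric T-year event estimate: the (1 - 1/T) empirical
    # quantile of the annual-maximum sample (no distribution is fitted,
    # skipping Steps 2, 3, 4 and 6 of the parametric procedure).
    return float(np.quantile(sample, 1.0 - 1.0 / t_years))

def bootstrap_se(sample, t_years, n_boot=2000):
    # Bootstrap resampling gives the estimation accuracy as a standard
    # error; the same resamples could also be used for bias correction.
    ests = [t_year_quantile(rng.choice(sample, size=len(sample)), t_years)
            for _ in range(n_boot)]
    return float(np.std(ests, ddof=1))

# Synthetic 120-year annual-maximum rainfall record (Gumbel-like), in mm
sample = rng.gumbel(loc=100.0, scale=30.0, size=120)
q100 = t_year_quantile(sample, 100)   # 100-year quantile estimate
se = bootstrap_se(sample, 100)        # bootstrap standard error
```

An upper bound such as a PMP value could be imposed by truncating each bootstrap resample at that bound before taking the quantile.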
An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1996-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.
Performance evaluation and parametric analysis on cantilevered ramp injector in supersonic flows
NASA Astrophysics Data System (ADS)
Huang, Wei; Li, Shi-bin; Yan, Li; Wang, Zhen-guo
2013-03-01
The cantilevered ramp injector is one of the most promising candidates for enhancing mixing between the fuel and the supersonic air stream, and its parametric analysis has drawn increasing attention from researchers. The flow field characteristics and the drag force of the cantilevered ramp injector in a supersonic flow with freestream Mach number 2.0 have been investigated numerically, and the predicted injectant mole fraction and static pressure profiles have been compared with the available experimental data in the open literature. A grid independency analysis has been performed using coarse, moderate and refined grid scales, and the influence of the turbulence model on the flow field of the cantilevered ramp injector has been examined as well. Further, the effects of the swept angle, the ramp angle and the length of the step on the performance of the cantilevered ramp injector are discussed. The obtained results show that the grid scale has only a slight impact on the flow field of the cantilevered ramp injector except in the region near the fuel injector, and the predicted results show reasonable agreement with the experimental data. Additionally, the turbulence model makes little difference to the numerical results; the results obtained with the RNG k-ɛ and SST k-ω turbulence models are almost the same. The swept angle and the ramp angle have the same qualitative impact on the performance of the cantilevered ramp injector: the kidney-shaped plume forms over a shorter distance as the swept and ramp angles increase. At the same time, the shape of the injectant mole fraction contour at X/H=6 goes through a transition from a peach-shaped plume to a kidney-shaped plume, and the cantilevered ramp injector with larger swept and ramp angles has higher mixing efficiency and larger drag force. The length of the step has only a slight impact on the drag force performance of the cantilevered ramp injector.
A parametric family of correlation structures for the analysis of longitudinal data.
Muñoz, A; Carey, V; Schouten, J P; Segal, M; Rosner, B
1992-09-01
In epidemiological settings, we are often faced with numerous short time series, and a parsimonious parametrization of the correlation structure is desired in order to optimize the efficiency of the estimation procedure. We propose a damped exponential correlation structure for modeling multivariate Gaussian outcomes. The correlation between two observations separated by s units of time is modeled as gamma^(s^theta), where gamma is the correlation between elements separated by one unit of time and theta is a damping parameter. For theta = 0, theta = 1, and theta -> infinity, the correlation structures of compound symmetry, first-order autoregressive, and first-order moving average processes are obtained. Although the AR(2) dependency structure and the combination of random effects with AR(1) errors are not special cases of the proposed parametric family, these structures can be well approximated within the family for short time series. Maximum likelihood methods for parameter estimation and interpretations of intermediate models (0 < theta < 1) are discussed in the context of modeling pulmonary function in an adult population in The Netherlands and T-cell subsets in homosexual men infected with human immunodeficiency virus type I.
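The family is straightforward to work with computationally; a minimal sketch constructs the correlation matrix for equally spaced observations and recovers the two named special cases:

```python
import numpy as np

def damped_exponential_corr(n, gamma, theta):
    """Correlation matrix for n equally spaced observations, with
    Corr(s) = gamma ** (s ** theta) for lag s: theta = 0 gives compound
    symmetry, theta = 1 gives first-order autoregressive (AR(1))."""
    s = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    return np.where(s == 0, 1.0, gamma ** (s ** theta))

R_ar1 = damped_exponential_corr(4, 0.5, 1.0)  # AR(1): 0.5 ** lag
R_cs = damped_exponential_corr(4, 0.5, 0.0)   # compound symmetry: 0.5 off-diagonal
```

Intermediate values 0 < theta < 1 interpolate between these two, which is what makes the single damping parameter attractive for short series.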
Parametric performance analysis of OTEC system using HFC32/HFC134a mixtures
Uehara, Haruo; Ikegami, Yasuyuki
1995-11-01
Parametric performance analysis is performed on an Ocean Thermal Energy Conversion (OTEC) system using HFC32/HFC134a mixtures as the working fluid. The analyzed OTEC system uses the Kalina cycle. The parameters in the performance analysis are the warm sea water inlet temperature, the cold sea water inlet temperature, the heat transfer performance of the evaporator, condenser and regenerator, the turbine inlet pressure, the turbine inlet temperature, and the molar fraction of HFC32. The effects of these parameters on the efficiency of the Kalina cycle using HFC32/HFC134a mixtures are clarified by this analysis and compared with calculation results using ammonia/water mixtures as the working fluid. The thermal efficiency of an OTEC system using the Kalina cycle can reach about 5 percent with an inlet warm sea water temperature of 28 C and an inlet cold sea water temperature of 4 C.
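As a sanity check on the quoted ~5 percent figure, the Carnot limit for the stated sea water temperatures bounds any cycle operating between them:

```python
# Carnot efficiency bound for the stated warm/cold sea water temperatures
T_warm = 28.0 + 273.15   # K
T_cold = 4.0 + 273.15    # K
eta_carnot = 1.0 - T_cold / T_warm   # roughly 0.08 (8 percent)
```

The reported ~5 percent Kalina cycle efficiency is thus a substantial fraction of the thermodynamic maximum for this small temperature difference.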
A Semi-parametric Bayesian Approach for Differential Expression Analysis of RNA-seq Data
Liu, Fangfang; Wang, Chong
2016-01-01
RNA-sequencing (RNA-seq) technologies have revolutionized the way agricultural biologists study gene expression and have generated a tremendous amount of data awaiting analysis. Detecting differentially expressed genes is one of the fundamental steps in RNA-seq data analysis. In this paper, we model the count data from RNA-seq experiments with a Poisson-Gamma hierarchical model, or equivalently, a negative binomial (NB) model. We derive a semi-parametric Bayesian approach with a Dirichlet process as the prior model for the distribution of fold changes between the two treatment means. An inference strategy using a Gibbs sampling algorithm is developed for differential expression analysis. The results of several simulation studies show that our proposed method outperforms other methods, including the popularly applied edgeR and DESeq methods. We also discuss an application of our method to a dataset that compares gene expression between bundle sheath and mesophyll cells in maize leaves. PMID:27570441
NASA Astrophysics Data System (ADS)
Dodonov, V. V.; Valverde, C.; Souza, L. S.; Baseia, B.
2011-10-01
The exact Wigner function of a parametrically excited quantum oscillator in a phase-sensitive amplifying/attenuating reservoir is found for initial even/odd coherent states. Studying the evolution of negativity of the Wigner function we show the difference between the “initial positivization time” (IPT), which is inversely proportional to the square of the initial size of the superposition, and the “final positivization time” (FPT), which does not depend on this size. Both these times can be made arbitrarily long in maximally squeezed high-temperature reservoirs. Besides, we find the conditions when some (small) squeezing can exist even after the Wigner function becomes totally positive.
A bifurcation analysis of boiling water reactor on large domain of parametric spaces
NASA Astrophysics Data System (ADS)
Pandey, Vikas; Singh, Suneet
2016-09-01
Boiling water reactors (BWRs) are, like many physical systems, inherently nonlinear. The reactivity feedback, caused by both moderator density and temperature, produces several effects reflecting the nonlinear behavior of the system. Stability analysis of the BWR is done with a simplified, reduced order model that couples point reactor kinetics with the thermal hydraulics of the reactor core. Linear stability analysis of the BWR steady states shows that at a critical value of the bifurcation parameter (i.e. feedback gain), a Hopf bifurcation occurs. The stable and unstable domains of the parametric spaces cannot be fully predicted by linear stability analysis, because the stability of the system involves more than the stability of its steady states; the stability of other dynamics of the system, such as limit cycles, must also be included. Nonlinear stability analysis (i.e. bifurcation analysis) therefore becomes an indispensable component of the stability analysis. Hopf bifurcation, which occurs with one free parameter, is studied here; it describes the birth of limit cycles. The excitation of these limit cycles makes the system bistable in the case of subcritical bifurcation, whereas stable limit cycles continue into an unstable region for supercritical bifurcation. The distinction between subcritical and supercritical Hopf bifurcation is made by two-parameter analysis (i.e. codimension-2 bifurcation). In this scenario, a Generalized Hopf (GH) bifurcation takes place, which separates sub- and supercritical Hopf bifurcations. The various types of bifurcation, such as limit point bifurcation of limit cycles (LPC), period doubling bifurcation of limit cycles (PD) and Neimark-Sacker bifurcation of limit cycles (NS), have been identified with the Floquet multipliers. The LPC manifests itself as a region of bistability, whereas a chaotic region exists because of cascading of PD. These regions of bistability and chaotic solutions are drawn on the various parametric spaces.
Parametric analysis of a cylindrical negative Poisson’s ratio structure
NASA Astrophysics Data System (ADS)
Wang, Yuanlong; Wang, Liangmo; Ma, Zheng-dong; Wang, Tao
2016-03-01
Much research related to negative Poisson’s ratio (NPR), or auxetic, structures is emerging these days. Several types of 3D NPR structure have been proposed and studied, but almost all of them had cuboid shapes, which were not suitable for certain engineering applications. In this paper, a cylindrical NPR structure was developed and researched. It is expected to be utilized in springs, bumpers, dampers and other similar applications. For the purpose of parametric analysis, a method of parametric modeling of cylindrical NPR structures was developed using MATLAB scripts. The scripts can automatically establish finite element models, invoke ABAQUS, read results, etc. Subsequently, the influences of structural parameters, including the number of cells, the number of layers and the layer heights, on the uniaxial compression behavior of cylindrical NPR structures were researched. This led to the conclusion that the stiffness of the cylindrical NPR structure was enhanced by increasing the number of cells and reducing the effective layer height. Moreover, small numbers of layers resulted in a late transition of the load-displacement curve from low stiffness to high stiffness. In addition, the middle contraction regions were more apparent with larger numbers of cells, smaller numbers of layers and smaller effective layer heights. The results indicate that the structural parameters had significant effects on the load-displacement curves and deformed shapes of cylindrical NPR structures. This paper contributes to the further engineering application of cylindrical NPR structures.
NASA Technical Reports Server (NTRS)
Jeffries, K. S.; Renz, D. D.
1984-01-01
A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high power (100 kW or more) space missions. Large diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
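The paper derives formulas specific to closely spaced hollow conductors; as a hedged illustration of the kind of per-unit-length calculation involved, the ideal textbook expressions for a coaxial line can be sketched as follows (low-frequency, solid-dielectric approximation; the geometry chosen is illustrative, not the paper's cable):

```python
import math

MU0 = 4e-7 * math.pi         # vacuum permeability, H/m
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def coax_inductance_per_m(a, b):
    # External inductance per unit length of a coaxial line with inner
    # conductor radius a and shield inner radius b (textbook formula).
    return MU0 / (2.0 * math.pi) * math.log(b / a)

def coax_capacitance_per_m(a, b, eps_r=1.0):
    # Capacitance per unit length for a dielectric with relative
    # permittivity eps_r.
    return 2.0 * math.pi * EPS0 * eps_r / math.log(b / a)

# Illustrative geometry (an assumption, not the paper's exact cable):
# 2.5 mm inner radius, 3.0 mm shield radius, 50 m run.
length = 50.0
L_total = coax_inductance_per_m(2.5e-3, 3.0e-3) * length  # henries
C_total = coax_capacitance_per_m(2.5e-3, 3.0e-3) * length  # farads
```

Note the useful sanity check that for an air dielectric the product of the per-unit-length inductance and capacitance equals μ₀ε₀ regardless of geometry.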
A parametric analysis of performance characteristics of satellite-borne multiple-beam antennas
NASA Technical Reports Server (NTRS)
Salmasi, A. B.
1980-01-01
An analytical and empirical model is presented for parametric study of multiple beam antenna frequency reuse capacity and interbeam isolation. Two types of reflector antennas, the axisymmetric parabolic and the offset-parabolic reflectors, are utilized to demonstrate the model. The parameters of the model are introduced and their limitations are discussed in the context of parabolic reflector antennas. The model, however, is not restricted to analysis of reflector antenna performance. Results of the analyses are covered in two tables. The model parameters, objectives, and descriptions are given, multiple-beam antenna frequency reuse capacity and interbeam isolation analysis of the two types of reflectors are discussed as well as future developments of the program model.
Examining deterrence of adult sex crimes: A semi-parametric intervention time series approach
Park, Jin-Hong; Bandyopadhyay, Dipankar; Letourneau, Elizabeth
2013-01-01
Motivated by recent developments in dimension reduction (DR) techniques for time series data, we examined the association of a general deterrent effect with South Carolina (SC)’s sex offender registration and notification (SORN) policy for preventing sex crimes. Using adult sex crime arrestee data from 1990 to 2005, the idea of the Central Mean Subspace (CMS) is extended to intervention time series analysis (CMS-ITS) to model the sequential intervention effects of 1995 (the year SC’s SORN policy was initially implemented) and 1999 (the year the policy was revised to include online notification) on the time series spectrum. The CMS-ITS model estimation was achieved via kernel smoothing techniques and compared to interrupted auto-regressive integrated moving average (ARIMA) models. Simulation studies and application to the real data underscore our model’s ability to achieve parsimony and to detect intervention effects not previously determined via traditional ARIMA models. From a public health perspective, findings from this study draw attention to the potential general deterrent effects of SC’s SORN policy. These findings are considered in light of the overall body of research on sex crime arrestee registration and notification policies, which remain controversial. PMID:24795489
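The basic intervention-time-series idea can be sketched with a deliberately simplified ordinary-least-squares model with step ("level shift") regressors at the two intervention years, standing in for the interrupted ARIMA models the abstract compares against (no autocorrelation structure is modeled here, and the data are synthetic assumptions):

```python
import numpy as np

def interrupted_ts_fit(y, t, breaks):
    # OLS fit of intercept + linear trend + one step regressor per
    # intervention year. Returns [intercept, slope, step effects...].
    X = [np.ones_like(t, dtype=float), t.astype(float)]
    for b in breaks:
        X.append((t >= b).astype(float))
    X = np.column_stack(X)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic monthly series, 1990-2005, with level drops after the
# 1995 and 1999 interventions (effect sizes are invented for the demo).
rng = np.random.default_rng(0)
t = np.linspace(1990.0, 2006.0, 192, endpoint=False)
y = (100.0 + 0.5 * (t - 1990.0)
     - 8.0 * (t >= 1995) - 5.0 * (t >= 1999)
     + rng.normal(0.0, 1.0, t.size))
beta = interrupted_ts_fit(y, t, [1995, 1999])
```

The fitted step coefficients recover the simulated level shifts; a real analysis would also model the error autocorrelation, as ARIMA does.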
SFM-FDTD analysis of triangular-lattice AAA structure: Parametric study of the TEM mode
NASA Astrophysics Data System (ADS)
Hamidi, M.; Chemrouk, C.; Belkhir, A.; Kebci, Z.; Ndao, A.; Lamrous, O.; Baida, F. I.
2014-05-01
This theoretical work reports a parametric study of enhanced transmission through an annular aperture array (AAA) structure arranged in a triangular lattice. The effects of the incidence angle and of the inner and outer radii on the evolution of the transmission spectra are investigated. To this end, a 3D Finite-Difference Time-Domain code based on the Split Field Method (SFM) is used to calculate the spectral response of the structure for any angle of incidence. In order to work with an orthogonal unit cell, which has the advantage of reducing computation time and memory, special periodic boundary conditions are implemented. This study provides a new modeling of AAA structures useful for producing tunable ultra-compact devices.
Dong, Liang; Tang, Wen Cheng
2014-01-01
This paper presents a method to model and design servo controllers for flexible ball screw drives with dynamic variations. A mathematical model describing the structural flexibility of the ball screw drive, containing time-varying uncertainties and disturbances with unknown bounds, is proposed. A mode-compensating adaptive backstepping sliding mode controller is designed to suppress the vibration. The time-varying uncertainties and disturbances, represented as finite-term Fourier series, can be estimated by updating the Fourier coefficients through a function approximation technique. Adaptive laws are obtained from a Lyapunov approach to guarantee the convergence and stability of the closed-loop system. The simulation results indicate that the tracking accuracy is improved considerably with the proposed scheme when time-varying parametric uncertainties and disturbances exist.
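The core of the function approximation technique is representing an unknown periodic disturbance in a finite Fourier basis and estimating the coefficients. A minimal sketch (fit offline by least squares for clarity, whereas the paper updates the coefficients online through adaptive laws; the disturbance signal is an assumption):

```python
import numpy as np

def fourier_design(t, n_terms, period):
    # Regressor matrix [1, cos(k*w*t), sin(k*w*t), ...] used to
    # represent a periodic disturbance by a finite-term Fourier series.
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.column_stack(cols)

t = np.linspace(0.0, 2.0, 400)
# Hypothetical unknown disturbance lying inside the truncated basis.
d = 0.3 + 1.2 * np.sin(2 * np.pi * t) + 0.4 * np.cos(4 * np.pi * t)

X = fourier_design(t, 3, 1.0)
coeffs, *_ = np.linalg.lstsq(X, d, rcond=None)
d_hat = X @ coeffs  # reconstructed disturbance
```

When the true disturbance lies in the truncated basis, the reconstruction is exact up to numerical precision; otherwise the truncation error is what the sliding-mode term must dominate.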
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphics-processing-unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with a dual Xeon 6-Core E5690 CPU and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.
Parametric study using modal analysis of a bi-material plate with defects
NASA Astrophysics Data System (ADS)
Esola, S.; Bartoli, I.; Horner, S. E.; Zheng, J. Q.; Kontsos, A.
2015-03-01
Global vibrational method feasibility as a non-destructive inspection tool for multi-layered composites is evaluated using a simulated parametric study approach. A finite element model of a composite consisting of two, isotropic layers of dissimilar materials and a third, thin isotropic layer of adhesive is constructed as the representative test subject. Next, artificial damage is inserted according to systematic variations of the defect morphology parameters. A free-vibrational modal analysis simulation is executed for pristine and damaged plate conditions. Finally, resultant mode shapes and natural frequencies are extracted, compared and analyzed for trends. Though other defect types may be explored, the focus of this research is on interfacial delamination and its effects on the global, free-vibrational behavior of a composite plate. This study is part of a multi-year research effort conducted for the U.S. Army Program Executive Office - Soldier.
Ethanol production by enzymatic hydrolysis: parametric analysis of a base-case process
Isaacs, S.H.
1984-05-01
A base-case flowsheet for an enzymatic hydrolysis process is presented. Included is a parametric sensitivity analysis to identify key research issues and an assessment of this technology. The plant discussed is a large-scale facility, producing 50 million gallons of ethanol per year. The plant design is based on the process originally conceived by the US National Army Command and consists of these process steps: pretreatment; enzyme production; enzyme hydrolysis; fermentation; and distillation. The base-case design parameters are based on recent laboratory data from Lawrence Berkeley Laboratories and the University of California at Berkeley. The selling price of ethanol is used to compare variations in the base-case operating parameters, which include hydrolysis efficiencies, capital costs, enzyme production efficiencies, and enzyme recycle. 28 references, 38 figures, 8 tables.
Parametric analysis of the thermal effects on the divertor in tokamaks during plasma disruptions
Bruhn, M.L.
1988-04-01
Plasma disruptions are an ever present danger to the plasma-facing components in today's tokamak fusion reactors. This threat results from our lack of understanding and limited ability to control this complex phenomenon. In particular, severe energy deposition occurs on the divertor component of the double-null configured tokamak reactor during such disruptions. A hybrid computational model developed to estimate and graphically illustrate global thermal effects of disruptions on the divertor plates is described in detail. The quasi-two-dimensional computer code, TADDPAK (Thermal Analysis Divertor during Disruptions PAcKage), is used to conduct parametric analysis for the TIBER II Tokamak Engineering Test Reactor Design. The dependence of these thermal effects on divertor material choice, disruption pulse length, disruption pulse shape, and the characteristic thickness of the plasma scrape-off layer is investigated for this reactor design. Results and conclusions from this analysis are presented. Improvements to this model and issues that require further investigation are discussed. Cursory analysis for ITER (International Thermonuclear Experimental Reactor) is also presented in the appendix. 75 refs., 49 figs., 10 tabs.
NASA Astrophysics Data System (ADS)
Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Kwak, Byung-Joon
2013-07-01
This study aimed to quantitatively analyze data from diffusion tensor imaging (DTI) using statistical parametric mapping (SPM) in patients with brain disorders and to assess its potential utility for analyzing brain function. DTI was obtained by performing 3.0-T magnetic resonance imaging for patients with Alzheimer's disease (AD) and vascular dementia (VD), and the data were analyzed using Matlab-based SPM software. The two-sample t-test was used for error analysis of the location of the activated pixels. We compared regions of white matter where the fractional anisotropy (FA) values were low and the apparent diffusion coefficients (ADCs) were increased. In the AD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right sub-lobar insula, and right occipital lingual gyrus, whereas the ADCs were significantly increased in the right inferior frontal gyrus and right middle frontal gyrus. In the VD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right limbic cingulate gyrus, and right sub-lobar caudate tail, whereas the ADCs were significantly increased in the left lateral globus pallidus and left medial globus pallidus. In conclusion, by using DTI and SPM analysis, we were able not only to determine the structural state of the regions affected by brain disorders but also to quantitatively analyze and assess brain function.
Non-parametric seismic hazard analysis in the presence of incomplete data
NASA Astrophysics Data System (ADS)
Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush
2016-07-01
The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distributions, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study introduces an imputation procedure for completing earthquake catalog data so that the catalog can be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate the earthquake magnitude distribution in Tehran, the capital city of Iran.
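The impute-then-estimate workflow can be sketched on synthetic data. This is a toy version under stated assumptions: a Gutenberg-Richter-like exponential magnitude law, a known completeness magnitude below which events are lost, and imputation by sampling a truncated exponential fitted to the observed tail; the paper's actual imputation procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "true" catalog: exponential magnitudes above M=4.0.
true_mags = 4.0 + rng.exponential(scale=0.5, size=2000)

# Incomplete catalog: events below the completeness magnitude are lost.
m_c = 4.5
observed = true_mags[true_mags >= m_c]
n_missing = true_mags.size - observed.size

# Toy imputation: fit an exponential rate to the observed tail, then
# draw the missing small events from the truncated law on [4.0, m_c].
beta_hat = 1.0 / np.mean(observed - m_c)
u = rng.random(n_missing)
scale = 1.0 - np.exp(-beta_hat * (m_c - 4.0))
imputed = 4.0 - np.log(1.0 - u * scale) / beta_hat

completed = np.concatenate([observed, imputed])

# Non-parametric density estimate on the completed catalog.
def gaussian_kde(x, grid, h):
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

grid = np.linspace(4.0, 7.0, 200)
density = gaussian_kde(completed, grid, h=0.1)
```

By the memoryless property of the exponential, the tail fit recovers the true rate, so the completed catalog supports an unbiased kernel density estimate over the full magnitude range.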
A Parametric Cycle Analysis of a Separate-Flow Turbofan with Interstage Turbine Burner
NASA Technical Reports Server (NTRS)
Marek, C. J. (Technical Monitor); Liew, K. H.; Urip, E.; Yang, S. L.
2005-01-01
Today's aircraft are based on air-breathing jet propulsion systems, which use moving fluids to transform the energy they carry into power. Throughout aero-vehicle evolution, improvements have been made to engine efficiency and pollutant reduction. This study focuses on a parametric cycle analysis of a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). The ITB considered in this paper is a relatively new concept in modern jet engine propulsion. The ITB serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., in the transition duct. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, linear relation between high- and low-pressure turbines, and high-pressure turbine inlet temperature, to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB.
Doherty, Joshua R.; Dumont, Douglas M.; Trahey, Gregg E.; Palmeri, Mark L.
2012-01-01
Plaque rupture is the most common cause of complications such as stroke and coronary heart failure. Recent histopathological evidence suggests that several plaque features, including a large lipid core and a thin fibrous cap, are associated with plaques most at risk for rupture. Acoustic Radiation Force Impulse (ARFI) imaging, a recently developed ultrasound-based elasticity imaging technique, shows promise for imaging these features noninvasively. Clinically, this could be used to distinguish vulnerable plaques, for which surgical intervention may be required, from those less prone to rupture. In this study, a parametric analysis using Finite-Element Method (FEM) models was performed to simulate ARFI imaging of five different carotid artery plaques across a wide range of material properties. It was demonstrated that ARFI could resolve the softer lipid pool from the surrounding, stiffer media and fibrous cap and was most dependent upon the stiffness of the lipid pool component. Stress concentrations due to an ARFI excitation were located in the media and fibrous cap components. In all cases, the maximum Von Mises stress was < 1.2 kPa. In comparing these results with others investigating plaque rupture, it is concluded that while the mechanisms may be different, the Von Mises stresses imposed by ARFI are orders of magnitude lower than the stresses associated with blood pressure. PMID:23122224
NASA Astrophysics Data System (ADS)
Moradian, Zabihallah; Einstein, Herbert H.; Ballivy, Gerard
2016-03-01
Determination of the cracking levels during crack propagation is one of the key challenges in the fracture mechanics of rocks. Acoustic emission (AE) is a technique that has been used to detect cracks as they occur across the specimen. Parametric analysis of AE signals and correlation of these parameters (e.g., hits and energy) with the stress-strain plots of rocks allows the cracking levels to be detected properly. The number of AE hits is related to the number of cracks, and the AE energy is related to the magnitude of the cracking event. For a full understanding of the fracture process in brittle rocks, prismatic specimens of granite containing pre-existing flaws were tested in uniaxial compression, and their cracking process was monitored with both AE and high-speed video imaging. In this paper, the characteristics of the AE parameters and the evolution of cracking sequences are analyzed for every cracking level. Based on micro- and macro-crack damage, a classification of cracking levels is introduced. This classification contains eight stages: (1) crack closure, (2) linear elastic deformation, (3) micro-crack initiation (white patch initiation), (4) micro-crack growth (stable crack growth), (5) micro-crack coalescence (macro-crack initiation), (6) macro-crack growth (unstable crack growth), (7) macro-crack coalescence and (8) failure.
Parametric and nonparametric analysis of life courses: an application to family formation patterns.
Bonetti, Marco; Piccarreta, Raffaella; Salford, Gaia
2013-06-01
We discuss a unified approach to the description and explanation of life course patterns represented as sequences of states observed in discrete time. In particular, we study life course data collected as part of the Dutch Fertility and Family Surveys (FFS) to learn about the family formation behavior of 1,897 women born between 1953 and 1962. Retrospective monthly data were available on each 18- to 30-year-old woman living either with or without children as single, married, or cohabiting. We first study via a nonparametric approach which factors explain the pairwise dissimilarities observed between life courses. Permutation distribution inference allows for the study of the statistical significance of the effect of a set of covariates of interest. We then develop a parametric model for the sequence-generating process that can be used to describe state transitions and durations conditional on covariates and conditional on having observed an initial segment of the trajectory. Fitting of the proposed model and the corresponding model selection process are based on the observed data likelihood. We discuss the application of the methods to the FFS.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
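The add-one-input-at-a-time protocol of this parametric study can be sketched with a linear model standing in for the neural network, on synthetic data (the feature names, effect sizes and the linear stand-in are all assumptions made for the demo, not the paper's data or model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic hourly records: load depends on temperature and on whether
# the day is a weekend (a crude proxy for the day-of-week input).
temp = rng.normal(25.0, 5.0, n)
dow = rng.integers(0, 7, n)
load = 300.0 + 4.0 * temp - 20.0 * (dow >= 5) + rng.normal(0.0, 5.0, n)

def forecast_rmse(X, y, n_train=400):
    # Fit a linear model on the first n_train samples and report the
    # RMSE on the held-out remainder -- the same candidate-input
    # evaluation loop the study runs with a neural network.
    Xb = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xb[:n_train], y[:n_train], rcond=None)
    resid = Xb[n_train:] @ beta - y[n_train:]
    return float(np.sqrt(np.mean(resid**2)))

rmse_base = forecast_rmse(temp[:, None], load)
rmse_dow = forecast_rmse(
    np.column_stack([temp, (dow >= 5).astype(float)]), load)
```

Adding the day-of-week input lowers the held-out error, mirroring the improvement the study reports when that parameter is added to the basic set.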
NASA Astrophysics Data System (ADS)
Ozden, Ender; Tari, Ilker
2016-02-01
A Polymer Electrolyte Membrane (PEM) fuel cell is numerically investigated both as fresh and as degraded with the help of observed degradation patterns reported in the literature. The fresh fuel cell model is validated and verified with data from the literature. Modifying the model by varying the parameters affected by degradation, a degraded PEM fuel cell model is created. The degraded fuel cell is parametrically analyzed using commercial Computational Fluid Dynamics (CFD) software. The investigated parameters are the membrane equivalent weight, the Catalyst Layer (CL) porosity and viscous resistance, the Gas Diffusion Layer (GDL) porosity and viscous resistance, and the bipolar plate contact resistance. It is shown for the first time that overall PEM fuel cell degradation can be numerically estimated by combining experimental data from degraded individual components. By comparing the simulation results for the fresh and the degraded PEM fuel cells over two years of operation, it is concluded that the effects of overall degradation on cell potential are significant - estimated to be 17% around the operating point of the fuel cell at 0.95 V open circuit voltage and 70 °C operating temperature.
Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio
2015-01-13
The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise stems from the interaction between the electron beam and the frozen hydrated biological samples when the specimen is exposed to radiation with a high exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure time, which in turn gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times of 0.5 s, 0.2 s, 0.1 s and 1 s (i.e., with different values of SNR), containing gold beads to assist in the assessment step. We propose a structure to combine multiple noisy copies of the TEM images. The structure is based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameters for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family was used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filter, many tests were performed in order to determine the proper filter parameters, namely the size of the filter, the range parameter and the
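Two of the four methods, soft and hard wavelet thresholding, reduce to simple shrinkage operators applied to transform coefficients; the combination of multiple noisy copies can be as simple as averaging, which raises the SNR by roughly √N. A minimal sketch of both ideas (the signal and noise level are invented for the demo, and the wavelet transform itself is omitted):

```python
import numpy as np

def soft_threshold(w, t):
    # Soft shrinkage: move every coefficient toward zero by t,
    # zeroing those with magnitude below t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    # Hard shrinkage: zero out coefficients with magnitude <= t,
    # keep the rest unchanged.
    return np.where(np.abs(w) > t, w, 0.0)

# Combining multiple noisy copies by averaging before (or instead of)
# transform-domain shrinkage: noise std drops by about sqrt(N).
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
copies = clean + rng.normal(0.0, 0.5, (8, 256))
averaged = copies.mean(axis=0)
```

In a full pipeline these operators would act on the detail coefficients of a discrete wavelet transform (the paper uses sym8 at level 3), not on the raw pixels.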
Adaptive neural control for a class of nonlinearly parametric time-delay systems.
Ho, Daniel W C; Li, Junmin; Niu, Yugang
2005-05-01
In this paper, an adaptive neural controller for a class of time-delay nonlinear systems with unknown nonlinearities is proposed. Based on a wavelet neural network (WNN) online approximation model, a state feedback adaptive controller is obtained by constructing a novel integral-type Lyapunov-Krasovskii functional, which also efficiently overcomes the controller singularity problem. It is shown that the proposed method guarantees the semiglobal boundedness of all signals in the adaptive closed-loop systems. An example is provided to illustrate the application of the approach.
Non-parametric causal assessment in deep-time geological records
NASA Astrophysics Data System (ADS)
Agasøster Haaga, Kristian; Diego, David; Brendryen, Jo; Hannisdal, Bjarte
2016-04-01
The interplay between climate variables and the timing of their feedback mechanisms is typically investigated using fully coupled climate system models. However, as we delve deeper into the geological past, mechanistic process models become increasingly uncertain, making non-parametric approaches more attractive. Here we explore the use of two conceptually different methods for non-parametric causal assessment in palaeoenvironmental archives of the deep past: convergent cross mapping (CCM) and information transfer (IT). These methods have the potential to capture interactions in complex systems even when data are sparse and noisy, which typically characterises geological proxy records. We apply these methods to proxy time series that capture interlinked components of the Earth system at different temporal scales, and quantify both the interaction strengths and the feedback lags between the variables. Our examples include the linkage between the ecological prominence of common planktonic species and oceanographic changes over the last ~65 million years, and global interactions and teleconnections within the climate system during the last ~800,000 years.
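The CCM idea can be illustrated on a canonical toy system rather than proxy data: delay-embed one variable, cross-map to the other with nearest-neighbour weights, and score the prediction. This is a simplified Sugihara-style sketch under stated assumptions (unidirectionally coupled logistic maps, fixed embedding dimension, no convergence test over library length):

```python
import numpy as np

def delay_embed(x, E, tau):
    # Takens delay-coordinate embedding: rows (x_t, x_{t-tau}, ...).
    n = len(x) - (E - 1) * tau
    return np.column_stack(
        [x[(E - 1) * tau - k * tau:(E - 1) * tau - k * tau + n]
         for k in range(E)])

def ccm_skill(y, x, E=2, tau=1):
    # Cross-map from the attractor reconstructed from y to estimate x.
    # High skill suggests x leaves a signature in y, i.e. x drives y.
    M = delay_embed(y, E, tau)
    target = x[(E - 1) * tau:]
    preds = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nbrs = np.argsort(d)[:E + 1]         # E+1 nearest neighbours
        w = np.exp(-d[nbrs] / max(d[nbrs][0], 1e-12))
        preds[i] = np.sum(w * target[nbrs]) / np.sum(w)
    return float(np.corrcoef(preds, target)[0, 1])

# Unidirectionally coupled logistic maps: x drives y, not vice versa.
n = 600
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.2 * x[t])

skill_x_from_y = ccm_skill(y, x)   # expected high: x drives y
skill_y_from_x = ccm_skill(x, y)   # expected low: y does not drive x
```

The asymmetry of the two skills, not their absolute values, is the causal signal; the IT method quantifies the same asymmetry information-theoretically.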
Time- and power-dependent operation of a parametric spin-wave amplifier
Brächer, T.; Heussner, F.; Pirro, P.; Fischer, T.; Geilen, M.; Heinz, B.; Lägel, B.; Serga, A. A.; Hillebrands, B.
2014-12-08
We present the experimental observation of the localized amplification of externally excited, propagating spin waves in a transversely in-plane magnetized Ni81Fe19 magnonic waveguide by means of parallel pumping. By employing microfocussed Brillouin light scattering spectroscopy, we analyze the dependence of the amplification on the applied pumping power and on the delay between the input spin-wave packet and the pumping pulse. We show that there are two different operation regimes: at large pumping powers, the spin-wave packet needs to enter the amplifier before the pumping is switched on in order to be amplified, while at low powers the spin-wave packet can arrive at any time during the pumping pulse.
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach - parametric response mapping (PRM) - utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: Significant correlations were determined between conventional and PCA-adjusted PRM and the 3He MRI apparent diffusion coefficient (p<0.001), CT RA950 (p<0.0001), and the 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
A parametric analysis of lithospheric imaging by Full-Waveform Inversion of teleseismic body-waves
NASA Astrophysics Data System (ADS)
Beller, Stephen; Monteiller, Vadim; Operto, Stéphane; Nolet, Guust; Virieux, Jean
2015-04-01
With the deployment of dense seismic arrays and the continuous growth of computing facilities, full-waveform inversion (FWI) of teleseismic data has become a method of choice for 3D high-resolution lithospheric imaging. FWI is a local optimization problem that seeks to estimate Earth's elastic properties by iteratively minimizing the misfit function between observed and modeled seismograms. Recent investigations have shown the feasibility of such local inversions by injecting a pre-computed global wavefield at the edges of the lithospheric target. In this study, we present all the methodological ingredients needed for the application of FWI to lithospheric data. The global wavefield, which is computed in an axisymmetric global earth with AxiSEM, is injected in the lithospheric target by the so-called total-field/scattered-field method. The inversion, which is implemented with an adjoint formalism, is performed following a multiscale approach, proceeding hierarchically from low to high frequencies. We further perform a parametric analysis in a realistic model representative of the Western Alps. This analysis mainly focuses on the FWI sensitivity to the source characteristics. One key issue is the estimation of the temporal source excitation, as there might be some trade-off between the source estimation and the subsurface update. We also investigate the imprint of the source distribution on the spatial resolution of the imaging, the FWI sensitivity to the accuracy of the starting model and the effects of considering a complex topography. Seismic modeling in the FWI models allows us to assess which parts of the teleseismic wavefield significantly contribute to the imaging.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
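The offline/online split described above can be made concrete with a toy affine problem A(μ) = A0 + μ·A1 (a generic sketch in Python/NumPy; the matrices, sample points, and output functional are illustrative stand-ins for the discretized PDE, not the authors' formulation, and no a posteriori error bound is computed here):

```python
import numpy as np

# Offline stage: full-order solves at a few sample parameters, an
# orthonormal reduced basis W_N built from the snapshots, and operators
# projected once (possible because A(mu) = A0 + mu*A1 is affine in mu).
def offline(A0, A1, f, mu_samples):
    snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in mu_samples]
    W, _ = np.linalg.qr(np.array(snapshots).T)
    return W, W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f

# Online stage: for a new mu, assemble and solve only an N x N system
# (N = number of snapshots), then evaluate the linear-functional output.
def online(W, A0r, A1r, fr, mu, out_vec):
    u_r = np.linalg.solve(A0r + mu * A1r, fr)
    return out_vec @ (W @ u_r)

n = 50
A0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # toy 1D stiffness
A1 = np.eye(n)                                         # affine term
f = np.ones(n)
W, A0r, A1r, fr = offline(A0, A1, f, mu_samples=[0.1, 1.0, 10.0])
s_rb = online(W, A0r, A1r, fr, 0.5, f)                 # reduced-basis output
s_full = f @ np.linalg.solve(A0 + 0.5 * A1, f)         # full-order truth
```

The online cost depends only on N, not on the full dimension n, which is the point of the decoupling.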
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
2009-08-13
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
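The SA component of such a parameter fit can be sketched generically (Python/NumPy; the step size, cooling schedule, and the quadratic toy error function are illustrative assumptions — CATIVIC itself is a semiempirical quantum-chemistry code not reproduced here):

```python
import numpy as np

def simulated_annealing(err, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=4000, seed=0):
    """Minimize a parameter-fitting error function by simulated annealing."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e = err(x)
    best_x, best_e, t = x.copy(), e, t0
    for _ in range(n_iter):
        cand = x + rng.normal(scale=step, size=x.shape)  # random move
        e_cand = err(cand)
        # Metropolis criterion: always accept improvements; accept uphill
        # moves with a probability that decays as the temperature falls.
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x.copy(), e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e
```

Usage on a two-parameter toy error surface: `simulated_annealing(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0])` should end near (3, -1).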
Quantum analysis of the nondegenerate optical parametric oscillator with injected signal
Coutinho dos Santos, B.; Dechoum, K.; Khoury, A.Z.; Silva, L.F. da; Olsen, M.K.
2005-09-15
In this paper we study the nondegenerate optical parametric oscillator with injected signal, both analytically and numerically. We develop a perturbation approach which allows us to find approximate analytical solutions, starting from the full equations of motion in the positive-P representation. We demonstrate the regimes of validity of our approximations via comparison with the full stochastic results. We find that, with reasonably low levels of injected signal, the system allows for demonstrations of quantum entanglement and the Einstein-Podolsky-Rosen paradox. In contrast to the normal optical parametric oscillator operating below threshold, these features are demonstrated with relatively intense fields.
NASA Astrophysics Data System (ADS)
McKenna, C.; Berx, B.; Austin, W. E. N.
2016-01-01
The Faroe-Shetland Channel (FSC) is an important conduit for the poleward flow of Atlantic water towards the Nordic Seas and, as such, it plays an integral part in the Atlantic's thermohaline circulation. Mixing processes in the FSC are thought to result in an exchange of properties between the channel's inflow and outflow, with wider implications for this circulation; the nature of this mixing in the FSC is, however, uncertain. To constrain this uncertainty, we used a novel empirical method known as Parametric Optimum Multi-Parameter (POMP) analysis to objectively quantify the distribution of water masses in the channel in May 2013. This was achieved by using a combination of temperature and salinity measurements, as well as recently available nutrient and δ18O measurements. The outcomes of POMP analysis are in good agreement with established literature and demonstrate the benefits of representing all five water masses in the FSC. In particular, our results show the recirculation of Modified North Atlantic Water in the surface layers, and the pathways of Norwegian Sea Arctic Intermediate Water and Norwegian Sea Deep Water from north to south for the first time. In a final step, we apply the mixing fractions from POMP analysis to decompose the volume transport through the FSC by water mass. Despite a number of caveats, our study suggests that improved estimates of the volume transport of Atlantic inflow towards the Arctic and, thus, the associated poleward fluxes of salt and heat are possible. A new prospect to more accurately monitor the strength of the FSC branch of the thermohaline circulation emerges from this study.
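The core mixing-fraction solve behind OMP-type analysis can be illustrated with a small weighted least-squares sketch (Python/NumPy; the endmember tracer values and weights are hypothetical, and POMP additionally optimizes the source-water parametrizations, which is not reproduced here):

```python
import numpy as np

def omp_fractions(tracer_matrix, obs, weights):
    """Solve for water-mass mixing fractions x that best reproduce the
    observed tracer values, with mass conservation (sum x = 1) appended
    as a heavily weighted extra equation."""
    A = np.vstack([tracer_matrix, np.ones(tracer_matrix.shape[1])])
    b = np.append(obs, 1.0)
    w = np.append(weights, 100.0)        # heavy weight enforces conservation
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x

# Hypothetical endmember values (temperature, salinity, nitrate) for
# three water masses; a 50/30/20 mixture should be recovered exactly.
E = np.array([[10.0,  4.0,  0.0],
              [35.3, 34.9, 34.6],
              [ 5.0, 12.0, 15.0]])
truth = np.array([0.5, 0.3, 0.2])
obs = E @ truth
frac = omp_fractions(E, obs, weights=np.ones(3))
```

Real applications add non-negativity constraints and more tracers than water masses; this sketch shows only the overdetermined weighted solve.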
[Non-Parametric Analysis of Radiation Risks of Mortality among Chernobyl Clean-Up Workers].
Gorsky, A I; Maksioutov, M A; Tumanov, K A; Shchukina, N V; Chekin, S Yu; Ivanov, V K
2016-01-01
Analysis of the relationship between dose and mortality from cancer and circulation diseases in the cohort of Chernobyl clean-up workers based on the data from the National Radiation and Epidemiological Registry was performed. Medical and dosimetry information on the clean-up workers, males, who received radiation doses from April 26, 1986 to April 26, 1987, which was accumulated from 1992 to 2012, was used for the analysis. The total size of the cohort was 42929 people; 12731 deaths were registered in the cohort, among them 1893 deaths from solid cancers and 5230 deaths from circulation diseases. The average age of the workers was 39 years in 1992 and the mean dose was 164 mGy. The dose-effect relationship was estimated with the use of non-parametric analysis of survival with regard to concurrence of risks of mortality. The risks were estimated in 6 dose groups of similar size (1-70, 70-130, 130-190, 190-210, 210-230 and 230-1000 mGy). The group "1-70 mGy" was used as control. The estimated dose-effect relationship for cancers and circulation diseases is described approximately by a linear model; the coefficient of determination (the proportion of variability explained by the linear model) was 23-25% for cancers and 2-13% for circulation diseases. The slope coefficient of the dose-effect relationship normalized to 1 Gy for the ratio of risks in the linear model was 0.47 (95% CI: -0.77, 1.71) for cancers and 0.22 (95% CI: -0.58, 1.02) for circulation diseases. The risk coefficient (slope coefficient of excess mortality at a dose of 1 Gy) was 1.94 (95% CI: -3.10, 7.00) x 10^-2 for solid cancers and 0.67 (95% CI: -9.61, 11.00) x 10^-2 for circulation diseases. 137 deaths from radiation-induced cancers and 47 deaths from circulation diseases were registered during the follow-up period. PMID:27534064
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Sava, H; Durand, L G; Cloutier, G
1999-05-01
To achieve an accurate estimation of the instantaneous turbulent velocity fluctuations downstream of prosthetic heart valves in vivo, the variability of the spectral method used to measure the mean frequency shift of the Doppler signal (i.e. the Doppler velocity) should be minimised. This paper investigates the performance of various short-time spectral parametric methods such as the short-time Fourier transform, autoregressive modelling based on two different approaches, autoregressive moving average modelling based on the Steiglitz-McBride method, and Prony's spectral method. A simulated Doppler signal was used to evaluate the performance of the above mentioned spectral methods and Gaussian noise was added to obtain a set of signals with various signal-to-noise ratios. Two different parameters were used to evaluate the performance of each method in terms of variability and accurate matching of the theoretical Doppler mean instantaneous frequency variation within the cardiac cycle. Results show that autoregressive modelling outperforms the other investigated spectral techniques for window lengths varying between 1 and 10 ms. Among the autoregressive algorithms implemented, it is shown that the maximum entropy method based on a block data processing technique gives the best results for a signal-to-noise ratio of 20 dB. However, at 10 and 0 dB, the Levinson-Durbin algorithm surpasses the performance of the maximum entropy method. It is expected that the intrinsic variance of the spectral methods can be an important source of error for the estimation of the turbulence intensity. The range of this error varies from 0.38% to 24% depending on the parameters of the spectral method and the signal-to-noise ratio. PMID:10505377
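The Levinson-Durbin recursion at the heart of the block-data autoregressive approach can be sketched as follows (Python/NumPy; a generic textbook implementation computing AR coefficients from an autocorrelation sequence, not the authors' Doppler-processing code — the mean Doppler frequency would then be read off the resulting AR spectrum):

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: AR coefficients from the autocorrelation
    sequence r[0..order]. Convention: x[t] = sum_k a[k]*x[t-k-1] + e[t].
    Returns (a, prediction_error_variance)."""
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        # Innovation of the next lag given the current predictor.
        acc = r[i + 1] - a[:i] @ r[i:0:-1]
        k = acc / err                      # reflection coefficient
        a[:i] = a[:i] - k * a[:i][::-1]    # update lower-order coefficients
        a[i] = k
        err *= (1.0 - k * k)               # shrink prediction error
    return a, err
```

For an AR(1) process with coefficient 0.5 and unit variance, the autocorrelation is r_k = 0.5^k, so a second-order fit should return coefficients (0.5, 0) and error variance 0.75.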
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…
Strauch, K; Fimmers, R; Kurz, T; Deichmann, K A; Wienker, T F; Baur, M P
2000-01-01
We present two extensions to linkage analysis for genetically complex traits. The first extension allows investigators to perform parametric (LOD-score) analysis of traits caused by imprinted genes-that is, of traits showing a parent-of-origin effect. By specification of two heterozygote penetrance parameters, paternal and maternal origin of the mutation can be treated differently in terms of probability of expression of the trait. Therefore, a single-disease-locus-imprinting model includes four penetrances instead of only three. In the second extension, parametric and nonparametric linkage analysis with two trait loci is formulated for a multimarker setting, optionally taking imprinting into account. We have implemented both methods into the program GENEHUNTER. The new tools, GENEHUNTER-IMPRINTING and GENEHUNTER-TWOLOCUS, were applied to human family data for sensitization to mite allergens. The data set comprises pedigrees from England, Germany, Italy, and Portugal. With single-disease-locus-imprinting MOD-score analysis, we find several regions that show at least suggestive evidence for linkage. Most prominently, a maximum LOD score of 4.76 is obtained near D8S511, for the English population, when a model that implies complete maternal imprinting is used. Parametric two-trait-locus analysis yields a maximum LOD score of 6.09 for the German population, occurring exactly at D4S430 and D18S452. The heterogeneity model specified for analysis alludes to complete maternal imprinting at both disease loci. Altogether, our results suggest that the two novel formulations of linkage analysis provide valuable tools for genetic mapping of multifactorial traits. PMID:10796874
Rasch analysis for the evaluation of rank of student response time in multiple choice examinations.
Thompson, James J; Yang, Tong; Chauvin, Sheila W
2013-01-01
The availability of computerized testing has broadened the scope of person assessment beyond the usual accuracy-ability domain to include response time analyses. Because there are contexts in which speed is important, e.g. medical practice, it is important to develop tools by which individuals can be evaluated for speed. In this paper, the ability of Rasch measurement to convert ordinal nonparametric rankings of speed to measures is examined and compared to similar measures derived from parametric analysis of response times (pace) and semi-parametric logarithmic time-scaling procedures. Assuming that similar spans of the measures were used, non-parametric methods of raw ranking or percentile-ranking of persons by questions gave statistically acceptable person estimates of speed virtually identical to the parametric or semi-parametric methods. Because no assumptions were made about the underlying time distributions with ranking, generality of conclusions was enhanced. The main drawbacks of the non-parametric ranking procedures were the lack of information on question duration and the overall assignment by the model of variance to the person by question interaction.
Aerodynamic parametric studies and sensitivity analysis for rotor blades in axial flight
NASA Technical Reports Server (NTRS)
Chiu, Y. D.; Peters, David A.
1991-01-01
The analytical capability is offered for aerodynamic parametric studies and sensitivity analyses of rotary wings in axial flight by using a 3D undistorted wake model in curved lifting-line theory. The governing equations are solved by both the Multhopp interpolation technique and the vortex lattice method. The singularity from the bound vortices is eliminated through Hadamard's finite-part concept. Good numerical agreement is found between both analytical methods and finite-difference methods. Parametric studies were made to assess the effects of several shape variables on aerodynamic loads. It is found, e.g., that a rotor blade with out-of-plane and in-plane curvature can theoretically increase lift in the inboard and outboard regions, respectively, without introducing additional induced drag.
NASA Astrophysics Data System (ADS)
Guoyan, Liu; Kun, Gao; Xuefeng, Liu; Guoqiang, Ni
2016-10-01
We report simulation and measurement results for near-field spatial scattering spectra around nanoparticles. Our measurements and simulations indicate that Parametric Indirect Microscopic Imaging can image the near-field spatial scattering at a much larger distance from the scattering source of the particle under measurement, whereas this part of the spatial scattering is lost in conventional microscopy. Both FDTD modeling and measurement provide evidence that indirect optical wave-vector parameters have higher sensitivity to near-field scattering.
Neelakantan, S; Veng-Pedersen, P
2005-11-01
A novel numerical deconvolution method is presented that enables the estimation of drug absorption rates under time-variant disposition conditions. The method involves two components: (1) a disposition decomposition-recomposition (DDR) that enables exact changes in the unit impulse response (UIR) to be constructed from iteratively determined, centrally based clearance changes; and (2) a non-parametric, end-constrained cubic spline (ECS) input response function estimated by cross-validation. The proposed DDR-ECS method compensates for disposition changes between the test and the reference administrations by using a "beta" clearance correction based on DDR analysis. The representation of the input response by the ECS method takes into consideration the complex absorption process and also ensures physiologically realistic approximations of the response. The stability of the new method to noisy data was evaluated by comprehensive simulations that considered different UIRs, various input functions, clearance changes, and a novel scaling of the input function that includes the "flip-flop" absorption phenomenon. The simulated input response was also analysed by two other methods and all three methods were compared for their relative performances. The DDR-ECS method provides better estimation of the input profile under significant clearance changes but tends to overestimate the input when there are only small changes in the clearance.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
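The robust-regression ingredient can be sketched as iteratively reweighted least squares with Huber weights (Python/NumPy; a generic M-estimation sketch with an illustrative outlier-contaminated line fit, not the authors' neuroimaging implementation, which operates voxelwise within the general linear model):

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Robust linear regression: iteratively reweighted least squares
    with Huber weights, which downweight large-residual observations."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust residual scale via the median absolute deviation (MAD).
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.abs(r / s)
        w = delta / np.maximum(u, delta)              # 1 inside, delta/u outside
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)    # weighted normal equations
    return beta

# Line y = 1 + 2x with one gross outlier: the robust fit should recover
# the clean slope and intercept that OLS would distort.
x = np.arange(10.0)
X = np.column_stack([np.ones(10), x])
y = 1.0 + 2.0 * x
y[9] += 30.0
beta = huber_irls(X, y)
```

The same reweighting idea carries over to robust inference: downweighted observations contribute less to the fitted coefficients and hence to the test statistics built on them.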
Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony
2009-01-01
Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space and the complex, highly coupled, nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated is automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours are used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
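The n-factor idea (here n = 2, i.e. pairwise coverage) can be sketched with a simple greedy generator (Python; a textbook stand-in for the combinatorial selection described above — production tools use far more sophisticated generators and would not enumerate the full factorial as a candidate pool):

```python
from itertools import combinations, product

def pairwise_suite(levels):
    """Greedy 2-way covering suite: every value pair of every parameter
    pair appears in at least one selected test case."""
    params = list(range(len(levels)))
    # All (parameter-pair, value-pair) combinations still to be covered.
    uncovered = {(p, q, a, b)
                 for p, q in combinations(params, 2)
                 for a in levels[p] for b in levels[q]}
    suite = []
    for case in product(*levels):          # candidate pool: full factorial
        hit = {(p, q, case[p], case[q])
               for p, q in combinations(params, 2)} & uncovered
        if hit:                            # keep only cases covering new pairs
            suite.append(case)
            uncovered -= hit
        if not uncovered:
            break
    return suite
```

For three two-level parameters the suite covers all 12 parameter-value pairs with fewer cases than the 8-case full factorial; the savings grow rapidly with the number of parameters.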
Tavecchio, Michele; Simone, Matteo; Bernasconi, Sergio; Tognon, Gianluca; Mazzini, Giuliano; Erba, Eugenio
2008-01-01
TO-PRO-3 iodide (TP3), a monomeric cyanine nucleic acid stain with a peak absorbance at 642 nm and emission at 661 nm, is best excited by a helium-neon (HeNe) laser (633nm). It was tested in monocytes and different cell lines under conditions of different fixatives, dye concentrations, labeling kinetics and RNAse concentrations for mono-, bi- and tri-parametric flow cytometric cell cycle analysis to establish the best protocol for DNA analysis in terms of G1 peak CV, G2/G1 ratio and minimal amount of debris. A linear increase in G1 peak position was found from 0.1 to 2 microM TP3 concentrations. Fixatives 70% ethanol or 1% methanol-free formaldehyde, followed by 70% ethanol, resulted in the best DNA histograms. Although different protocols were found to be cell-type specific, in general, excellent results were obtained with 30 min incubation with 0.5 microM TP3 plus RNAse in almost all cell lines tested. These data show that TP3 is an alternative method to propidium iodide (PI), the most commonly used DNA-specific probe in flow cytometry. The most important advantage of using TP3 in combination with other fluorochromes, such as fluorescein isothiocyanate (FITC) or phycoerythrin (PE) in bi- or tri-parametric flow cytometric analysis, is that there is no need for fluorescence compensation for the TP3 signals. PMID:18160099
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job-function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
Non-parametric trend analysis of water quality data of rivers in Kansas
Yu, Y.-S.; Zou, S.; Whittemore, D.
1993-01-01
Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basin-wide trends are non-homogeneous. © 1993.
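One widely used non-parametric trend test for water-quality series of this kind is the Mann-Kendall test (the abstract does not name the four methods used, so this is an illustrative sketch of a standard choice, Python stdlib only, no tie correction):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test. Returns the S statistic and its normal
    approximation Z; Z < 0 indicates a downward trend, Z > 0 upward."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0   # variance assuming no ties
    # Continuity-corrected normal approximation.
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var)
    return s, z
```

A strictly decreasing 10-point series gives S = -45 and Z ≈ -3.94, a strongly significant downward trend.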
Arisholm, Gunnar
2007-05-14
Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures show the combinations of material parameters and pulse lengths for which high performance can be achieved, and they can serve as a basis for a design.
Eberhard, B.J.; Harbour, J.R.; Plodinec, M.J.
1994-06-01
As part of the DWPF Startup Test Program, a parametric study has been performed to determine a range of welder operating parameters which will produce acceptable final welds for canistered waste forms. The parametric window of acceptable welds defined by this study is 90,000 ± 15,000 lb of force, 248,000 ± 22,000 amps of current, and 95 ± 15 cycles (at 60 cps) for the time of application of the current.
Wolfenstein parametrization reexamined
Xing, Z.
1995-04-01
The Wolfenstein parametrization of the 3×3 Kobayashi-Maskawa (KM) matrix V is modified by keeping its unitarity up to an accuracy of O(λ^6). This modification can self-consistently lead to the off-diagonal asymmetry of V, |V_ij|…
Vigan, Marie; Stirnemann, Jérôme; Mentré, France
2014-05-01
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate properties of tests of a dichotomous covariate on occurrence of events. The simulation setting is inspired from an analysis of occurrence of bone events after the initiation of treatment by imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: no, low, or high. Several values of the baseline hazard, variability, number of subjects, and effect of covariate were studied. For each scenario, 100 datasets were simulated for estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and AGQ algorithms provided good estimates of parameters, much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. Probability of having at least one event under treatment was 19.1%. PMID:24570340
Robust non-parametric one-sample tests for the analysis of recurrent events.
Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia
2010-12-30
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population.
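The unweighted core of such a statistic can be sketched as the standardized distance between observed and expected event counts under a constant reference rate (Python stdlib only; the data and the Poisson-type variance are illustrative assumptions — the paper's tests use general weights and robust variance estimates not reproduced here):

```python
import math

def one_sample_events_test(counts, followup, ref_rate):
    """Unit-weight one-sample statistic for a recurrent-event process:
    standardized distance between the observed number of events and the
    number expected under a specified reference event rate."""
    observed = sum(counts)                    # total events across subjects
    expected = ref_rate * sum(followup)       # reference cumulative mean
    z = (observed - expected) / math.sqrt(expected)  # Poisson-type variance
    return z
```

For example, 4 subjects each followed 2 time units with 3, 1, 2, and 0 events against a reference rate of 0.5 events per unit time give observed 6 vs expected 4, i.e. z = 1.0, far from significant.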
A parametric shell analysis of the shuttle 51-L SRB AFT field joint
NASA Technical Reports Server (NTRS)
Davis, Randall C.; Bowman, Lynn M.; Hughes, Robert M., IV; Jackson, Brian J.
1990-01-01
Following the Shuttle 51-L accident, an investigation was conducted to determine the cause of the failure. Investigators at the Langley Research Center focused attention on the structural behavior of the field joints with O-ring seals in the steel solid rocket booster (SRB) cases. The shell-of-revolution computer program BOSOR4 was used to model the aft field joint of the solid rocket booster case. The shell model consisted of the SRB wall and joint geometry present during the Shuttle 51-L flight. A parametric study of the joint was performed on the geometry, including joint clearances and contact between the joint components, and on the loads, induced and applied. In addition, combinations of geometry and loads were evaluated. The analytical results from the parametric study showed that contact between the joint components was a primary contributor to allowing hot gases to blow by the O-rings. Based upon understanding of the original joint behavior, various proposed joint modifications are shown and analyzed in order to provide additional insight and information. Finally, experimental results from a hydrostatic pressurization of a test rocket booster case to study joint motion are presented and verified analytically.
Parametric Analysis of Cyclic Phase Change and Energy Storage in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Hall, Carsie A., III; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.
1997-01-01
A parametric study on cyclic melting and freezing of an encapsulated phase change material (PCM), integrated into a solar heat receiver, has been performed. The cyclic nature of the present melt/freeze problem is relevant to latent heat thermal energy storage (LHTES) systems used to power solar Brayton engines in microgravity environments. Specifically, a physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) project was developed. Multi-conjugate effects such as the convective fluid flow of a low-Prandtl-number fluid, coupled with thermal conduction in the phase change material, containment tube and working fluid conduit, were accounted for in the model. A single-band thermal radiation model was also included to quantify reradiative energy exchange inside the receiver and losses through the aperture. The eutectic LiF-CaF2 was used as the phase change material (PCM) and a mixture of He/Xe was used as the working fluid coolant. A modified version of the computer code HOTTube was used to generate results in the two-phase regime. Results indicate that receiver gas exit temperatures are most sensitive to parametric changes in receiver gas inlet temperature and receiver heat input.
NASA Astrophysics Data System (ADS)
Cosmidis, J.; Heggy, E.; Clifford, S. M.
2007-12-01
Laboratory dielectric characterization of ice-dust mixtures is crucial for the quantitative analysis of radar sounding data, as in the case of the MARSIS and SHARAD experiments. Understanding the range of the dielectric properties of the Martian northern polar layered deposits (NPLD), as well as their geographical and vertical distribution, results in better topographical mapping of the basement material below the northern polar cap and helps constrain the ambiguities in the identification of layering and any potential subglacial melting. To achieve this task, we constructed first-order modeled maps of the surface dielectric properties of the NPLD. We first used the recent Mars Global Surveyor Thermal Emission Spectrometer (TES) thermal inertia observations to derive a map of the dust mass fraction in the ice at the top of the permanent cap. Then we used parametric laboratory measurements of the dielectric properties of Martian polar ice analogs at various temperatures, radar frequencies, and dust mass fractions and compositions to obtain the parametric dielectric maps. Thermal inertia maps have been derived from recent TES observations of the surface temperatures of Mars taken over three Mars years, from orbit 1583 to 24346. Laboratory dielectric characterization of ice-dust mixtures has been performed using TES dust calibration samples provided by the ARES group at NASA JSC. Our maps suggest that the surface dielectric properties of the northern polar cap range from 2.72 to 3.23 in the 2-20 MHz band for a dust inclusion typical of Martian basalt. Parametric maps of loss tangent and penetration depth for several dust types will be presented at the conference.
Mao, Pengcheng; Wang, Zhuan; Dang, Wei; Weng, Yuxiang
2015-12-15
Superfluorescence appears as an intense background in femtosecond time-resolved fluorescence noncollinear optical parametric amplification spectroscopy, which severely interferes with the reliable acquisition of the time-resolved fluorescence spectra, especially for an optically dilute sample. Superfluorescence originates from the optical amplification of the vacuum quantum noise, which is inevitably concomitant with the amplified fluorescence photons during the optical parametric amplification process. Here, we report the development of a femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectrometer assisted with a 32-channel lock-in amplifier for efficient rejection of the superfluorescence background. With this spectrometer, the superfluorescence background signal can be significantly reduced to 1/300–1/100 when the seeding fluorescence is modulated. An integrated 32-bundle optical fiber is used as a linear array light receiver connected to 32 photodiodes in one-to-one mode, and the photodiodes are further coupled to a home-built 32-channel synchronous digital lock-in amplifier. As an implementation, time-resolved fluorescence spectra for rhodamine 6G dye in ethanol solution at an optically dilute concentration of 10^-5 M excited at 510 nm with an excitation intensity of 70 nJ/pulse have been successfully recorded, and the detection limit at a pump intensity of 60 μJ/pulse was determined to be about 13 photons/pulse. A concentration-dependent redshift starting at 30 ps after excitation has also been observed in the time-resolved fluorescence spectra of this dye, which can be attributed to the formation of the excimer at higher concentration, while the blueshift at earlier times, within 10 ps, is attributed to the solvation process.
Open cycle OTEC thermal-hydraulic systems analysis and parametric studies
NASA Astrophysics Data System (ADS)
Parsons, B.; Bharathan, D.; Althof, J.
1984-06-01
An analytic thermohydraulic systems model of the power cycle and seawater supply systems for an open cycle ocean thermal energy conversion (OTEC) plant has been developed that allows ready examination of the effects of system and component operating points on plant size and parasitic power requirements. This paper presents the results of three parametric studies on the effects of system temperature distribution, plant gross electric capacity, and the allowable seawater velocity in the supply and discharge pipes. The paper also briefly discusses the assumptions and equations used in the model and the state-of-the-art component limitations. The model provides a useful tool for an OTEC plant designer to evaluate system trade-offs and define component interactions and performance.
A parametric study of supersonic laminar flow for swept wings using linear stability analysis
NASA Technical Reports Server (NTRS)
Cummings, Russell M.; Garcia, Joseph A.; Tu, Eugene L.
1995-01-01
A parametric study to predict the extent of laminar flow on the upper surface of a generic swept-back wing (NACA 64A010 airfoil section) at supersonic speeds was conducted. The results were obtained by using surface pressure predictions from an Euler/Navier-Stokes computational fluid dynamics code coupled with a boundary layer code, which predicts detailed boundary layer profiles, and finally with a linear stability code to determine the extent of laminar flow. The parameters addressed are Reynolds number, angle of attack, and leading-edge wing sweep. The results of this study show that an increase in angle of attack, for specific Reynolds numbers, can actually delay transition. Therefore, higher lift capability, caused by the increased angle of attack, as well as a reduction in viscous drag due to the delay in transition is possible for certain flight conditions.
NASA Technical Reports Server (NTRS)
To, Wing H.
2005-01-01
Quantum optical experiments require all the components involved to be extremely stable relative to each other. The stability can be "measured" by using an interferometric experiment. A pair of coherent photons produced by parametric down-conversion could be chosen to be orthogonally polarized initially. By rotating the polarization of one of the wave packets, they can be recombined at a beam splitter such that interference will occur. Theoretically, the interference creates four terms in the wave function: two terms with both photons going to the same detector, and two terms with the photons going to different detectors. However, the latter cancel each other out, so no photons will arrive at the two detectors simultaneously under ideal conditions. The stability of the test-bed can then be inferred from the dependence of the coincidence count on the rotation angle.
Silver-Thorn, M B; Childress, D S
1996-07-01
A finite element (FE) model of the below-knee residual limb and prosthetic socket was created to investigate the effects of parameter variations on the interface stress distribution during static stance. This model was based upon geometric approximations of anthropometric residual limb geometry. The model was not specific to an individual with amputation, but could be scaled to approximate the limb of a particular subject. Parametric analyses were conducted to investigate the effects of prosthetic socket design and residual limb geometry on the residual limb/prosthetic socket interface stresses. Behavioral trends were illustrated via sensitivity analysis. The results of the parametric analyses indicate that the residual limb/prosthetic socket interface stresses are affected by variations in both prosthetic design and residual limb geometry. Specifically, the analyses indicate: 1) the residual limb/prosthetic liner interface pressures are relatively insensitive to the socket stiffness; 2) the stiffness of the prosthetic liner influences the interface stress distribution for both the unrectified and patellar-tendon-bearing (PTB) rectified models; the external load state appears to influence the interface pressure distribution, while the prosthetic socket rectification appears to influence the interface shear stress distribution; 3) the interface pressures are very sensitive to the prosthetic rectification; 4) the shape and relative bulk of soft tissue may significantly influence the interface pressure distribution; 5) the interface pressure distribution is also influenced by the residual limb length; and 6) the stiffness/compliance of the residual limb soft tissues may significantly alter the interface pressure distribution.
Parametric modeling for quantitative analysis of pulmonary structure to function relationships
NASA Astrophysics Data System (ADS)
Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.
2005-04-01
While lung anatomy is well understood, pulmonary structure-to-function relationships such as the complex elastic deformation of the lung during respiration are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability of the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration which correlate high-resolution anatomy to underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifest by alterations in normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial methods which quantitatively illustrates detailed anatomic structure to pulmonary function relationships impossible for traditional methods to provide.
Permutations and time series analysis.
Cánovas, Jose S; Guillamón, Antonio
2009-12-01
The main aim of this paper is to show how permutations can be useful in time series analysis. In particular, we introduce a test for checking the independence of a time series which is based on the number of admissible permutations in it. The main improvement in our test is that we are able to give a theoretical distribution for independent time series.
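The flavor of a permutation-based independence test can be sketched as follows: under an i.i.d. null, every ordinal pattern of order m is equally likely, so a chi-square-style distance from uniformity flags dependence. This is an illustrative sketch of the general idea, not the authors' exact statistic or its theoretical distribution.

```python
import math
from collections import Counter
from itertools import permutations

def ordinal_pattern_counts(x, m):
    """Count the ordinal patterns (order-m permutations) occurring in series x.
    Each length-m window is mapped to the permutation that sorts it."""
    counts = Counter()
    for i in range(len(x) - m + 1):
        window = x[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] += 1
    return counts

def chi2_independence_stat(x, m):
    """Chi-square-style statistic against the uniform distribution over the
    m! patterns expected for an independent (i.i.d.) series; a monotone
    series concentrates all mass on one pattern and scores high."""
    counts = ordinal_pattern_counts(x, m)
    n = len(x) - m + 1
    expected = n / math.factorial(m)
    return sum((counts[p] - expected) ** 2 / expected
               for p in permutations(range(m)))
```

For a strongly dependent (e.g. monotone) series many of the m! patterns are inadmissible, inflating the statistic.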
NASA Astrophysics Data System (ADS)
Alexander, Rafael N.; Wang, Pei; Sridhar, Niranjan; Chen, Moran; Pfister, Olivier; Menicucci, Nicolas C.
2016-09-01
One-way quantum computing is experimentally appealing because it requires only local measurements on an entangled resource called a cluster state. Record-size, but nonuniversal, continuous-variable cluster states were recently demonstrated separately in the time and frequency domains. We propose to combine these approaches into a scalable architecture in which a single optical parametric oscillator and simple interferometer entangle up to (3 × 10³ frequencies) × (unlimited number of temporal modes) into a computationally universal continuous-variable cluster state. We introduce a generalized measurement protocol to enable improved computational performance on this entanglement resource.
NASA Astrophysics Data System (ADS)
Schneberk, D.
1985-07-01
The analysis component of the Enrichment Diagnostic System (EDS) developed for the Atomic Vapor Laser Isotope Separation Program (AVLIS) at Lawrence Livermore National Laboratory (LLNL) is described. Four different types of analysis are performed on data acquired through EDS: (1) absorption spectroscopy on laser-generated spectral lines, (2) mass spectrometer analysis, (3) general purpose waveform analysis, and (4) separation performance calculations. The information produced from this data includes: measures of particle density and velocity, partial pressures of residual gases, and overall measures of isotope enrichment. The analysis component supports a variety of real-time modeling tasks, a means for broadcasting data to other nodes, and a great degree of flexibility for tailoring computations to the exact needs of the process. A particular data base structure and program flow is common to all types of analysis. Key elements of the analysis component are: (1) a fast access data base which can configure all types of analysis, (2) a selected set of analysis routines, (3) a general purpose data manipulation and graphics package for the results of real time analysis.
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.; Yool, A.
2014-09-01
Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to compensate for missing biological complexity. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely-used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of target model output. In general, chlorophyll records at a representative array of oceanic sites
NASA Astrophysics Data System (ADS)
Lecomte, E.; Le Pourhiet, L.; Lacombe, O.; Jolivet, L.
2009-04-01
Widespread occurrences of low-angle normal faults have been described within the extending continental crust since their discovery in the Basin and Range province. Although a number of field observations suggest that sliding may occur at very shallow dip in the brittle field, the seismic activity related to such normal faults is nearly nonexistent and agrees with the locking angle of 30° predicted from Andersonian fault mechanics associated with Byerlee's law. To understand this apparent contradiction, we have introduced a Mohr-Coulomb plastic flow rule within the inherited low-angle faults, where former studies were limited to a yield criterion. The fault is considered as a pre-existing compacting or dilating plane with a shallow dip (0-45°) embedded in a brittle medium. Following Anderson's theory, we assume that the maximal principal stress is vertical and equal to the lithostatic pressure. This approximation may not be true for small faults, but it holds for large detachment faults where associated joints are generally vertical. With this model, we can predict not only whether new brittle features form in the surroundings of the low-angle normal faults but also the complete stress-strain evolution both within the fault and in its surroundings. Moreover, the introduction of a flow rule within the fault allows brittle strain to occur on very badly oriented faults (dip < 30°) before yielding occurs in the surrounding medium. After performing a full parametric study, we find that the reactivation of low-angle normal faults depends primarily on the friction angle of the fault material and the ratio of the cohesion between the shear band and its surroundings. Our model is therefore in good agreement with previous simpler models, and the locking angles obtained differ in most cases by only 2 or 3° from previous yield-criterion-based approaches, which did explain most of the data, especially the distribution of focal mechanisms worldwide. However, we find that in some cases
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
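A minimal sketch of a non-parametric tremor detector of the kind evaluated above: score each accelerometer segment by the fraction of its spectral power inside a tremor band and threshold the score. The band limits (4-12 Hz) and threshold are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def tremor_band_ratio(segment, fs, band=(4.0, 12.0)):
    """Fraction of signal power inside the tremor band.
    segment: accelerometer samples; fs: sampling rate in Hz."""
    seg = np.asarray(segment, dtype=float)
    seg = seg - seg.mean()                    # remove DC/gravity offset
    psd = np.abs(np.fft.rfft(seg)) ** 2       # one-sided power spectrum
    freqs = np.fft.rfftfreq(seg.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = psd[1:].sum()                     # exclude the DC bin
    return psd[in_band].sum() / total if total > 0 else 0.0

def detect_tremor(segment, fs, threshold=0.5):
    """Flag a segment as tremor when band power dominates the spectrum."""
    return tremor_band_ratio(segment, fs) >= threshold
```

Only segments flagged this way would then be passed on for clinical evaluation, saving review time on long daily-life recordings.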
Han, Xiujing; Bi, Qinsheng; Ji, Peng; Kurths, Jürgen
2015-07-01
We present a general method for analyzing mixed-mode oscillations (MMOs) in parametrically and externally excited systems with two low excitation frequencies (PEESTLEFs) for the case of arbitrary m:n relation between the slow frequencies of excitations. The validity of the approach has been demonstrated using the equations of Duffing and van der Pol, separately. Our study shows that, by introducing a slow variable and finding the relation between the slow variable and the slow excitations, PEESTLEFs can be transformed into a fast-slow form with a single slow variable and therefore MMOs observed in PEESTLEFs can be understood by the classical machinery of fast subsystem analysis of the transformed fast-slow system. PMID:26274251
NASA Astrophysics Data System (ADS)
Wang, Ning; Li, Yongmin
2016-01-01
We developed a quantum analysis of the nondegenerate optical parametric oscillator (NOPO) with unequally injected signal and idler. Both the steady-state output field and the two-mode quantum correlation spectrum are investigated under the condition of different injected idler-to-signal ratios (ISRs) and the relative phase between the pump and the injected seed. It is found that when the seed is injected through the output coupler, the NOPO allows for the robust generation of two-mode quantum entanglement even if the relative phase is free running and the ISR is as high as 0.7. At the specific relative phase of zero, a high degree of entanglement can exist across a whole range of ISRs. An experimental study of the NOPO with unequal seeds is presented, and the observed results verify the theoretical predictions.
NASA Astrophysics Data System (ADS)
Allan, Alasdair
2014-06-01
FROG performs time series analysis and display. It provides a simple user interface for astronomers wanting to do time-domain astrophysics but still offers the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulation of time series. Among other things, the user can combine individual time series, detrend series (multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.
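A simple polynomial detrend, one of the more basic operations a time-series tool like FROG offers, can be sketched as follows (the exact detrending methods FROG implements may differ; this is an illustrative version).

```python
import numpy as np

def poly_detrend(times, values, degree=1):
    """Remove a low-order polynomial trend from a time series:
    fit a degree-`degree` polynomial and subtract it, leaving the
    residual variability (e.g. for period searching)."""
    coeffs = np.polyfit(times, values, degree)
    trend = np.polyval(coeffs, times)
    return np.asarray(values) - trend

# A pure linear drift detrends to (numerically) zero:
t = np.arange(10.0)
residual = poly_detrend(t, 2.0 * t + 3.0)
```

Higher-degree fits remove slower instrumental drifts at the risk of absorbing real long-period signal, which is why interactive inspection matters.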
NASA Technical Reports Server (NTRS)
Housner, J. M.; Stein, M.
1975-01-01
A computer program is presented which was developed for the combined compression and shear buckling of stiffened, variable-thickness, orthotropic composite panels resting on discrete springs; boundary conditions are general and include elastic boundary restraints. Buckling solutions are obtained by using a newly developed trigonometric finite difference procedure which improves the solution convergence rate over conventional finite difference methods. The classical general shear buckling results, which exist only for simply supported panels over a limited range of orthotropic properties, were extended to the complete range of these properties for simply supported panels and, in addition, to the complete range of orthotropic properties for clamped panels. The program was also applied to parametric studies which examine the effect of filament orientation upon the buckling of graphite-epoxy panels. These studies included an examination of the filament orientations which yield maximum shear or compressive buckling strength for panels having all four edges simply supported or clamped over a wide range of aspect ratios. Panels with such orientations had higher buckling loads than comparable, equal-weight, thin-skinned aluminum panels. Also included among the parametric studies were examinations of combined axial compression and shear buckling and examinations of panels with rotational elastic edge restraints.
A Novel Parametric Analysis of a Conventional Silica-Gel Water Adsorption Chiller
NASA Astrophysics Data System (ADS)
Alam, K. C. A.; Saha, B. B.; Akisawa, Atsushi; Kashiwagi, Takao
A conventional adsorption chiller with silica gel as adsorbent and water as adsorbate has been analyzed numerically. In the present study, a non-dimensional simulation model is presented and a set of non-dimensional parameters for the conventional adsorption chiller is derived. The simulation results are confirmed by the experimental results. Simulation results show that switching speed is the most influential parameter and that there is an optimum switching speed for cooling capacity and COP. Results also show that system performance (cooling capacity and COP) is strongly affected by the number of transfer units of the adsorber/desorber, NTUa, due to severe sensible heating/cooling requirements. The model is somewhat sensitive to the number of transfer units of the evaporator, NTUe. The number of transfer units of the condenser, NTUc, is the least sensitive parameter. Finally, a parametric investigation is made to obtain the optimum value of the switching speed and the number of transfer units, NTU, of the different components. The present model can be employed to analyze and to optimize adsorption cooling/heat pump systems.
Multi-parametric heart rate analysis in premature babies exposed to sudden infant death syndrome.
Lucchini, Maristella; Signorini, Maria G; Fifer, William P; Sahni, Rakhesh
2014-01-01
Severely premature babies present a higher risk profile than the normal population. The reasons are related to the incomplete development of the physiological systems that support the baby's life. Heart Rate Variability (HRV) analysis can help the identification of distress conditions as it is sensitive to Autonomic Nervous System (ANS) behavior. This paper presents results obtained in 35 babies with severe prematurity, in quiet and active sleep and in prone and supine position. HRV was analyzed in the time and frequency domains and with nonlinear parameters. The novelty of this approach lies in the combined use of parameters generally adopted in fetal monitoring and "adult" indices. Results show that most parameters succeed in classifying different experimental conditions. This is very promising, as our final objective is to identify a set of parameters that could be the basis for a risk classifier to improve the care path of the premature population. PMID:25571458
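Two of the standard time-domain HRV indices mentioned above can be computed directly from a sequence of RR intervals. This is a minimal sketch of the textbook definitions; the paper combines these with frequency-domain and nonlinear parameters not shown here.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms): overall variability."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms):
    short-term, vagally mediated variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: four RR intervals in milliseconds.
rr = [800.0, 810.0, 790.0, 805.0]
```

Depressed values of such indices relative to sleep-state-specific norms are the kind of feature a risk classifier could use.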
2013-01-01
Background Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. Results We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as “pathwise”. The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. Conclusions As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the
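The pathwise-FIM idea, averaging propensity-weighted squared gradients of the log-propensities along a stochastic trajectory, can be sketched for the simplest reaction network, a birth-death process with propensities a1 = lam and a2 = mu*x. This is an illustrative Monte Carlo sketch under those assumptions, not the paper's general Relative Entropy Rate machinery; only the diagonal FIM entries are accumulated here.

```python
import random

def pathwise_fim_birth_death(lam, mu, t_end, seed=0):
    """Estimate the diagonal pathwise Fisher Information Matrix entries for a
    birth-death network along a Gillespie (SSA) trajectory.  Each F_kk is the
    time average of a_k * (d log a_k / d theta_k)^2, which is directly
    computable from the propensity functions (no gradients of the solution)."""
    rng = random.Random(seed)
    x, t = 0, 0.0
    f11 = f22 = 0.0
    while t < t_end:
        a1, a2 = lam, mu * x
        total = a1 + a2
        dt = min(rng.expovariate(total), t_end - t)
        # accumulate the FIM integrand over the holding time
        f11 += a1 * (1.0 / lam) ** 2 * dt      # d log a1 / d lam = 1/lam
        if x > 0:
            f22 += a2 * (1.0 / mu) ** 2 * dt   # d log a2 / d mu = 1/mu
        t += dt
        if t >= t_end:
            break
        x += 1 if rng.random() < a1 / total else -1   # birth or death
    return f11 / t_end, f22 / t_end
```

Large FIM entries indicate parameters to which the path distribution is sensitive; near-zero entries flag poorly identifiable parameters, and the block structure noted in the abstract emerges because each entry involves only the reactions depending on that parameter.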
Dynamic modelling and stability parametric analysis of a flexible spacecraft with fuel slosh
NASA Astrophysics Data System (ADS)
Gasbarri, Paolo; Sabatini, Marco; Pisculli, Andrea
2016-10-01
Modern spacecraft often contain large quantities of liquid fuel to execute station keeping and attitude manoeuvres for space missions. In general the combined liquid-structure system is very difficult to model, and the analyses are based on some assumed simplifications. A realistic representation of the liquid dynamics inside closed containers can be approximated by an equivalent mechanical system. This technique can be considered a very useful mathematical tool for solving the complete dynamics problem of a space-system containing liquid. Such equivalent models are particularly useful when designing a control system or studying the stability margins of the coupled dynamics. The commonly used equivalent mechanical models are the mass-spring models and the pendulum models. As far as spacecraft modelling is concerned, the vehicle is usually considered rigid; i.e. no flexible appendages such as solar arrays or antennas are considered when dealing with the interaction of the attitude dynamics with the fuel slosh. In the present work the interactions among the fuel slosh, the attitude dynamics and the flexible appendages of a spacecraft are first studied via a classical multi-body approach. In particular the equations of attitude and orbit motion are first derived for the partially liquid-filled flexible spacecraft undergoing fuel slosh; then several parametric analyses are performed to study the stability conditions of the system during some assigned manoeuvres. The present study is propaedeutic for the synthesis of advanced attitude and/or station keeping control techniques able to minimize and/or reduce an undesired excitation of the satellite flexible appendages and of the fuel sloshing mass.
Parametric Analysis of a Turbine Trip Event in a BWR Using a 3D Nodal Code
Gorzel, A.
2006-07-01
Two essential thermal hydraulics safety criteria concerning the reactor core are that fuel melting does not occur even during operational transients and that impermissible cladding temperatures are avoided. A common concept for boiling water reactors is to establish a minimum critical power ratio (MCPR) for steady state operation. For this MCPR it is shown that only a very small number of fuel rods suffer a short-term dryout during the transient. It is known from experience that the limiting transient for the determination of the MCPR is the turbine trip with blocked bypass system. This fast transient was simulated for a German BWR by use of the three-dimensional reactor analysis transient code SIMULATE-3K. The transient behaviour of the hot channels was used as input for the dryout calculation with the transient thermal hydraulics code FRANCESCA. In this way the maximum reduction of the CPR during the transient could be calculated. The fast increase in reactor power due to the pressure increase and to an increased core inlet flow is limited mainly by the Doppler effect, but automatically triggered operational measures can also contribute to the mitigation of the turbine trip. One very important method is the short-term fast reduction of the recirculation pump speed, which is initiated e.g. by a pressure increase in front of the turbine. The large impacts of the starting time and of the rate of the pump speed reduction on the power progression, and hence on the deterioration of the CPR, are presented. Another important procedure to limit the effects of the transient is the fast shutdown of the reactor, which is triggered when the reactor power reaches the limit value. It is shown that the SCRAM is not fast enough to reduce the first power maximum, but is able to prevent the appearance of a second - much smaller - maximum that would occur around one second after the first one in the absence of a SCRAM. (author)
Introduction to Time Series Analysis
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1986-01-01
The field of time series analysis is explored from its logical foundations to the most modern data analysis techniques. The presentation is developed, as far as possible, for continuous data, so that the inevitable use of discrete mathematics is postponed until the reader has gained some familiarity with the concepts. The monograph seeks to provide the reader with both the theoretical overview and the practical details necessary to correctly apply the full range of these powerful techniques. In addition, the last chapter introduces many specialized areas where research is currently in progress.
Ruiz-Sanchez, Eduardo
2015-12-01
The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to estimate divergence times in Otatea. The results for divergence time in Otatea estimated the origin of the speciation events from the Late Miocene to Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified that the two populations of O. acuminata from Chiapas and Hidalgo are from two separate evolutionary lineages and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited the gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata.
The LTS timing analysis program
Armstrong, Darrell Jewell; Schwarz, Jens
2013-08-01
The LTS Timing Analysis program described in this report uses signals from the Tempest Lasers, Pulse Forming Lines, and Laser Spark Detectors to carry out calculations to quantify and monitor the performance of the Z-Accelerator's laser-triggered SF6 switches. The program analyzes Z-shots beginning with Z2457, when Laser Spark Detector data became available for all lines.
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N. K. G.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre- and post-radiotherapy (RT) functional images. Methods: Arterial blood flow maps (ABF) were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification posed challenges for PRM analysis in the liver where registration accuracy tends to be lower. Quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
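The voxel-wise classification underlying a PRM can be sketched as follows. This is a minimal stand-in using a fixed threshold on the post-minus-pre difference; published PRM studies typically derive the threshold from test-retest repeatability, and the arrays below are invented for illustration.

```python
import numpy as np

def parametric_response_map(pre, post, thresh):
    """Label each voxel +1 (increase), -1 (decrease) or 0 (no change)
    by comparing the post - pre difference against a fixed threshold."""
    diff = post - pre
    prm = np.zeros_like(diff, dtype=int)
    prm[diff > thresh] = 1
    prm[diff < -thresh] = -1
    return prm

pre = np.array([[10.0, 20.0], [30.0, 40.0]])
post = np.array([[18.0, 19.0], [21.0, 40.5]])
prm = parametric_response_map(pre, post, thresh=5.0)
# prm == [[1, 0], [-1, 0]]
```

Registration error displaces voxels before this comparison, which is exactly how the misclassification studied in the abstract arises.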
Christopoulou, Maria; Karabetsos, Efthymios
2015-04-01
From 2008 through 2013, more than 6,000 in situ frequency selective audits, in the proximity of base stations, were conducted throughout Greece by the Greek Atomic Energy Commission (EEAE), in order to verify exposure limit compliance. EEAE is the competent national authority for protection of the general public against artificially produced non-ionizing radiation. This paper presents the first post-processing and multi-parametric, per-year statistical analysis of in situ measurement data corresponding to 4,705 audits in the whole country, compared to general public exposure levels, according to Greek legislation. The aim is to derive nationwide conclusions for the characterization of general public exposure to radiofrequency electromagnetic fields, during the last 6 years. The results' presentation includes electric field exposure ratios referring to broadband and frequency selective measurements at the highest exposure measurement point. Statistical analysis is applied to assist the data presentation and evaluation, based on selected criteria and classification parameters, including: (i) year (2008-2013); (ii) environment (urban/suburban/rural); (iii) frequency bands of selected common telecommunication services (e.g., TV, FM, GSM, DCS, UMTS); and (iv) number of service providers installed at the same site. In general, measurement results revealed that the vast majority of exposure values were below reference levels for general public exposure, as defined by Greek legislation. Data are constantly updated with the latest measurements, including emerging wireless technologies. PMID:25726724
Askin, Amanda Christine; Barter, Garrett; West, Todd H.; Manley, Dawn Kataoka
2015-02-14
Here, we present a parametric analysis of factors that can influence advanced fuel and technology deployments in U.S. Class 7–8 trucks through 2050. The analysis focuses on the competition between traditional diesel trucks, natural gas vehicles (NGVs), and ultra-efficient powertrains. Underlying the study is a vehicle choice and stock model of the U.S. heavy-duty vehicle market. Moreover, the model is segmented by vehicle class, body type, powertrain, fleet size, and operational type. We find that conventional diesel trucks will dominate the market through 2050, but NGVs could have significant market penetration depending on key technological and economic uncertainties. Compressed natural gas trucks conducting urban trips in fleets that can support private infrastructure are economically viable now and will continue to gain market share. Ultra-efficient diesel trucks, exemplified by the U.S. Department of Energy's SuperTruck program, are the preferred alternative in the long haul segment, but could compete with liquefied natural gas (LNG) trucks if the fuel price differential between LNG and diesel increases. However, the greatest impact in reducing petroleum consumption and pollutant emissions comes from investing in efficiency technologies that benefit all powertrains, especially the conventional diesels that comprise the majority of the stock, instead of incentivizing specific alternatives.
NASA Technical Reports Server (NTRS)
Guerreiro, Nelson M.; Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Lewis, Timothy A.
2016-01-01
A loss-of-separation (LOS) is said to occur when two aircraft are spatially too close to one another. A LOS is the fundamental unsafe event to be avoided in air traffic management, and conflict detection (CD) is the function that attempts to predict these LOS events. In general, the effectiveness of conflict detection relates to the overall safety and performance of an air traffic management concept. An abstract, parametric analysis was conducted to investigate the impact of surveillance quality, level of intent information, and quality of intent information on conflict detection performance. The data collected in this analysis can be used to estimate the conflict detection performance under alternative future scenarios or alternative allocations of the conflict detection function, based on the quality of the surveillance and intent information under those conditions. Alternatively, these data could also be used to estimate the surveillance and intent information quality required to achieve some desired CD performance as part of the design of a new separation assurance system.
Zhang, Jing; Mitsis, Effie M; Chu, Kingwai; Newmark, Randall E; Hazlett, Erin A; Buchsbaum, Monte S
2010-01-01
In this study we investigated regional cerebral glucose metabolism abnormalities of [(18)F] fluorodeoxyglucose (FDG) positron emission tomography (PET) imaging in traumatic brain injury (TBI). PET images of 81 TBI patients and 68 normal controls were acquired and a word list learning task was administered during the uptake period. The TBI group included 35 patients with positive structural imaging (CT or MRI) findings soon after injury, 40 patients with negative findings, and 6 cases without structural imaging. Statistical parametric mapping (SPM) analysis was applied with several levels of spatial smoothing. Cluster counting analysis was performed for each subject to identify abnormal clusters with contiguous voxel values that deviated by two standard deviations or more from the mean of the normal controls, and to count the number of clusters in 10 size categories. SPM maps demonstrated that the 81 patients had significantly lower FDG uptake than normal controls, widely across the cortex (including bilateral frontal and temporal regions), and in the thalamus. Cluster counting results indicated that TBI patients had a higher proportion of larger clusters than controls. These large low-FDG-uptake clusters of the TBI patients were closer to the brain edge than those of controls. These results suggest that deficits of cerebral metabolism in TBI are spread over multiple brain areas, that they are closer to the cortical surface than clusters in controls, and that group spatial patterns of abnormal cerebral metabolism may be similar in TBI patients with cognitive deficits with and without obvious acute abnormalities identified on structural imaging.
NASA Astrophysics Data System (ADS)
Nguyen, Frédéric; Hermans, Thomas
2015-04-01
Inversion of time-lapse resistivity data yields 'snapshots' of changes occurring in monitored systems for applications such as aquifer storage, geothermal heat exchange, site remediation or tracer tests. Based on these snapshots, one can infer qualitative information on the location and morphology of changes occurring in the subsurface, as well as quantitative estimates of the degree of change in properties such as temperature or total dissolved solid content. Analysis of these changes can provide direct insight into flow and transport and associated processes and controlling parameters. However, the reliability of the analysis is dependent on survey geometry, measurement schemes, data error, and regularization. Survey design parameters may be optimized prior to the monitoring survey. Regularization, on the other hand, may be chosen depending on available information collected during the monitoring. Common approaches consider smoothing model changes both in space and time, but it is often necessary to recover a sharp temporal anomaly, for example in fractured aquifers. We here propose to use the alternative regularization approach based on minimum gradient support (MGS) (Zhdanov, 2002) for time-lapse surveys, which will focus the changes in tomogram snapshots. MGS will limit the occurrences of changes in electrical resistivity but will also restrict the variations of these changes inside the different zones. A difficulty commonly encountered by practitioners with this type of regularization is the choice of an additional parameter, the so-called β, required to define the MGS functional. To the best of our knowledge, there is no commonly accepted or standard methodology to optimize the MGS parameter β. The inversion algorithm used in this study is CRTomo (Kemna 2000). It uses a Gauss-Newton scheme to iteratively minimize an objective function which consists of a data misfit functional and a model constraint functional. A univariate line search is performed
Fields, S.R.
1980-11-26
The generation of the response spectra was coupled to a parametric and sensitivity analysis. Support accelerations and tiedown forces are presented as functions of time. The parametric analysis found that the horizontal acceleration of the support and the MAR (max absolute relative) horizontal acceleration are relatively insensitive, while the corresponding vertical accelerations are highly sensitive to changes in 4 of the 13 parameters, and the corresponding rotational accelerations are highly sensitive to changes in 8 of the 13 parameters. The tiedown forces are moderately sensitive to changes in 3 of the parameters. (DLC)
Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.
2012-01-01
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependence on alcohol sales against the health of the public. PMID:23393408
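For a single change point, the idea can be sketched with a profile-likelihood scan rather than the paper's partially collapsed MCMC: for each candidate change point the segment rates have closed-form maximum-likelihood estimates, and the candidate with the highest Poisson log-likelihood is kept. The data, offset, and rates below are synthetic and purely illustrative.

```python
import numpy as np

def fit_one_changepoint(y, offset):
    """Profile-likelihood fit of a Poisson model whose log baseline rate is
    a step function with one unknown change point."""
    n = len(y)
    best_tau, best_ll = 1, -np.inf
    for tau in range(1, n):                     # change between tau-1 and tau
        r1 = y[:tau].sum() / offset[:tau].sum() # segment rate MLEs
        r2 = y[tau:].sum() / offset[tau:].sum()
        mu = np.concatenate([r1 * offset[:tau], r2 * offset[tau:]])
        ll = np.sum(y * np.log(mu + 1e-12) - mu)  # log-lik up to a constant
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    tau = best_tau
    r1 = y[:tau].sum() / offset[:tau].sum()
    r2 = y[tau:].sum() / offset[tau:].sum()
    return tau, r1, r2

rng = np.random.default_rng(1)
offset = np.ones(200)                           # exposure term (constant here)
true_rate = np.where(np.arange(200) < 120, 2.0, 6.0)
y = rng.poisson(true_rate * offset)
tau, r1, r2 = fit_one_changepoint(y, offset)
# tau should land near 120, with r1 near 2 and r2 near 6
```

The paper's model generalizes this to an unknown number of change points, which is what makes the dimension of the model vary and motivates the partially collapsed sampler.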
Arahira, Shin; Murai, Hitoshi; Sasaki, Hironori
2016-08-22
In this paper we report the generation of wavelength-division-multiplexed, time-bin entangled photon pairs by using cascaded second-order optical nonlinearities (sum-frequency generation and subsequent spontaneous parametric downconversion) in a periodically poled LiNbO3 device. Visibilities of approximately 94% were clearly observed in two-photon interference experiments for all the wavelength-multiplexed channels under investigation (five pairs), with insensitivity to the polarization states of the photon pairs. We also evaluated the performance for quantum-key-distribution (QKD) applications by using four single-photon detectors, which enables proper evaluation of the QKD performance. The results showed long-term stability over 70 hours, maintaining a quantum error rate of approximately 3% and a sifted key rate of 110 bit/s. PMID:27557236
Parametric study of turbine blade platform friction damping using the lumped parameter analysis
NASA Technical Reports Server (NTRS)
Dominic, R. J.
1984-01-01
The hardware configuration used in the present study of turbine blade platform friction damping, by means of the lumped parameter analysis, is the first turbine stage of the Space Shuttle Main Engine's High Pressure Fuel Turbopump. The analysis procedure solves the nonlinear equations of motion for a turbine blade that is acted on by a platform friction damper, using an iterative matrix method. Attention is given to the effects on blade deflection response of variations in friction coefficient, the normal force on the friction surface interface, blade hysteretic damping, the blade-to-blade phase angle of the harmonic forcing function, and the amplitude of the forcing function.
Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; Sutton, Michael A.
2016-01-25
Dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e. the delayed equilibrium conditions can be quantified.
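A minimal 1D sketch of the non-parametric stress reconstruction described above, assuming a displacement field measured on a regular grid: acceleration comes from central differences in time, density from 1D mass conservation (rho = rho0 / (1 + du/dx)), and stress from integrating the inertia term rho*a away from an end where the stress is measured. Grid sizes and material values are invented for illustration.

```python
import numpy as np

def fullfield_stress_1d(u, x, dt, rho0, sigma_end):
    """From a displacement field u[t, i] on grid x: acceleration by central
    time differences, density by 1D mass conservation, and stress by
    trapezoidal integration of rho*a from x[0], where sigma = sigma_end."""
    a = (u[2:] - 2 * u[1:-1] + u[:-2]) / dt**2    # acceleration, interior times
    strain = np.gradient(u[1:-1], x, axis=1)      # du/dx at matching times
    rho = rho0 / (1.0 + strain)                   # mass conservation (1D)
    f = rho * a                                   # inertia force density
    dx = np.diff(x)
    inc = 0.5 * (f[:, 1:] + f[:, :-1]) * dx       # trapezoid increments
    sigma = sigma_end + np.concatenate(
        [np.zeros((f.shape[0], 1)), np.cumsum(inc, axis=1)], axis=1)
    return sigma

# rigid-body check: u(x,t) = 0.5*g*t^2 gives a = g and rho = rho0, so
# sigma(x) = sigma_end + rho0*g*(x - x[0])
x = np.linspace(0.0, 0.05, 51)                    # 50 mm specimen
t = np.arange(10) * 1e-5                          # 10 us frame interval
u = 0.5 * 9.81 * (t[:, None] ** 2) * np.ones_like(x)
sigma = fullfield_stress_1d(u, x, dt=1e-5, rho0=100.0, sigma_end=0.0)
```

The paper works with full 3D stereo-DIC fields; this 1D version only illustrates the chain displacement → acceleration → density → inertia stress.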
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
A multiscale approach to InSAR time series analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y. N.; Dicaprio, C.; Rickerby, A.
2008-12-01
We describe a new technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale analysis of InSAR Time Series), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. This approach also permits a consistent treatment of all data independent of the presence of localized holes in any given interferogram. In essence, MInTS allows one to consider all data at the same time (as opposed to one pixel at a time), thereby taking advantage of both spatial and temporal characteristics of the deformation field. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows for the temporal parametrization to include a set of general functions (e.g., splines) in order to account for unexpected processes. We allow for various forms of model regularization using a cross-validation approach to select penalty parameters. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate our results by comparing with ground-based observations.
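The temporal side of such an analysis, fitting each coefficient's time series to a dictionary of secular, seasonal, step, and transient functions, can be sketched with ordinary least squares. The basis functions and numbers here are illustrative, not MInTS's exact parametrization.

```python
import numpy as np

def fit_temporal_model(t, d, t_eq, tau=0.5):
    """Least-squares fit of a deformation time series to a small temporal
    dictionary: offset, secular rate, annual sinusoid, co-seismic step at
    t_eq, and a logarithmic post-seismic transient with timescale tau."""
    step = (t >= t_eq).astype(float)
    G = np.column_stack([
        np.ones_like(t),                                   # offset
        t,                                                 # secular rate
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),      # annual cycle
        step,                                              # co-seismic step
        step * np.log1p(np.maximum(t - t_eq, 0.0) / tau),  # post-seismic
    ])
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m, G @ m

t = np.linspace(0.0, 4.0, 200)          # time in years
d = 1.0 + 2.0 * t + 0.5 * np.sin(2 * np.pi * t) + 3.0 * (t >= 2.0)
m, fit = fit_temporal_model(t, d, t_eq=2.0)
# recovers the secular rate m[1] = 2.0 and the co-seismic step m[4] = 3.0
```

In MInTS this kind of fit is applied to wavelet coefficients rather than raw pixels, with regularization penalties chosen by cross-validation as the abstract describes.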
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
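As an example of a distribution-free procedure of the kind surveyed, a two-sample permutation test replaces the normal-theory t-test without any Gaussian assumption. The data below are synthetic.

```python
import numpy as np

def permutation_test(a, b, n_perm=5000, seed=0):
    """Two-sided, distribution-free test of equal location: the null
    distribution of the difference in group means is built by shuffling
    the pooled sample, so no normality assumption is required."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    observed = a.mean() - b.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm      # p-value (add-one correction omitted)

rng = np.random.default_rng(42)
a = rng.exponential(1.0, size=30)          # skewed, clearly non-Gaussian
b = rng.exponential(1.0, size=30) + 2.0    # same shape, shifted location
p = permutation_test(a, b)                 # small p: the shift is detected
```

Because only the observed labels are shuffled, the test is exact under the null of exchangeability, which is precisely the kind of assumption-light guarantee the survey contrasts with normal-theory procedures.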
Real-time analysis keratometer
NASA Technical Reports Server (NTRS)
Adachi, Iwao P. (Inventor); Adachi, Yoshifumi (Inventor); Frazer, Robert E. (Inventor)
1987-01-01
A computer-assisted keratometer in which a fiducial line pattern reticle illuminated by CW or pulsed laser light is projected on a corneal surface through lenses, a prismoidal beamsplitter, a quarterwave plate, and objective optics. The reticle surface is curved as a conjugate of an ideal corneal curvature. The fiducial image reflected from the cornea undergoes a polarization shift through the quarterwave plate and beamsplitter whereby the projected and reflected beams are separated and directed orthogonally. The reflected beam fiducial pattern forms a moire pattern with a replica of the first reticle. This moire pattern contains transverse aberration due to differences in curvature between the cornea and the ideal corneal curvature. The moire pattern is analyzed in real time by a computer, which displays either the CW moire pattern or a pulsed-mode analysis of the transverse aberration of the cornea under observation. With the eye focused on a plurality of fixation points in succession, a survey of the entire corneal topography is made and a contour map or three dimensional plot of the cornea can be made as a computer readout in addition to corneal radius and refractive power analysis.
Timing analysis by model checking
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed-if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
Rayleigh-type parametric chemical oscillation
Ghosh, Shyamolina; Ray, Deb Shankar
2015-09-28
We consider a nonlinear chemical dynamical system of two phase space variables in a stable steady state. When the system is driven by a time-dependent sinusoidal forcing of a suitable scaling parameter at a frequency twice the output frequency and the strength of perturbation exceeds a threshold, the system undergoes sustained Rayleigh-type periodic oscillation, well known for parametric oscillation in pipe organs and distinct from the usual forced quasiperiodic oscillation of a damped nonlinear system, where the system is oscillatory even in the absence of any external forcing. Our theoretical analysis of the parametric chemical oscillation is corroborated by full numerical simulation of two well known models of chemical dynamics, chlorite-iodine-malonic acid and iodine-clock reactions.
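The threshold behaviour described above can be illustrated with a linear damped oscillator under parametric modulation at twice its natural frequency (a Mathieu-type sketch: unlike the paper's Rayleigh-type oscillator it has no nonlinear saturation, so above threshold the response grows without bound; all parameter values are illustrative).

```python
import numpy as np

def parametric_amplitude(h, gamma=0.05, w0=1.0, t_end=100.0, dt=0.01):
    """RK4 integration of a damped oscillator with parametric modulation at
    twice its natural frequency,
        x'' + 2*gamma*x' + w0**2 * (1 + h*cos(2*w0*t)) * x = 0,
    returning the final envelope amplitude. The instability threshold is
    roughly h_th ~ 4*gamma/w0 (0.2 with these defaults)."""
    def f(t, y):
        x, v = y
        return np.array([v, -2*gamma*v - w0**2*(1 + h*np.cos(2*w0*t))*x])
    y, t = np.array([1.0, 0.0]), 0.0
    for _ in range(int(t_end / dt)):
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2 * k1)
        k3 = f(t + dt/2, y + dt/2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return float(np.hypot(y[0], y[1] / w0))  # envelope estimate

below = parametric_amplitude(h=0.10)   # below threshold: response decays
above = parametric_amplitude(h=0.40)   # above threshold: parametric growth
```

In the chemical systems of the abstract, the nonlinearity saturates this growth into a sustained oscillation at half the driving frequency.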
Moran, John L; Solomon, Patricia J
2007-06-01
In Part I, we reviewed graphical display and data summary, followed by a consideration of linear regression models. Generalised linear models, structured in terms of an exponential response distribution and link function, are now introduced, subsuming logistic and Poisson regression. Time-to-event ("survival") analysis is developed from basic principles of hazard rate, and survival, cumulative distribution and density functions. Semi-parametric (Cox) and parametric (accelerated failure time) regression models are contrasted. Time-series analysis is explicated in terms of trend, seasonal, and other cyclical and irregular components, and further illustrated by development of a classical Box-Jenkins ARMA (autoregressive moving average) model for monthly ICU-patient hospital mortality rates recorded over 11 years. Multilevel (random-effects) models and principles of meta-analysis are outlined, and the review concludes with a brief consideration of important statistical aspects of clinical trials: sample size determination, interim analysis and "early stopping".
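The survival quantities reviewed above have especially compact forms under the exponential model, where the hazard rate is constant and the survival, density, and cumulative distribution functions all follow from it. The sketch below, with invented follow-up data, computes the maximum likelihood estimate of a constant hazard rate from right-censored times and evaluates the corresponding survival function:

```python
import math

def exp_mle_rate(times, events):
    """MLE of the constant hazard rate for an exponential model
    with right censoring: lambda_hat = (# events) / (total time at risk)."""
    d = sum(events)   # number of observed events
    t = sum(times)    # total follow-up time
    return d / t

def survival(lam, t):
    """S(t) = exp(-lambda * t); the density is f(t) = lambda * S(t)
    and the cumulative distribution is F(t) = 1 - S(t)."""
    return math.exp(-lam * t)

# Hypothetical follow-up times (months) and event indicators (1 = event).
times  = [2.0, 5.0, 5.0, 8.0, 10.0]
events = [1,   1,   0,   1,   0]

lam = exp_mle_rate(times, events)    # 3 events / 30 months
print(round(lam, 3))                 # 0.1
print(round(survival(lam, 10.0), 3)) # 0.368
```

More flexible parametric families, such as the Weibull and the accelerated failure time models contrasted in the review, generalize this by letting the hazard vary with time.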
Computational Parametric Analysis of Mechanical Behaviors of Celotex Implanted with Glue Plates
Gong, C.
2001-02-20
The purpose of this analysis of Celotex implanted with glue plates is two-fold: first, to identify the cause of the initial stress peak in the pseudo engineering stress-strain curve from dynamic impact tests in which the impact is loaded parallel to the plane of the glue; second, to derive from the existing static mechanical properties the true constitutive properties of Celotex under dynamic impact and other environmental conditions, such as warm (250 degrees Fahrenheit), wet (100 percent relative humidity), cold (minus 40 degrees Fahrenheit), and desiccated.
Shiozawa, Kazue; Watanabe, Manabu; Ikehara, Takashi; Matsukiyo, Yasushi; Kogame, Michio; Shinohara, Mie; Kikuchi, Yoshinori; Shinohara, Masao; Igarashi, Yoshinori; Sumino, Yasukiyo
2016-02-01
We aimed to determine the usefulness of arrival time parametric imaging (AtPI) using contrast-enhanced ultrasonography (CEUS) with Sonazoid in the evaluation of early response to sorafenib for hepatocellular carcinoma (HCC). Thirteen advanced HCC patients with low alpha-fetoprotein (AFP) levels (≤35 ng/mL) who received sorafenib for at least 4 weeks were enrolled in this study. CEUS was performed before and after treatment (2 weeks), and the images of the target lesion in the arterial phase were analyzed by AtPI. In the color mapping images obtained by AtPI, the mean arrival time of the contrast agent in the target lesion from the starting point (mean time: MT) was calculated. In each patient, the MT before treatment and the MT 2 weeks after treatment were compared. Patients were assigned to the MT (+) group if the difference was 0 sec or greater (blood flow velocity of the lesion was reduced) and to the MT (-) group if it was less than 0 sec (blood flow velocity of the lesion was increased). Overall survival was compared between the two groups. In the MT (+) group (7 patients) and MT (-) group (6 patients), the median survival times were 307 and 208 days, respectively, a statistically significant difference. We suggest AtPI is useful for evaluating early response to sorafenib in advanced HCC patients with low AFP levels. PMID:27067685
Parametric studies of penetration events: a design and analysis of experiments approach.
Chiesa, Michael L.; Marin, Esteban B.; Booker, Paul M.
2005-02-01
A numerical screening study of the interaction between a penetrator and a geological target with a preformed hole has been carried out to identify the main parameters affecting the penetration event. The planning of the numerical experiment was based on the orthogonal array OA(18,7,3,2), which allows 18 simulation runs with 7 parameters at 3 levels each. Because the array has strength 2, two-factor interactions can also be studied. The seven parameters chosen for this study are: penetrator offset, hole diameter, hole taper, vertical and horizontal velocity of the penetrator, angle of attack of the penetrator, and target material. The analysis of the simulation results has been based on main effects plots and analysis of variance (ANOVA), and it has been performed using three metrics: the maximum values of the penetration depth, penetrator deceleration, and plastic strain in the penetrator case. This screening study shows that target material has a major influence on penetration depth and penetrator deceleration, while penetrator offset has the strongest effect on the maximum plastic strain.
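The main-effects analysis used in the screening study reduces, for each factor, to averaging the response metric over the runs at each of the factor's levels. A minimal sketch, using a hypothetical six-run fragment rather than the actual OA(18,7,3,2) data:

```python
def main_effects(levels, responses):
    """Average response at each level of one factor.

    levels    -- the factor's level index for each run
    responses -- the metric recorded for each run
    Returns {level: mean response}; in a main-effects plot these
    means are drawn against the factor levels.
    """
    sums, counts = {}, {}
    for lvl, y in zip(levels, responses):
        sums[lvl] = sums.get(lvl, 0.0) + y
        counts[lvl] = counts.get(lvl, 0) + 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sums}

# Hypothetical six-run fragment: one 3-level factor and one response
# metric (e.g., maximum penetration depth); values are invented.
factor = [0, 1, 2, 0, 1, 2]
depth  = [10.0, 12.0, 15.0, 11.0, 13.0, 16.0]
print(main_effects(factor, depth))  # {0: 10.5, 1: 12.5, 2: 15.5}
```

A large spread between the level means, relative to the run-to-run noise, is what ANOVA then formalizes into a significance statement for that factor.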
Parametric Analysis of NO2 Gas Sensor Based on Carbon Nanotubes
NASA Astrophysics Data System (ADS)
Naje, Asama N.; Ibraheem, Russul R.; Ibrahim, Fuad T.
2016-06-01
Two types of carbon nanotubes [single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs)] are deposited on porous silicon by the drop casting technique. Upon exposure to a test gas mixing ratio of 3% NO2, the sensitivity response results show that the SWCNTs' sensitivity reaches 79.8%, whereas the MWCNTs' is 59.6%. The study shows that the sensitivity response of the films increases with operating temperature up to 200 °C for MWCNTs and 150 °C for SWCNTs. The response and recovery times are about 19 s and 54 s, respectively, at 200 °C for MWCNTs, and 20 s and 56 s at 150 °C for SWCNTs.
Parametric sensitivity for frequency response analysis of large-scale flows
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter
2014-11-01
When studying the frequency response of globally stable flows, direct and adjoint information from a resolvent analysis has to be computed. These computations involve a sizeable amount of effort, which suggests their reuse to identify sensitivity measures to changes in the governing parameters, base/mean flow fields, boundary conditions or other changes to the underlying linearized operator. We introduce and demonstrate a general technique to determine first-order changes in the frequency response induced by general changes to the governing equations. Examples will include changes to the Reynolds and Mach number for a tonal-noise airfoil problem, sensitivity to heating of a mixing layer past a splitter plate and closeness to global instability for a simplified model equation.
NASA Astrophysics Data System (ADS)
Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang
2012-06-01
Shot peening is a widely used surface treatment that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and the dent profile are important factors in evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum depth of the dent was investigated. Firstly, dimensionless relations of the processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by the dimensional analysis method. Secondly, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, a comparison was made and good agreement was found between the simulation results and the empirical formulas, showing that this paper provides a useful approach for analyzing the influence of each individual parameter.
Parametric analysis of stand-alone residential photovoltaic systems and the SOLSTOR simulation model
NASA Astrophysics Data System (ADS)
Caskey, D. L.; Aronson, E. A.; Murphy, K. D.
Grid-connected residential photovoltaic (PV) systems have been studied in great detail during the past few years. However, stand-alone systems have received considerably less attention. This paper describes the results of an evaluation of the economic feasibility of stand-alone systems. The SOLSTOR simulation program, developed by Sandia, was the primary analysis tool. The results indicate that stand-alone PV systems offer considerable economic advantage over the fossil-fueled generator systems. This is true even with no escalation of fuel prices, with PV array costs of twice the 1986 DOE goal, with present day battery costs, and in the Northeast as well as in the Southwest part of the United States. The on-site generator was generally used less than 1400 hours per year, and in fact can be eliminated in many cases in the Southwest.
Parametric Analysis of PWR Spent Fuel Depletion Parameters for Long-Term-Disposal Criticality Safety
DeHart, M.D.
1999-08-01
Utilization of burnup credit in criticality safety analysis for long-term disposal of spent nuclear fuel allows improved design efficiency and reduced cost due to the large mass of fissile material that will be present in the repository. Burnup-credit calculations are based on depletion calculations that provide a conservative estimate of spent fuel contents (in terms of criticality potential), followed by criticality calculations to assess the value of the effective neutron multiplication factor (k_eff) for a spent fuel cask or a fuel configuration under a variety of probabilistically derived events. In order to ensure that the depletion calculation is conservative, it is necessary to both qualify and quantify the assumptions that can be made in depletion models.
Parametric analysis of synthetic aperture radar data for characterization of deciduous forest stands
NASA Technical Reports Server (NTRS)
Wu, Shih-Tseng
1987-01-01
The SAR sensor parameters that affect the estimation of deciduous forest stand characteristics were examined using data sets for the Gulf Coastal Plain region, acquired by the NASA/JPL multipolarization airborne SAR. In the regression analysis, the mean digital-number values of the three polarization data are used as the independent variables to estimate the average tree height (HT), basal area (BA), and total-tree biomass (TBM). The following results were obtained: (1) in the case of simple regression and using 28 plots, vertical-vertical (VV) polarization yielded the largest correlation coefficients (r) in estimating HT, BA, and TBM; (2) in the case of multiple regression, the horizontal-horizontal (HH) and VV polarization combination yielded the largest r value in estimating HT, while the VH and HH polarization combination yielded the largest r values in estimating BA and TBM. With the addition of a third polarization, the increase in r values is insignificant.
Parametric Hazard Function Estimation.
1999-09-13
Version 00 Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
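The three hazard models that Phaze allows have simple closed forms; the sketch below evaluates them at hypothetical parameter values (Phaze's actual estimation step, maximum likelihood over reported failure times, is not reproduced here):

```python
import math

def hazard_exponential(t, lam):
    """Constant failure rate: h(t) = lambda."""
    return lam

def hazard_linear(t, a, b):
    """Linearly increasing (b > 0) or decreasing (b < 0) failure rate:
    h(t) = a + b*t."""
    return a + b * t

def hazard_weibull(t, alpha, beta):
    """Weibull hazard h(t) = (beta/alpha) * (t/alpha)**(beta - 1);
    beta > 1 gives an increasing failure rate, beta < 1 a decreasing one."""
    return (beta / alpha) * (t / alpha) ** (beta - 1)

# beta = 1 reduces the Weibull hazard to the constant 1/alpha,
# recovering the exponential model as a special case.
print(hazard_weibull(5.0, 100.0, 1.0))   # 0.01
print(hazard_weibull(50.0, 100.0, 2.0))  # 0.01
```

Testing for an increasing failure rate, as Phaze does, amounts to testing b > 0 in the linear model or beta > 1 in the Weibull model.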
Multipass optical parametric amplifier
Jeys, T.H.
1996-08-01
A compact, low-threshold, multipass optical parametric amplifier has been developed for the conversion of short-pulse (360-ps) 1064-nm Nd:YAG laser radiation into eye-safe 1572-nm radiation for laser ranging and radar applications. The amplifier had a threshold pump energy as low as 45 µJ, and at three to four times this threshold the amplifier converted 30% of the input 1064-nm radiation into 1572-nm output radiation.
NASA Technical Reports Server (NTRS)
Turner, R. E.
1977-01-01
For 36 hours during April 1975, an atmospheric variability experiment was conducted. This research effort supported an observational program in which rawinsonde data, radar data, and satellite data were collected from a network of 42 stations east of the Rocky Mountains at intervals of 3 hours. This program presents data with a high degree of time resolution over a spatially and temporally extensive network. Reduction of the experiment data is intended primarily as a documentation of the checking and processing of the data and should be useful to prospective users. Various flow diagrams of the data processing procedures are described, and a complete summary of the formulas used in the data processing is provided. A wind computation scheme designed to extract as much detailed wind information as possible from the unique experiment data set is discussed. The accuracy of the thermodynamic and wind data were estimated. Errors in the thermodynamic and wind data are given.
Parametric analysis of a novel cryogenic CO2 capture system based on Stirling coolers.
Song, Chun Feng; Kitamura, Yutaka; Li, Shu Hong; Jiang, Wei Zhong
2012-11-20
CO2 capture and storage (CCS) is an important alternative for controlling greenhouse gas (GHG) effects. In previous work, a novel desublimation CO2 capture process was developed making use of three free-piston Stirling coolers (SC-1, SC-2, and SC-3). In the developed system, moisture and CO2 in the flue gas condense and desublimate in the pre-freezing and main-freezing towers, respectively. Meanwhile, the storage column is chilled by SC-3 to preserve the frosted CO2, and permanent gases (such as N2) pass through the system without phase change. The whole process can be implemented at atmospheric pressure and reduces the energy penalties (e.g., solvent regeneration and pressure drop) incurred by other technologies. In this work, the influence of the process parameters has been investigated in detail. The optimal conditions for the system are as follows: idle operating time of 240 min, flow rate of 5 L/min, interlayer vacuum of 2.2 × 10^3 Pa, and SC-1, -2, and -3 temperatures of -30, -120, and -120 °C, respectively. Under these conditions, the energy consumption of the system is around 0.5 MJ (electrical) per kg of CO2, with above 90% CO2 recovery.
Data Fusion: A decision analysis tool that quantifies geological and parametric uncertainty
Porter, D.W.
1996-04-01
Engineering projects such as siting waste facilities and performing remediation are often driven by geological and hydrogeological uncertainties. Geological understanding and hydrogeological parameters such as hydraulic conductivity are needed to achieve reliable engineering design. Information from non-invasive and minimally invasive data sets offers potential for reduction in uncertainty, but a single data type does not usually meet all needs. Data Fusion uses Bayesian statistics to update prior knowledge with information from diverse data sets as the data is acquired. Prior knowledge takes the form of first principles models (e.g., groundwater flow) and spatial continuity models for heterogeneous properties. The variability of heterogeneous properties is modeled in a form motivated by statistical physics as a Markov random field. A computer reconstruction of targets of interest is produced within a quantified statistical uncertainty. The computed uncertainty provides a rational basis for identifying data gaps for assessing data worth to optimize data acquisition. Further, the computed uncertainty provides a way to determine the confidence of achieving adequate safety margins in engineering design. Beyond design, Data Fusion provides the basis for real time computer monitoring of remediation. Working with the DOE Office of Technology (OTD), the author has developed and patented a Data Fusion Workstation system that has been used on jobs at the Hanford, Savannah River, Pantex and Fernald DOE sites. Further applications include an army depot at Letterkenney, PA and commercial industrial sites.
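The Bayesian updating at the core of Data Fusion can be illustrated in its simplest conjugate form: fusing a normal prior on a scalar property with one noisy measurement. The numbers below are hypothetical, and the real system works with spatial Markov random fields and first-principles flow models rather than a single scalar, but the mechanism of uncertainty reduction is the same:

```python
def fuse_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate-normal Bayesian update: fuse a prior estimate of a
    property (e.g., log hydraulic conductivity) with one noisy
    measurement.  The posterior mean is a precision-weighted blend,
    and the posterior variance is always smaller than either input
    variance, quantifying the reduction in uncertainty from the data."""
    w = prior_var / (prior_var + obs_var)  # weight given to the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Hypothetical prior and measurement of equal reliability:
mean, var = fuse_normal(prior_mean=-5.0, prior_var=4.0, obs=-3.0, obs_var=4.0)
print(mean, var)  # -4.0 2.0
```

The computed posterior variance is exactly the "quantified statistical uncertainty" that the abstract describes using to assess data worth: a candidate measurement whose inclusion would shrink the posterior variance most is the one worth acquiring.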
Parametric Analysis of Life Support Systems for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly S.; Bagdigian, Bob M.
2011-01-01
The National Aeronautics and Space Administration is in the process of evaluating future targets for space exploration. In order to maintain the welfare of a crew during future missions, a suite of life support technologies is responsible for oxygen and water generation, carbon dioxide control, the removal of trace concentrations of organic contaminants, the processing and recovery of water, and the storage and reclamation of solid waste. For each life support subsystem, a variety of competing technologies either exist or are under aggressive development. Each individual technology has strengths and weaknesses with regard to launch mass, power and cooling requirements, volume of hardware and consumables, and crew time requirements for operation. However, from a system-level perspective, the favorability of each life support architecture is better assessed when the sub-system technologies are analyzed in aggregate. In order to evaluate each specific life support system architecture, the measure of equivalent system mass (ESM) was employed to benchmark system favorability. Moreover, the results discussed herein are presented in the context of loop closure with respect to the air, water, and waste sub-systems. Specifically, closure relates to the amount of consumable mass that crosses the boundary of the vehicle over the lifetime of a mission. As demonstrated in this manuscript, the optimal level of loop closure is heavily dependent upon mission requirements such as duration and the level of extra-vehicular activity (EVA) performed. Sub-system-level trades were also considered as a function of mission duration to assess when increased loop closure is practical. Although many additional factors will likely merit consideration in designing life support systems for future missions, the ESM results described herein provide a context for future architecture design decisions toward a flexible path program.
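ESM, as commonly defined, converts each subsystem's volume, power, cooling, and crew time demands into mass-equivalent penalties via location-specific equivalency factors, so that architectures can be compared on a single launch-mass-like scale. A minimal sketch with hypothetical subsystem values and factor values:

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crew_time_hr, v_eq, p_eq, c_eq, ct_eq):
    """ESM = M + V*Veq + P*Peq + C*Ceq + CT*CTeq.

    The equivalency factors (v_eq in kg/m^3, p_eq and c_eq in kg/kW,
    ct_eq in kg/hr) depend on the mission and vehicle; the values
    used below are illustrative only."""
    return (mass_kg
            + volume_m3 * v_eq
            + power_kw * p_eq
            + cooling_kw * c_eq
            + crew_time_hr * ct_eq)

# Hypothetical subsystem: 500 kg hardware, 2 m^3, 1.5 kW power and
# cooling, 10 crew-hours of operation over the mission.
esm = equivalent_system_mass(mass_kg=500.0, volume_m3=2.0, power_kw=1.5,
                             cooling_kw=1.5, crew_time_hr=10.0,
                             v_eq=100.0, p_eq=200.0, c_eq=50.0, ct_eq=1.0)
print(esm)  # 1085.0
```

Loop-closure trades enter through the mass term: a more closed architecture carries heavier processing hardware but avoids the consumable mass that would otherwise accumulate with mission duration.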
Comparison of nonparametric trend analysis according to the types of time series data
NASA Astrophysics Data System (ADS)
Heo, J.; Shin, H.; Kim, T.; Jang, H.; Kim, H.
2013-12-01
In the analysis of hydrological data, determining the existence of an overall trend due to climate change has been a major concern and an important part of the design and management of water resources for the future. The existence of a trend can be identified by plotting hydrologic time series; however, statistical methods are more accurate and objective tools for performing trend analysis. Statistical methods are divided into parametric and nonparametric methods. In the case of parametric methods, the population should be assumed to be normally distributed. However, most hydrological data tend to follow non-normal distributions, so nonparametric methods are considered more suitable than parametric methods. In this study, simulations were performed with different types of time series data, and four nonparametric methods (the Mann-Kendall test, Spearman's rho test, SEN test, and Hotelling-Pabst test) generally used in trend analysis were applied to assess the power of each. The time series data were classified into three types: Trend+Random, Trend+Cycle+Random, and Trend+Non-random. In order to add a change to the data, 11 different slopes were overlapped in each simulation. As a result, the nonparametric methods have almost identical power for the Trend+Random and Trend+Non-random series. On the other hand, the Mann-Kendall and SEN tests have slightly higher power than the Spearman's rho and Hotelling-Pabst tests for the Trend+Cycle+Random series.
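Of the four tests compared above, the Mann-Kendall test is the most widely used; a minimal implementation (omitting the tie correction to the variance) is:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test, without the tie correction.

    S sums the signs of all pairwise differences x[j] - x[i] (i < j);
    under the null hypothesis of no trend, S has mean 0 and variance
    n(n-1)(2n+5)/18, and Z is the continuity-corrected normal score."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

# A strictly increasing series: every pair contributes +1, so
# S = n(n-1)/2 = 10 for n = 5.
s, z = mann_kendall([1.0, 2.0, 3.0, 4.0, 5.0])
print(s)  # 10
```

Comparing |Z| against a standard normal quantile (e.g., 1.96 for a two-sided 5% test) gives the trend decision; the simulation study above measures how often that decision is correct for each data type.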
Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.
2010-11-15
We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~10^3 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sersic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ≈ 0.13 mag for the brightest galaxies, rising to ≈0.3 mag for galaxies at the faint end of our sample (B_T ≈ 16). The distribution of axial ratios of low-mass ('dwarf') galaxies bears a strong resemblance to the one observed for the higher-mass ('giant') galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.
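The Sersic profile underlying such parametric analyses has the closed form I(R) = I_e exp(-b_n [(R/R_e)^(1/n) - 1]), where b_n is chosen so that the effective radius R_e encloses half of the total light. A small sketch using the common approximation b_n ≈ 2n - 1/3 (the parameter values below are illustrative, not fitted to the survey data):

```python
import math

def sersic_b(n):
    """Common approximation b_n ≈ 2n - 1/3 (accurate for n ≳ 1),
    chosen so that R_e encloses half of the total light."""
    return 2.0 * n - 1.0 / 3.0

def sersic_intensity(r, i_e, r_e, n):
    """Sersic surface brightness profile
    I(R) = I_e * exp(-b_n * ((R/R_e)**(1/n) - 1)).

    n = 1 gives an exponential disk profile, n = 4 the classical
    de Vaucouleurs profile of luminous ellipticals."""
    return i_e * math.exp(-sersic_b(n) * ((r / r_e) ** (1.0 / n) - 1.0))

# By construction the profile equals I_e at the effective radius,
# regardless of the Sersic index n.
print(sersic_intensity(2.0, 100.0, 2.0, 4.0))  # 100.0
```

Fitting (I_e, R_e, n) per galaxy is what yields the structural parameters whose smooth variation with luminosity the abstract reports.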
Lee, Kyu Ha; Haneuse, Sebastien; Schrag, Deborah; Dominici, Francesca
2014-01-01
In the U.S., the Centers for Medicare and Medicaid Services uses 30-day readmission, following hospitalization, as a proxy outcome to monitor quality of care. These efforts generally focus on treatable health conditions, such as pneumonia and heart failure. Expanding quality of care systems to monitor conditions for which treatment options are limited or non-existent, such as pancreatic cancer, is challenging because of the non-trivial force of mortality; 30-day mortality for pancreatic cancer is approximately 30%. In the statistical literature, data that arise when the observation of the time to some non-terminal event is subject to some terminal event are referred to as 'semi-competing risks data'. Given such data, scientific interest may lie in at least one of three areas: (i) estimation/inference for regression parameters, (ii) characterization of dependence between the two events, and (iii) prediction given a covariate profile. Existing statistical methods focus almost exclusively on the first of these; methods are sparse or non-existent, however, when interest lies with understanding dependence and performing prediction. In this paper we propose a Bayesian semi-parametric regression framework for analyzing semi-competing risks data that permits the simultaneous investigation of all three of the aforementioned scientific goals. Characterization of the induced posterior and posterior predictive distributions is achieved via an efficient Metropolis-Hastings-Green algorithm, which has been implemented in an R package. The proposed framework is applied to data on 16,051 individuals diagnosed with pancreatic cancer between 2005 and 2008, obtained from Medicare Part A. We found that increased risk for readmission is associated with a high comorbidity index, a long hospital stay at initial hospitalization, non-white race, male sex, and discharge to home care. PMID:25977592
Waentig, Larissa; Techritz, Sandra; Jakubowski, Norbert; Roos, Peter H
2013-11-01
The paper presents a new multi-parametric protein microarray embracing the multi-analyte capabilities of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The combination of high throughput reverse phase protein microarrays with element-tagged antibodies and LA-ICP-MS makes it possible to detect and quantify many proteins or biomarkers in multiple samples simultaneously. A proof of concept experiment is performed for the analysis of cytochromes, particularly of cytochrome P450 enzymes, which play an important role in the metabolism of xenobiotics such as toxicants and drugs. With the aid of the LA-ICP-MS based multi-parametric reverse phase protein microarray it was possible to analyse 8 cytochromes in 14 different proteomes in one run. The methodology shows excellent detection limits in the lower amol range and a very good linearity of R² ≥ 0.9996, which is a prerequisite for the development of further quantification strategies.
Simple parametrization of fragment reduced widths in heavy ion collisions.
Tripathi, R K; Townsend, L W
1994-04-01
A systematic analysis of the observed reduced widths obtained in relativistic heavy ion fragmentation reactions is used to develop a phenomenological parametrization of these data. The parametrization is simple, accurate, and completely general in applicability.
Parametric Time-Dependent Navier-Stokes Computations for a YAV-8B Harrier in Ground Effect
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Pandya, Shishir; Ahmad, Jasim; Murman, Scott; Kwak, Dochan (Technical Monitor)
2002-01-01
The Harrier Jump Jet has the distinction of being the only powered-lift aircraft in the free world to achieve operational status and to have flown in combat. This V/STOL aircraft can take-off and land vertically or utilize very short runways by directing its four exhaust nozzles towards the ground. Transition to forward flight is achieved by rotating these nozzles into a horizontal position. Powered-lift vehicles have certain advantages over conventional strike fighters. Their V/STOL capabilities allow for safer carrier operations, smaller carrier size, and quick reaction time for troop support. Moreover, they are not dependent on vulnerable land-based runways. The AV-8A Harrier first entered service in the British Royal Air Force (RAF) during 1969, and the U.S. Marine Corps (USMC) in 1971. The AV-8B was a redesign to achieve improved payload capacity, range, and accuracy. This modified design first entered service with the USMC and RAF in 1985. The success and unique capabilities of the Harrier has prompted the design of a powered-lift version of the Joint Strike Fighter (JSF). The flowfield for the Harrier near the ground during low-speed or hover flight operations is very complex and time-dependent. A sketch of this flowfield is shown. Warm air from the fan is exhausted from the front nozzles, while a hot air/fuel mixture from the engine is exhausted from the rear nozzles. These jets strike the ground and move out radially forming a ground jet-flow. The ambient freestream, due to low-speed forward flight or - headwind during hover, opposes the jet-flow. This interaction causes the flow to separate and form a ground vortex. The multiple jets also interact with each other near the ground and form an upwash or jet fountain, which strikes the underside of the fuselage. If the aircraft is sufficiently close to the ground, the inlet can ingest ground debris and hot gases from the fountain and ground vortex. This Hot Gas Ingestion (HGI) can cause a sudden loss of
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, N.; Dicaprio, C. J.
2010-12-01
We present a technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale InSAR Time Series analysis), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. As opposed to single-pixel InSAR time series techniques, MInTS takes advantage of both spatial and temporal characteristics of the deformation field. We use a weighting scheme which accounts for the presence of localized holes due to decorrelation or unwrapping errors in any given interferogram. We represent time-dependent deformation using a dictionary of general basis functions, capable of detecting both steady and transient processes. The estimation is regularized using a model resolution based smoothing so as to be able to capture rapid deformation where there are temporally dense radar acquisitions and to avoid oscillations during time periods devoid of acquisitions. MInTS also has the flexibility to explicitly parametrize known time-dependent processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). We use cross validation to choose the regularization penalty parameter in the inversion for the time-dependent deformation field. We demonstrate MInTS using a set of 63 ERS-1/2 and 29 Envisat interferograms for Long Valley Caldera.
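The dictionary-of-basis-functions idea can be illustrated for a single pixel: represent the displacement history as an offset, a secular rate, and an annual oscillation, and estimate the coefficients by least squares. The sketch below uses synthetic data; MInTS itself performs this kind of estimation in the wavelet domain over the full interferogram stack, with regularization and weighting not shown here:

```python
import numpy as np

def fit_dictionary(t, y):
    """Least-squares fit of a small basis-function dictionary
    (offset, secular rate, annual sine/cosine) to a displacement
    time series: a single-pixel analogue of the temporal part of
    a time-dependent deformation inversion."""
    w = 2.0 * np.pi  # annual angular frequency, with t in years
    G = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coeffs, *_ = np.linalg.lstsq(G, y, rcond=None)
    return coeffs

# Synthetic 5-year series: 1-unit offset, 3 units/yr secular rate,
# 0.5-unit annual oscillation.
t = np.linspace(0.0, 5.0, 200)
y = 1.0 + 3.0 * t + 0.5 * np.sin(2.0 * np.pi * t)
offset, rate, a_sin, a_cos = fit_dictionary(t, y)
print(round(rate, 3))  # 3.0
```

Adding transient basis functions (steps, decaying exponentials) to the columns of G is how co-seismic and post-seismic processes would be parametrized in the same framework.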
NASA Astrophysics Data System (ADS)
Pageot, Damien; Operto, Stéphane; Vallée, Martin; Brossier, Romain; Virieux, Jean
2013-06-01
The development of dense networks of broad-band seismographs makes teleseismic data amenable to full-waveform inversion (FWI) methods for high-resolution lithospheric imaging. Compared to scattered-field migration, FWI seeks to involve the full seismic wavefield in the inversion. We present a parametric analysis of 2-D frequency-domain FWI in the framework of lithospheric imaging from teleseismic data to identify the main factors that impact the quality of the reconstructed compressional (P)-wave and shear (S)-wave speed models. Compared to controlled-source seismology, the main adaptation of FWI to the teleseismic configuration consists of the implementation, with a scattered-field formulation, of plane-wave sources that impinge on the base of the lithospheric target located below the receiver network at an arbitrary incidence angle. Seismic modelling is performed with a hp-adaptive discontinuous Galerkin method on unstructured triangular meshes. A quasi-Newton inversion algorithm provides an approximate accounting for the Hessian operator, which helps to reduce the footprint of the coarse acquisition geometry in the imaging. A versatile algorithm to compute the gradient of the misfit function with the adjoint-state method allows for abstraction between the forward-problem operators and the meshes that are used during seismic modelling and inversion, respectively. An approximate correction for obliquity is derived for future application to real teleseismic data under the two-dimensional approximation. Comparisons between the characteristic scales involved in exploration geophysics and in teleseismic seismology suggest that the resolution gain provided by full-waveform technologies should be of the same order of magnitude for both applications. We first show the importance of the surface-reflected wavefield to dramatically improve the resolving power of FWI by combining tomography-like and migration-like imaging through the incorporation of the forward-scattered and the
NASA Astrophysics Data System (ADS)
Kotasidis, F. A.; Mehranian, A.; Zaidi, H.
2016-05-01
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to propagate spatially or temporally. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
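A single-variable CER of the kind described is typically a power law, cost = a·D^b, fitted by least squares in log-log space. The sketch below shows that fitting procedure on hypothetical data; it is not the historical telescope data set or the published CER coefficients.

```python
# Fit a single-variable cost-estimating relationship cost = a * D^b by
# ordinary least squares on log(cost) = log(a) + b*log(D).
# The data below are synthetic, purely to demonstrate the fit.
import math

def fit_power_law(diameters, costs):
    """Return (a, b) for the least-squares fit of cost = a * D^b."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic check: data generated exactly from cost = 2.0 * D^1.6
diam = [0.5, 1.0, 2.0, 4.0, 8.0]
cost = [2.0 * d ** 1.6 for d in diam]
a, b = fit_power_law(diam, cost)
```

The exponent b is the quantity of interest: b < 2 corresponds to the abstract's observation that cost per square meter of collecting aperture falls as the telescope grows.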
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.
1993-01-01
Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have poor communication-to-computation ratios. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + lambda × (shape). In a 2 (or 3) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively surface) of that subregion. Shape is a measure of communication overhead, and the parameter lambda permits us to trade off load imbalance against communication overhead. When lambda is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-D. Load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and lambda the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log³ p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-D unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower resolution space in a way that minimizes the color error.
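The objective load + lambda × (shape) can be made concrete with a deliberately simplified one-dimensional sketch: nodes on a chain, where a cut between positions i and i+1 severs exactly one edge. This is only an illustration of the trade-off, not the recursive 2-D/3-D PBD algorithm of the abstract.

```python
# Simplified sketch of the Parametric Binary Dissection objective: pick the
# cut minimizing max(load of either half) + lambda * (cut cost). Here the
# "graph" is a 1-D chain of weighted nodes, so a cut severs one edge.
def best_cut(node_loads, edge_weights, lam):
    """node_loads[i] = work at node i; edge_weights[i] = cost of edge (i, i+1).
    Returns (cut_position, objective) over cuts between nodes i and i+1."""
    total = sum(node_loads)
    best = None
    left = 0
    for i in range(len(node_loads) - 1):
        left += node_loads[i]
        imbalance = max(left, total - left)   # load of the heavier half
        cut_cost = edge_weights[i]            # communication across the cut
        objective = imbalance + lam * cut_cost
        if best is None or objective < best[1]:
            best = (i, objective)
    return best
```

With lam = 0 this reduces to plain dissection (pure load balance); a large lam steers the cut toward cheap edges even at the cost of some imbalance, mirroring the trade-off the abstract describes.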
Tempo: Pulsar timing data analysis
NASA Astrophysics Data System (ADS)
Manchester, R.; Taylor, J.; Peters, W.; Weisberg, J.; Irwin, A.; Wex, N.; Stairs, I.; Demorest, P.; Nice, D.
2015-09-01
Tempo analyzes pulsar timing data. Pulse times of arrival (TOAs), pulsar model parameters, and coded instructions are read from one or more input files. The TOAs are fitted by a pulse timing model incorporating transformation to the solar-system barycenter, pulsar rotation and spin-down and, where necessary, one of several binary models. Program output includes parameter values and uncertainties, residual pulse arrival times, chi-squared statistics, and the covariance matrix of the model. In prediction mode, ephemerides of pulse phase behavior (in the form of polynomial expansions) are calculated from input timing models. Tempo is the basis for the Tempo2 (ascl:1210.015) code.
Ordinal analysis of time series
NASA Astrophysics Data System (ADS)
Keller, K.; Sinn, M.
2005-10-01
In order to develop fast and robust methods for extracting qualitative information from non-linear time series, Bandt and Pompe have proposed to consider time series from the purely ordinal viewpoint. On the basis of counting ordinal patterns, which describe the up-and-down patterns in a time series, they have introduced the concept of permutation entropy for quantifying the complexity of the system behind a time series. The permutation entropy, however, provides only one detail of the ordinal structure of a time series. Here we present a method for extracting the whole ordinal information.
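The ordinal-pattern counting described above can be sketched in a few lines: each window of d consecutive values is mapped to the permutation that sorts it, and the Shannon entropy of the resulting pattern distribution is the permutation entropy. A minimal implementation, assuming the standard Bandt-Pompe definition:

```python
# Ordinal patterns and permutation entropy (Bandt & Pompe).
import math
from collections import Counter

def ordinal_pattern(window):
    """Rank vector of the window, e.g. (2.1, 9.0, 4.3) -> (0, 2, 1)."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def permutation_entropy(series, order=3):
    """Shannon entropy (in bits) of the ordinal-pattern distribution."""
    counts = Counter(
        ordinal_pattern(series[i:i + order])
        for i in range(len(series) - order + 1)
    )
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A monotone series produces a single pattern (entropy 0), while an irregular series spreads mass over many of the order! possible patterns, so the entropy grows with complexity.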
Zhu, C.-J.; Senin, A.A.; Lu, Z.-H.; Gao, J.; Xiao, Y.; Eden, J.G.
2005-08-15
The polarization characteristics of the signal wave produced in Rb vapor by difference-frequency, parametric four-wave mixing (FWM) have been investigated for either ultrafast (~150 fs) or nanosecond time-scale excitation of the 5s →→ 5d, 7s two-photon transitions. The electronic configurations of the 5d ^2D_{5/2} and 7s ^2S_{1/2} states of Rb, as well as their energy separation (~608 cm⁻¹), offer the opportunity to examine separately the resonantly enhanced 5s →→ 7s, 5d → 6p → 5s FWM pathways on the nanosecond time scale and then to drive both channels simultaneously with an ultrafast pulse of sufficient spectral width. As expected, dye laser (~10 ns) excitation of the 5s →→ 5d (J = 5/2) transition produces a signal wave (λs ≈ 420 nm) having the same ellipticity as the driving optical field. Two-photon excitation of Rb (7s) on the same time scale, however, generates an elliptically polarized signal when the pump is linearly polarized (ε = 1), a result attributed to 7s → 6p, 5p amplified spontaneous emission at ~4 μm and ~741 nm, respectively. Simultaneous excitation of the 5s →→ 7s, 5d transitions with ~150 fs pulses centered at ~770 nm yields polarization characteristics that can be approximated as a superposition of those for the individual transitions, thus displaying weak coupling between the two FWM channels. Also, the influence of molecular contributions to the FWM signal is observed for Rb number densities above ~5×10¹⁴ cm⁻³.
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and also to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed with the implementation of only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, a nonparametric distribution may not always be a good fit, nor capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining a best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Siow, Y. K.; Marek, C. J.
2005-01-01
Today's modern aircraft are based on air-breathing jet propulsion systems, which use moving fluids as substances to transform energy carried by the fluids into power. Throughout aero-vehicle evolution, improvements have been made to engine efficiency and pollutant reduction. The major advantages associated with the addition of an interstage turbine burner (ITB) are an increase in thermal efficiency and a reduction in NOx emission. A lower temperature peak in the main combustor results in lower thermal NOx emission and a lower amount of cooling air required. This study focuses on a parametric (on-design) cycle analysis of a dual-spool, separate-flow turbofan engine with an ITB. The ITB considered in this paper is a relatively new concept in modern jet engine propulsion. The ITB serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., the transition duct. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, and high-pressure turbine inlet temperature, to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB. A Visual Basic program, Microsoft Excel macro code, and Microsoft Excel neuron code are used to enable Microsoft Excel to plot engine performance versus engine design parameters. This program computes and plots the data sequentially without forcing users to open other types of plotting programs. A user's manual on how to use the program is also included in this report. Furthermore, this stand-alone program is written in conjunction with an off-design program, which is an extension of this study. The computed result of a selected design
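The flavor of an on-design parametric cycle analysis can be sketched for the simpler ideal (loss-free) separate-flow turbofan, following standard textbook cycle relations. This is a hedged illustration only: it omits the ITB, component efficiencies, and everything specific to the report's Excel/Visual Basic tool, and all input values below are hypothetical.

```python
# Ideal on-design cycle for a separate-flow turbofan (no losses, no ITB),
# using standard ideal-cycle relations: ram, compressor, fan and turbine
# temperature ratios, then exit velocities, specific thrust and TSFC.
import math

def ideal_turbofan(M0, T0, gamma, cp, hpr, Tt4, pi_c, pi_f, alpha):
    """Return (specific_thrust [N*s/kg], tsfc [kg/(N*s)]).
    M0 flight Mach; T0 ambient temp [K]; cp [J/(kg*K)]; hpr fuel heating
    value [J/kg]; Tt4 turbine inlet temp [K]; pi_c, pi_f compressor and
    fan pressure ratios; alpha bypass ratio."""
    R = (gamma - 1) / gamma * cp
    a0 = math.sqrt(gamma * R * T0)                # ambient speed of sound
    tau_r = 1 + (gamma - 1) / 2 * M0 ** 2         # ram temperature ratio
    tau_lam = Tt4 / T0
    tau_c = pi_c ** ((gamma - 1) / gamma)
    tau_f = pi_f ** ((gamma - 1) / gamma)
    # turbine work balances compressor plus fan work:
    tau_t = 1 - tau_r / tau_lam * (tau_c - 1 + alpha * (tau_f - 1))
    v9_a0 = math.sqrt(2 / (gamma - 1) * tau_lam / (tau_r * tau_c)
                      * (tau_r * tau_c * tau_t - 1))   # core exit velocity
    v19_a0 = math.sqrt(2 / (gamma - 1) * (tau_r * tau_f - 1))  # fan exit
    spec_thrust = a0 / (1 + alpha) * ((v9_a0 - M0) + alpha * (v19_a0 - M0))
    f = cp * T0 * (tau_lam - tau_r * tau_c) / hpr  # core fuel-air ratio
    tsfc = f / ((1 + alpha) * spec_thrust)
    return spec_thrust, tsfc
```

Sweeping Tt4, pi_c, pi_f or alpha through such a function and plotting the outputs is precisely the kind of parametric study the report's spreadsheet tool automates, with the ITB adding a second heat-addition term.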
Stability analysis of a time-periodic 2-dof MEMS structure
NASA Astrophysics Data System (ADS)
Kniffka, Till Jochen; Welte, Johannes; Ecker, Horst
2012-11-01
Microelectromechanical systems (MEMS) are becoming important for all kinds of industrial applications. Among them are filters in communication devices, due to the growing demand for efficient and accurate filtering of signals. In recent developments, single degree of freedom (1-dof) oscillators that are operated at a parametric resonance are employed for such tasks. Typically, vibration damping is low in such MEM systems. While parametric excitation (PE) is used so far to take advantage of a parametric resonance, this contribution suggests to also exploit parametric anti-resonances in order to improve the damping behavior of such systems. Modeling aspects of a 2-dof MEM system and first results of the analysis of the non-linear and the linearized system are the focus of this paper. In principle, the investigated system is an oscillating mechanical system with two degrees of freedom x = [x1 x2]^T that can be described by M ẍ + C ẋ + K1 x + K3(x²) x + F_es(x, V(t)) = 0. The system is inherently non-linear because of the cubic mechanical stiffness K3 of the structure, but also because of electrostatic forces (1 + cos(ωt)) F_es(x) that act on the system. Electrostatic forces are generated by comb drives and are proportional to the applied time-periodic voltage V(t). These drives also provide the means to introduce time-periodic coefficients, i.e. parametric excitation (1 + cos(ωt)) with frequency ω. For a realistic MEM system, the coefficients of the non-linear set of differential equations need to be scaled for efficient numerical treatment. The final mathematical model is a set of four non-linear time-periodic homogeneous differential equations of first order. Numerical results are obtained from two different methods. The linearized time-periodic (LTP) system is studied by calculating the Monodromy matrix of the system. The eigenvalues of this matrix decide on the stability of the LTP system. To study the unabridged non-linear system, the bifurcation software ManLab is employed
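The monodromy-matrix stability test described above can be sketched on the simplest time-periodic system, a Mathieu-type equation x'' + (δ + ε cos t) x = 0: integrate the linearized system over one period from identity initial conditions and inspect the Floquet multipliers (eigenvalues of the monodromy matrix). This illustrates the method, not the paper's 2-dof MEMS model; δ and ε values below are hypothetical.

```python
# LTP stability via the monodromy matrix of x'' + (delta + eps*cos t) x = 0,
# integrated with classical RK4 over one period T = 2*pi. |multiplier| > 1
# signals a parametric resonance (instability).
import cmath, math

def monodromy(delta, eps, steps=4000):
    """2x2 monodromy matrix, columns = propagated basis vectors."""
    T = 2 * math.pi
    h = T / steps

    def f(t, y):  # y = (x, x')
        return (y[1], -(delta + eps * math.cos(t)) * y[0])

    cols = []
    for y in ((1.0, 0.0), (0.0, 1.0)):
        t = 0.0
        for _ in range(steps):
            k1 = f(t, y)
            k2 = f(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
            k3 = f(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
            k4 = f(t + h,   (y[0] + h*k3[0],   y[1] + h*k3[1]))
            y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
                 y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
            t += h
        cols.append(y)
    return ((cols[0][0], cols[1][0]), (cols[0][1], cols[1][1]))

def floquet_multipliers(M):
    """Eigenvalues of a 2x2 matrix ((a, b), (c, d))."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr / 4 - det)
    return (tr / 2 + disc, tr / 2 - disc)
```

Because the system is undamped, Liouville's theorem gives det(M) = 1, a useful numerical check; δ = 0.25 (excitation at twice the natural frequency) sits in the principal instability tongue, whereas δ = 0.5 lies between tongues and is stable.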
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y.; Dicaprio, C. J.
2009-12-01
We describe progress in the development of MInTS (Multiscale analysis of InSAR Time Series), an approach to constructing self-consistent time-dependent deformation fields from repeated satellite-based InSAR images of a given region. MInTS relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. In essence, MInTS allows one to consider all data at the same time, as opposed to one pixel at a time, thereby taking advantage of both spatial and temporal characteristics of the deformation field. This approach also permits a consistent treatment of all data independent of the presence of localized holes due to unwrapping issues in any given interferogram. Specifically, the presence of holes is accounted for through a weighting scheme that accounts for the extent of actual data versus the area of holes associated with any given wavelet. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows for the temporal parametrization to include a set of general functions in order to account for unexpected processes. We allow for various forms of model regularization, using a cross-validation approach to select penalty parameters. We also experiment with the use of sparsity-inducing regularization as a way to select from a large dictionary of time functions. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate
Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham
2016-01-01
A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model in Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge, and labelled, purified RNAs were hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an ‘early’ FOS-associated response, to a ‘late’ predominantly type I interferon-driven response, with coincident reduction of expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype, e.g. CD163. The animals in the study were of different lineages, and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses, we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1984-01-01
The important algorithm issues necessary to achieve a real-time flutter monitoring system are addressed; namely, the guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling multiple modes, the effect of over-parameterization, and estimate accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.
Peterson, James T.
1999-12-01
Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical response data. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes four analytical techniques: generalized logit modeling; binary classification tree; extended K-nearest neighbor classification; and modular neural network.
Time-Dependent Reliability Analysis
Sartori, Enrico
1999-10-27
FRANTIC-3 was developed to evaluate system unreliability using time-dependent techniques. The code provides two major options: to evaluate standby system unavailability or, in addition to the unavailability, to calculate the total system failure probability by including both the unavailability of the system on demand and the probability that it will operate for an arbitrary time period following the demand. The FRANTIC-3 time-dependent reliability models provide a large selection of repair and testing policies applicable to standby or continuously operating systems consisting of periodically tested, monitored, and non-repairable (non-testable) components. Time-dependent and test-frequency-dependent failures, as well as demand stress related failure, test-caused degradation and wear-out, test-associated human errors, test deficiencies, test override, unscheduled and scheduled maintenance, component renewal and replacement policies, and test strategies can be prescribed. The conditional system unavailabilities associated with the downtimes of the user-specified failed components are also evaluated. Optionally, the code can perform a sensitivity study for system unavailability or total failure probability to the failure characteristics of the standby components.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap.
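The two standard-error estimators compared above can be sketched side by side on skewed synthetic data. The data, sample size and seed below are hypothetical, chosen only to exercise both estimators; this is not the trial data from the paper.

```python
# Compare the CLT standard error of a mean (sample SD / sqrt(n)) with the
# non-parametric bootstrap SE, on right-skewed synthetic "net benefit" data.
import math, random

def clt_se(data):
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return math.sqrt(var / n)

def bootstrap_se(data, n_boot=2000, rng=None):
    """SD of the mean over n_boot resamples drawn with replacement."""
    rng = rng or random.Random(0)
    n = len(data)
    means = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    grand = sum(means) / n_boot
    return math.sqrt(sum((m - grand) ** 2 for m in means) / (n_boot - 1))

rng = random.Random(42)
data = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(200)]  # lognormal, skewed
se_clt = clt_se(data)
se_boot = bootstrap_se(data)
```

Consistent with the paper's conclusion, for moderate n the two estimates agree closely even on highly skewed data, and the CLT version is the simpler to implement.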
A parametric approach for the estimation of the instantaneous speed of rotating machinery
NASA Astrophysics Data System (ADS)
Rodopoulos, Konstantinos; Yiakopoulos, Christos; Antoniadis, Ioannis
2014-02-01
A parametric method is proposed for the estimation of the instantaneous speed of rotating machines. The method belongs to the class of eigenvalue-based parametric signal processing methods. The major advantage of parametric methods over frequency-domain or time-frequency-domain based methods is their increased resolution and their reduced computational cost. Moreover, advantages of eigenvalue-based methods over other parametric methods include their robustness to noise. A sensitivity analysis for the key parameters of the proposed method is performed, including the sampling frequency, the signal length and the robustness to noise. The effectiveness of the method is demonstrated on vibration measurements from a test rig during start-up and run-down, as well as during variations of the speed of a motorcycle engine. Compared to the Hilbert transform and to the Discrete Energy Separation Algorithm (DESA), the proposed approach exhibits better behavior, while simultaneously presenting computational simplicity, being able to be implemented analytically, even online.
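The character of a low-cost, online parametric frequency estimator can be sketched with a simple AR(2) approach: a noise-free sinusoid obeys x[n] = 2cos(ω)·x[n-1] − x[n-2], so cos(ω) can be estimated by least squares inside each sliding window. This is explicitly not the eigenvalue-based method proposed in the abstract; it is a minimal stand-in that shares the parametric, sliding-window flavor.

```python
# Minimal parametric sliding-window frequency estimator (AR(2) sketch).
# For a pure sinusoid, x[n] + x[n-2] = 2*cos(w) * x[n-1], so cos(w) is
# recovered by least squares over the window.
import math

def window_frequency(window, fs):
    """Estimate the frequency (Hz) of a sinusoid from one window of samples."""
    num = sum(window[n - 1] * (window[n] + window[n - 2])
              for n in range(2, len(window)))
    den = sum(2 * window[n - 1] ** 2 for n in range(2, len(window)))
    cos_w = max(-1.0, min(1.0, num / den))   # clamp against noise
    return math.acos(cos_w) * fs / (2 * math.pi)

def track_frequency(signal, fs, win=64, hop=32):
    """Sliding-window instantaneous-frequency track."""
    return [window_frequency(signal[i:i + win], fs)
            for i in range(0, len(signal) - win + 1, hop)]
```

Each window costs O(win) operations and needs only a few recent samples, which is what makes such parametric estimators attractive for online speed tracking compared with FFT- or Hilbert-based pipelines.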
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes the nonlinearity of a medium and is known as a super-directivity loudspeaker. The parametric loudspeaker is one of the prominent applications of nonlinear ultrasonics. So far, its applications have been limited to monaural sound reproduction systems for public address in museums, stations, streets, etc. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers, comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon. The inner and outer diameters were 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The subjective test results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers. This was similar to the results using the ordinary dynamic loudspeakers, except for the case of sinusoidal waves with ITD. It was determined that the parametric loudspeaker could eliminate the contradiction between the binaural information ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppressed the crosstalk components.
Experience with parametric binary dissection
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Parametric Binary Dissection (PBD) is a new algorithm for partitioning graphs embedded in 2- or 3-dimensional space. It partitions explicitly on the basis of nodes + lambda x (edges cut), where lambda is the ratio of the time to communicate over an edge to the time to compute at a node. The new algorithm is faster than the original binary dissection algorithm and attempts to obtain better partitions than the older algorithm, which takes only nodes into account. The performance of parametric dissection was compared with plain binary dissection on three large unstructured 3-D meshes obtained from computational fluid dynamics and on two random graphs. It was shown that the new algorithm usually yields substantially superior partitions, but that its performance is heavily dependent on the input data.
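The cost that PBD optimizes can be sketched in a few lines. The sketch below assumes Python with NumPy (the original work predates both; the function and argument names are hypothetical): each part of a two-way partition is charged its node count plus lambda times the number of cut edges, and the worse part dominates.

```python
import numpy as np

def dissection_cost(edges, part_of, lam):
    """PBD-style cost of a two-way partition: each part is charged its
    node count plus lam * (edges cut), and the maximum over the two
    parts is the cost the dissection tries to minimize."""
    part_of = np.asarray(part_of)
    cut = sum(1 for u, v in edges if part_of[u] != part_of[v])
    sizes = [int(np.sum(part_of == p)) for p in (0, 1)]
    return max(s + lam * cut for s in sizes)
```

For lambda = 0 this reduces to plain binary dissection's node-balancing criterion; larger lambda penalizes cut edges more heavily, modelling a higher communication-to-computation cost ratio.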
NASA Astrophysics Data System (ADS)
Rawles, Christopher; Thurber, Clifford
2015-08-01
We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest-neighbours-based approach. The nearest-neighbour algorithm is one of the most popular time-series classification methods in the data-mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst-identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined via the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is made through the bandpassed data, producing a score function defined, for each window, as the ratio of the summed similarity to positive examples over the summed similarity to negative examples. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data sets.
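The window-scoring logic can be sketched compactly. The abstract defines a squared-Euclidean distance between scaled absolute-amplitude vectors; the sketch below (an assumption-laden illustration, not the authors' code) uses the inverse of that squared distance as the similarity entering the score ratio, so that smaller distances mean higher similarity, and all names are ours:

```python
import numpy as np

def scaled_abs(window):
    """Absolute amplitudes scaled to unit maximum (assumed normalization)."""
    w = np.abs(np.asarray(window, dtype=float))
    m = w.max()
    return w / m if m > 0 else w

def pick_score(window, positives, negatives, eps=1e-9):
    """Summed similarity to positive references over summed similarity to
    negative references; similarity taken here as inverse squared
    Euclidean distance between scaled absolute-amplitude vectors."""
    w = scaled_abs(window)
    sim = lambda refs: sum(1.0 / (np.sum((w - scaled_abs(r)) ** 2) + eps)
                           for r in refs)
    return sim(positives) / sim(negatives)

def pick_arrival(trace, positives, negatives, nwin):
    """Single pass over the trace; the pick is the centre of the window
    maximizing the score function."""
    scores = [pick_score(trace[i:i + nwin], positives, negatives)
              for i in range(len(trace) - nwin + 1)]
    return int(np.argmax(scores)) + nwin // 2
```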
Stick balancing with reflex delay in case of parametric forcing
NASA Astrophysics Data System (ADS)
Insperger, Tamas
2011-04-01
The effect of parametric forcing on the PD control of an inverted pendulum is analyzed in the presence of feedback delay. The stability of the time-periodic, time-delayed system is determined numerically using the first-order semi-discretization method in the 5-dimensional parameter space of the pendulum's length, the forcing frequency, the forcing amplitude, and the proportional and differential gains. It is shown that the critical length of the pendulum (the shortest that can just be balanced against the time delay) can be significantly decreased by parametric forcing, even if the maximum forcing acceleration is limited. The numerical analysis shows that the critical stick length of about 30 cm for the unforced system with a reflex delay of 0.1 s can be decreased to 18 cm while keeping the maximum acceleration below the gravitational acceleration.
Time: A Critical Analysis for Children.
ERIC Educational Resources Information Center
Shibles, Warren
One in a series of books intended to help children improve their critical analytical skills, the document examines the concept of time in light of theories of philosophers Ludwig Wittgenstein and John Dewey. The document is presented in three chapters. Chapter one offers a critical analysis of time from the perspective that time is an unknown and…
Geometric analysis of minimum-time trajectories for a two-level quantum system
NASA Astrophysics Data System (ADS)
Romano, Raffaele
2014-12-01
We consider the problem of controlling in minimum time a two-level quantum system which can be subject to a drift. The control is assumed to be bounded in magnitude and to affect two or three independent generators of the dynamics. We describe the time optimal trajectories in SU(2), the Lie group of possible evolutions for the system, by means of a particularly simple parametrization of the group. A key ingredient of our analysis is the introduction of the optimal front line. This tool allows us to fully characterize the time evolution of the reachable sets and to derive the worst-case operators and the corresponding times. The analysis is performed in any regime—controlled dynamics stronger than, of the same magnitude as, or weaker than the drift term—and gives a method to synthesize quantum logic operations on a two-level system in minimum time.
Frequency domain optical parametric amplification
NASA Astrophysics Data System (ADS)
Schmidt, Bruno E.; Thiré, Nicolas; Boivin, Maxime; Laramée, Antoine; Poitras, François; Lebrun, Guy; Ozaki, Tsuneyuki; Ibrahim, Heide; Légaré, François
2014-05-01
Today’s ultrafast lasers operate at the physical limits of optical materials to reach extreme performances. Amplification of single-cycle laser pulses with their corresponding octave-spanning spectra still remains a formidable challenge since the universal dilemma of gain narrowing sets limits for both real level pumped amplifiers as well as parametric amplifiers. We demonstrate that employing parametric amplification in the frequency domain rather than in time domain opens up new design opportunities for ultrafast laser science, with the potential to generate single-cycle multi-terawatt pulses. Fundamental restrictions arising from phase mismatch and damage threshold of nonlinear laser crystals are not only circumvented but also exploited to produce a synergy between increased seed spectrum and increased pump energy. This concept was successfully demonstrated by generating carrier envelope phase stable, 1.43 mJ two-cycle pulses at 1.8 μm wavelength.
Kumar, Ajay; Singhal, R K; Preetha, J; Rupali, K; Joshi, V M; Hegde, A G; Kushwaha, H S
2007-01-01
During this study, non-parametric statistical methods were used to validate measured gamma dose rates against calculated ones around Trombay. A portable digital gamma spectrometry system (target fieldSPEC) was used for in situ measurement of the external gamma dose rate over the range 1 nSv/h to 10 Sv/h. The activity concentrations of U-238, Th-232, K-40 and Cs-137 in the soil and their respective external dose-conversion factors (nSv/h per Bq/kg) were used to evaluate the calculated gamma dose rate. Non-parametric statistical tools, namely the box-and-whisker plot, Spearman's rank correlation coefficient (rho), the Wilcoxon/Mann-Whitney test and the chi-squared distribution test, were applied for validation. The randomness, or discrete behaviour, of the measured and calculated dose rates was evident from the box-and-whisker plot, as their means and medians are not equal. The interquartile ranges (Q3-Q1), which describe the dispersion of the measured and calculated dose rates, were evaluated as 10 and 16 microSv/y, respectively. The linear association between the ranks of the two dose rates was established using Spearman's rank correlation, which gave a coefficient of R = +0.90 with intercept +1.9, whereas Pearson's correlation gave a coefficient of R = +0.93 with intercept -25.6. The Wilcoxon/Mann-Whitney test showed the medians of the calculated and measured dose rates to be significantly different under the null hypothesis, and the measured dose rate was tested against the normal distribution using the Z-statistic. The chi-squared value was found to be 284.95, far greater than the critical value chi-squared(0.05) = 43.77 at 30 degrees of freedom, indicating a highly significant difference between the measured and calculated dose rates at the 5% significance level. PMID:17545658
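Spearman's rank correlation, used above, is simply Pearson's correlation applied to rank-transformed data. A minimal NumPy sketch (function names are ours; a production analysis would normally use a statistics library):

```python
import numpy as np

def rankdata(x):
    """Ranks starting at 1, with tied values sharing their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average ranks over ties
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman_rho(a, b):
    """Pearson correlation of the rank-transformed data."""
    ra, rb = rankdata(a), rankdata(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(np.sum(ra * rb) / np.sqrt(np.sum(ra**2) * np.sum(rb**2)))
```

Because it operates on ranks, the statistic is insensitive to monotone rescalings of either variable, which is why it is preferred over Pearson's correlation for non-normal dose-rate data.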
Nonlinear Analysis of Surface EMG Time Series
NASA Astrophysics Data System (ADS)
Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-04-01
Applications of nonlinear analysis of surface electromyography time series of patients with and without low back pain are presented. Limitations of the standard methods based on the power spectrum are discussed.
Parametric spatiotemporal oscillation in reaction-diffusion systems.
Ghosh, Shyamolina; Ray, Deb Shankar
2016-03-01
We consider a reaction-diffusion system in a homogeneous stable steady state. On perturbation by a time-dependent sinusoidal forcing of a suitable scaling parameter the system exhibits parametric spatiotemporal instability beyond a critical threshold frequency. We have formulated a general scheme to calculate the threshold condition for oscillation and the range of unstable spatial modes lying within a V-shaped region reminiscent of Arnold's tongue. Full numerical simulations show that depending on the specificity of nonlinearity of the models, the instability may result in time-periodic stationary patterns in the form of standing clusters or spatially localized breathing patterns with characteristic wavelengths. Our theoretical analysis of the parametric oscillation in reaction-diffusion system is corroborated by full numerical simulation of two well-known chemical dynamical models: chlorite-iodine-malonic acid and Briggs-Rauscher reactions.
Nonlinear Time Series Analysis of Sunspot Data
NASA Astrophysics Data System (ADS)
Suyal, Vinita; Prasad, Awadhesh; Singh, Harinder P.
2009-12-01
This article deals with the analysis of sunspot-number time series using the Hurst exponent. We use rescaled-range (R/S) analysis to estimate the Hurst exponent for 259-year and 11,360-year sunspot data. The results show a varying degree of persistence over shorter and longer time scales, corresponding to distinct values of the Hurst exponent. We explain the presence of these multiple Hurst exponents by their resemblance to deterministic chaotic attractors having multiple centers of rotation.
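R/S analysis itself is compact enough to sketch. In the version below (a generic NumPy illustration; the dyadic window schedule and minimum window size are our choices, not the authors'), the Hurst exponent is the slope of log(R/S) against log(window size):

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic for one window: range of cumulative deviations from
    the mean, divided by the standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    s = x.std()
    return (z.max() - z.min()) / s if s > 0 else 0.0

def hurst_rs(x, min_win=8):
    """Hurst exponent via R/S: average R/S over non-overlapping windows of
    doubling sizes, then fit the log-log slope."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_win
    while n <= len(x) // 2:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs.append(np.mean([rescaled_range(c) for c in chunks]))
        sizes.append(n)
        n *= 2
    h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return float(h)
```

White noise yields H near 0.5, while persistent (trending) series such as a random walk yield H close to 1, which is the distinction the sunspot analysis exploits.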
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
Fractal analysis of time varying data
Vo-Dinh, Tuan; Sadana, Ajit
2002-01-01
Characteristics of time varying data, such as an electrical signal, are analyzed by converting the data from the temporal domain into a spatial domain pattern. Fractal analysis is performed on the spatial domain pattern, thereby producing a fractal dimension D_F. The fractal dimension indicates the regularity of the time varying data.
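The abstract does not specify the temporal-to-spatial conversion, but a standard way to attach a fractal dimension to a signal's spatial pattern is box counting on its (time, amplitude) graph. The sketch below is a generic illustration under that assumption, not the patented method:

```python
import numpy as np

def box_counting_dimension(signal, scales=(2, 4, 8, 16, 32)):
    """Estimate a fractal dimension D_F of the (time, amplitude) pattern
    by counting occupied grid cells at several grid resolutions and
    fitting log(count) against log(resolution)."""
    y = np.asarray(signal, dtype=float)
    t = np.linspace(0.0, 1.0, len(y))
    rng = np.ptp(y)
    y = (y - y.min()) / rng if rng > 0 else np.zeros_like(y)

    def cell(v, k):                    # grid index, clamped into [0, k-1]
        return min(int(v * k), k - 1)

    counts = [len({(cell(ti, k), cell(yi, k)) for ti, yi in zip(t, y)})
              for k in scales]
    d, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return float(d)
```

A smooth, regular signal traces a curve-like pattern with D_F near 1, while an irregular signal fills the plane more densely and pushes D_F toward 2, matching the regularity interpretation above.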
Analysis of soybean flowering-time genes
Technology Transfer Automated Retrieval System (TEKTRAN)
Control of soybean flowering time is important for geographic adaptation, and maximizing yield. RT-PCR analysis was performed using primers synthesized for a number of putative flowering-time genes based on homology of soybean EST and genomic sequences to Arabidopsis genes. RNA for cDNA synthesis ...
Brain Signal Variability is Parametrically Modifiable
Garrett, Douglas D.; McIntosh, Anthony R.; Grady, Cheryl L.
2014-01-01
Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture. PMID:23749875
NASA Astrophysics Data System (ADS)
Mouzourides, P.; Kyprianou, A.; Neophytou, M. K.-A.
2013-12-01
Urban morphology characterization is crucial for the parametrization of boundary-layer development over urban areas. One complexity in such a characterization is the three-dimensional variation of the urban canopies and textures, which is customarily reduced to, and represented by, one-dimensionally varying parametrizations such as the aerodynamic roughness length and the zero-plane displacement. The scope of the paper is to provide novel means for a scale-adaptive, spatially varying parametrization of the boundary layer by addressing this 3-D variation. Specifically, the 3-D variation of urban geometries poses questions in the multi-scale modelling of air-pollution dispersion and other climate- or weather-related modelling applications that have not yet been addressed, such as: (a) how to represent urban attributes (parameters) appropriately for the multi-scale nature and multi-resolution basis of weather numerical models, (b) how to quantify the uniqueness of an urban database in the context of modelling urban effects in large-scale weather numerical models, and (c) how to derive the impact and influence of a particular building in pre-specified sub-domain areas of the urban database. We illustrate how multi-resolution analysis (MRA) addresses and answers these questions by taking as an example the Central Business District of Oklahoma City. The selection of MRA is motivated by its capacity for multi-scale sampling: in the MRA, the "urban" signal depicting a city is decomposed into an approximation, a representation at a higher scale, and a detail, the part removed at lower scales to yield the approximation. Different levels of approximation were deduced for the building height and the planar packing density. A spatially varying characterization with a scale-adaptive capacity is obtained for the boundary-layer parameters (aerodynamic roughness length and zero-plane displacement) using the MRA-deduced results for the building height and the planar packing density.
Sensitivity to censored-at-random assumption in the analysis of time-to-event endpoints.
Lipkovich, Ilya; Ratitch, Bohdana; O'Kelly, Michael
2016-05-01
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on piecewise exponential imputation model of survival has some advantages over other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
Multifractal Analysis of Sunspot Number Time Series
NASA Astrophysics Data System (ADS)
Kasde, Satish Kumar; Gwal, Ashok Kumar; Sondhiya, Deepak Kumar
2016-07-01
Multifractal-analysis-based approaches have recently been developed as an alternative framework to study the complex dynamical fluctuations in sunspot number data, including solar cycles 20 to 23 and the ascending phase of the current solar cycle 24. To reveal the multifractal nature, the time series of monthly sunspot numbers is analyzed by the singularity spectrum and multiresolution wavelet analysis. Generally, the multifractality in sunspot numbers generates turbulence with the typical characteristics of the anomalous processes governing the magnetosphere and the interior of the Sun. Our analysis shows that the singularity spectrum of the sunspot data has a well-defined Gaussian shape, which clearly establishes that the monthly sunspot number has multifractal character. The multifractal analysis provides a local and adaptive description of the cyclic components of the sunspot number time series, which are non-stationary and the result of nonlinear processes. Keywords: sunspot numbers, magnetic field, multifractal analysis, wavelet transform techniques.
NASA Astrophysics Data System (ADS)
Dhote, Yogesh; Thombre, Shashikant
2016-05-01
This paper presents the thermal performance of a proposed double-flow natural-convection solar air heater with in-built liquid (oil) sensible heat storage. Unused engine oil was used as the thermal-energy storage medium due to its good heat-retaining capacity, even at high temperatures, without evaporation. The performance evaluation was carried out for a day in March under the climatic conditions of Nagpur (India). A self-reliant computational model was developed in C++; the program computes the performance parameters for any day of the year and can be used for major cities in India. The effect of changes in storage-oil quantity and inclination (tilt angle) on the overall efficiency of the solar air heater was studied. The performance was tested initially at storage-oil quantities of 25, 50, 75 and 100 l for a plate spacing of 0.04 m at an inclination of 36°. It was found that the solar air heater gives the best performance at a storage-oil quantity of 50 l. The performance of the proposed solar air heater was further tested for various combinations of storage-oil quantity (50, 75 and 100 l) and inclination (0°, 15°, 30°, 45°, 60°, 75°, 90°); the best performance was obtained for the combination of 50 l storage-oil quantity and 60° inclination. Finally, the results of a parametric study, carried out for a fixed storage-oil quantity of 25 l, a plate spacing of 0.03 m and an inclination of 36°, are presented as graphs showing the behaviour of the various heat-transfer and fluid-flow parameters of the solar air heater.
NASA Astrophysics Data System (ADS)
Rezaee, Mousa; Jahangiri, Reza
2015-05-01
In this study, the nonlinear and chaotic vibrations and the stability of a simply supported Functionally Graded Piezoelectric (FGP) rectangular plate with a bonded piezoelectric layer have been investigated in the presence of supersonic aerodynamic loading. It is assumed that the plate is simultaneously exposed to harmonic uniaxial in-plane force, transverse piezoelectric excitation and aerodynamic loading. The potential distribution is taken to vary linearly through the piezoelectric layer thickness, and the aerodynamic load is modeled by first-order piston theory. The von Karman nonlinear strain-displacement relations are used to account for the geometric nonlinearity. Based on Classical Plate Theory (CPT) and applying Hamilton's principle, the nonlinear coupled partial differential equations of motion are derived. Galerkin's procedure is used to reduce the equations of motion to nonlinear ordinary differential Mathieu equations. The validity of the formulation for analyzing Limit Cycle Oscillations (LCO) and aero-elastic stability boundaries is established by comparing the results with those in the literature, and a convergence study of the FGP plate is performed. By applying the Method of Multiple Scales, the case of 1:2 internal resonance with primary parametric resonance is considered, and the corresponding averaged equations are derived and analyzed numerically. Results are provided on the effects of the forcing/piezoelectric detuning parameter, the amplitude of the forcing/piezoelectric excitation and the dynamic pressure on the nonlinear dynamics and chaotic behavior of the FGP plate. It is revealed that under certain conditions, due to the existence of a bi-stable region of non-trivial solutions, the system shows hysteretic behavior. Moreover, in the absence of airflow, variation of the control parameters leads to multi-periodic and chaotic motions.
Multiscale InSAR Time Series (MInTS) analysis of surface deformation
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, Y. N.; Agram, P. S.; DiCaprio, C. J.
2011-12-01
We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, such that coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least-squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.
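The core of the wavelet-domain inversion is a regularized least-squares fit of a time parametrization to each wavelet coefficient's history. The sketch below shows plain Tikhonov damping as a generic stand-in (an assumption: the paper's regularization is weighted by model resolution, not uniform, and all names here are ours):

```python
import numpy as np

def regularized_lsq(G, d, lam):
    """Damped least squares: argmin ||G m - d||^2 + lam ||m||^2, solved
    via the normal equations.  G maps time-function parameters m to
    predicted wavelet-coefficient values d at the SAR acquisition times."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)
```

Increasing `lam` damps the model toward zero, which mirrors the paper's idea of damping more heavily during periods of sparse SAR acquisitions.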
Entropic Analysis of Electromyography Time Series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Sung, Paul
2005-03-01
We are assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. In a typical EMG measurement, the voltage is measured every millisecond; we observed back-muscle fatigue over one minute, which results in a time series with 60,000 entries. We characterize the complexity of the time series by computing the time dependence of the Shannon entropy. The analysis of time series from the relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back-muscle activity is much larger for healthy individuals than for individuals with LBP. In general, the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
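A windowed Shannon-entropy profile of the kind described can be sketched directly. The window and step sizes below are illustrative assumptions (e.g. slicing a 1 kHz EMG record), and the histogram binning is our choice:

```python
import numpy as np

def shannon_entropy(window, bins=32):
    """Shannon entropy (in bits) of the amplitude histogram of one window."""
    counts, _ = np.histogram(window, bins=bins)
    p = counts[counts > 0] / len(window)
    return float(-np.sum(p * np.log2(p)))

def entropy_profile(signal, win, step):
    """Entropy versus time over sliding windows, giving the time
    dependence of the entropy for a long record."""
    s = np.asarray(signal, dtype=float)
    return [shannon_entropy(s[i:i + win])
            for i in range(0, len(s) - win + 1, step)]
```

A constant signal has zero entropy, while a highly variable signal approaches the log2(bins) ceiling, so a flatter profile over the minute of fatiguing indicates lower muscle-activity variability.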
Parametric Mapping of Contrasted Ovarian Transvaginal Sonography
Korhonen, Katrina; Moore, Ryan; Lyshchik, Andrej; Fleischer, Arthur C.
2014-01-01
The purpose of this study was to assess the accuracy of parametric analysis of transvaginal contrast-enhanced ultrasound (TV-CEUS) for distinguishing benign from malignant ovarian masses. A total of 48 ovarian masses (37 benign and 11 borderline/malignant) were examined with TV-CEUS (Definity, Lantheus, North Billerica, MA; Philips iU22, Bothell, WA). Parametric images were created offline with quantification software (Bracco Suisse SA, Geneva, Switzerland), with map color scales adjusted such that abnormal hemodynamics were represented by the color red, so that the presence of any red could be used to differentiate benign and malignant tumors. Under these map color scales, low values of the perfusion parameter were coded in blue and intermediate values in yellow; for each individual color (red, blue, or yellow), a darker shade indicated a higher intensity value. Our study found that the parametric mapping method was considerably more sensitive than standard ROI analysis for the detection of malignant tumors, but also less specific. Parametric mapping allows for stricter cut-off criteria, as hemodynamics are visualized on a finer scale than in ROI analyses; as such, parametric maps are a useful addition to TV-CEUS analysis, allowing ROIs to be limited to areas of highest malignant potential. PMID:26002525
NASA Astrophysics Data System (ADS)
Sardina, V.; Weinstein, S.
2013-12-01
We extracted parametric data, such as message delays relative to the earthquakes' origin times and magnitude offsets from the Global Centroid Moment Tensor (GCMT) magnitudes, by parsing 107 observatory messages issued by the US Tsunami Warning Centers (USTWCs) during the first half of 2013. We found an apparent correlation between the messages' delay times and the number of picks used either in the estimation of the average magnitudes or in the location of the epicenters. Paradoxically, waiting to include more picks in the magnitude estimates did not significantly improve either their precision or the accuracy of the location parameters. On average, during the first half of 2013 both US tsunami warning centers sent their observatory messages at about five minutes and fifty seconds (05:50) after origin time, with an average magnitude offset of 0.16 magnitude units from the GCMT-based moment magnitudes. Under normal operational conditions, and with the present density of seismic stations, differences in the messages' delay times appear correlated with whether or not the scientists on duty wait for more than three magnitude picks before sending out the first observatory message. As a rule, the USTWCs could potentially send their observatory messages up to a whole minute faster by simply not waiting for more than five station averages when estimating the preliminary magnitudes. Doing so does not appear to result in a significant loss of either seismic location or magnitude accuracy, particularly for tsunami warning purposes.
Synchronization Analysis of Nonstationary Bivariate Time Series
NASA Astrophysics Data System (ADS)
Kurths, J.
First, the concept of synchronization in coupled complex systems is presented, and it is shown that synchronization phenomena are abundant in science, nature, engineering, etc. We use this concept to treat the inverse problem and to reveal interactions between oscillating systems from observational data. We first discuss how time-varying phases and frequencies can be estimated from time series, and then present techniques for the detection and quantification of hidden synchronization. We demonstrate that this technique is effective for analyzing systems' interrelations from noisy nonstationary bivariate data and provides insights beyond traditional cross-correlation and spectral analysis. Model examples and geophysical data are discussed.
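A common way to estimate time-varying phases from bivariate data is the analytic signal; the sketch below (an illustration of the general idea, not the authors' specific method) builds an FFT-based analytic signal and a phase-locking index between two series:

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the FFT (a minimal Hilbert-transform construction):
    # zero out negative frequencies, double positive ones.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_locking_value(x, y):
    # Mean phase coherence: 1 = perfect phase locking, ~0 = none.
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

For narrow-band signals the instantaneous phase difference is meaningful directly; broadband data would first be band-pass filtered around the rhythm of interest.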
Real time analysis of voiced sounds
NASA Technical Reports Server (NTRS)
Hong, J. P. (Inventor)
1976-01-01
A power spectrum analysis of the harmonic content of a voiced sound signal is conducted in real time by phase-lock-loop tracking of the fundamental frequency f0 of the signal and successive harmonics (h1 through hn) of the fundamental frequency. The analysis also includes measuring the quadrature power and phase of each frequency tracked, differentiating the power measurements of the harmonics in adjacent pairs, and analyzing successive differentials to determine peak power points in the power spectrum for display or use in analysis of voiced sound, such as for voice recognition.
Analysis of real-time vibration data
Safak, E.
2005-01-01
In recent years, a few structures have been instrumented to provide continuous vibration data in real time, recording not only large-amplitude motions generated by extreme loads, but also small-amplitude motions generated by ambient loads. The main objective in continuous recording is to track any changes in structural characteristics, and to detect damage after an extreme event, such as an earthquake or explosion. The Fourier-based spectral analysis methods have been the primary tool to analyze vibration data from structures. In general, such methods do not work well for real-time data, because real-time data are mainly composed of ambient vibrations with very low amplitudes and signal-to-noise ratios. The long duration, linearity, and the stationarity of ambient data, however, allow us to utilize statistical signal processing tools, which can compensate for the adverse effects of low amplitudes and high noise. The analysis of real-time data requires tools and techniques that can be applied in real-time; i.e., data are processed and analyzed while being acquired. This paper presents some of the basic tools and techniques for processing and analyzing real-time vibration data. The topics discussed include utilization of running time windows, tracking mean and mean-square values, filtering, system identification, and damage detection.
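Tracking mean and mean-square values of a continuous stream is typically done with a forgetting factor so that old samples fade out; a minimal sketch (the factor 0.999 is an illustrative choice, not a value from the paper):

```python
class RunningStats:
    # Track the mean and mean-square of streaming samples with an
    # exponential forgetting factor lam (0 < lam < 1); values of lam
    # closer to 1 give a longer effective memory, ~1/(1-lam) samples.
    def __init__(self, lam=0.999):
        self.lam = lam
        self.mean = 0.0
        self.mean_sq = 0.0

    def update(self, x):
        # One O(1) update per incoming sample, suitable for real-time use.
        self.mean = self.lam * self.mean + (1 - self.lam) * x
        self.mean_sq = self.lam * self.mean_sq + (1 - self.lam) * x * x
        return self.mean, self.mean_sq
```

A sustained shift in the tracked mean-square of an ambient-vibration channel is one simple trigger for flagging a possible change in structural characteristics.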
Effective Analysis of Reaction Time Data
ERIC Educational Resources Information Center
Whelan, Robert
2008-01-01
Most analyses of reaction time (RT) data are conducted by using the statistical techniques with which psychologists are most familiar, such as analysis of variance on the sample mean. Unfortunately, these methods are usually inappropriate for RT data, because they have little power to detect genuine differences in RT between conditions. In…
Nonlinear time-series analysis revisited.
Bradley, Elizabeth; Kantz, Holger
2015-09-01
In 1980 and 1981, two pioneering papers laid the foundation for what became known as nonlinear time-series analysis: the analysis of observed data, typically univariate, via dynamical systems theory. Based on the concept of state-space reconstruction, this set of methods allows us to compute characteristic quantities such as Lyapunov exponents and fractal dimensions, to predict the future course of the time series, and even to reconstruct the equations of motion in some cases. In practice, however, there are a number of issues that restrict the power of this approach: whether the signal accurately and thoroughly samples the dynamics, for instance, and whether it contains noise. Moreover, the numerical algorithms that we use to instantiate these ideas are not perfect; they involve approximations, scale parameters, and finite-precision arithmetic, among other things. Even so, nonlinear time-series analysis has been used to great advantage on thousands of real and synthetic data sets from a wide variety of systems ranging from roulette wheels to lasers to the human heart. Even in cases where the data do not meet the mathematical or algorithmic requirements to assure full topological conjugacy, the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems. PMID:26428563
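The state-space reconstruction at the heart of these methods is a delay embedding; a minimal sketch (in practice the embedding dimension and delay must be chosen from the data, e.g. via false-nearest-neighbor and mutual-information criteria):

```python
import numpy as np

def delay_embed(x, dim, tau):
    # Map a scalar series x(t) to delay vectors
    # [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)],
    # one vector per row (Takens-style state-space reconstruction).
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
```

The resulting point cloud is the reconstructed attractor on which Lyapunov exponents, fractal dimensions, and predictions are then computed.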
Nonlinear Time Series Analysis via Neural Networks
NASA Astrophysics Data System (ADS)
Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin
This article deals with time series analysis based on neural networks for effective pattern recognition in the forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns that appear repeatedly in the market history, in order to adapt our trading system's behaviour to them.
Piccard: Pulsar timing data analysis package
NASA Astrophysics Data System (ADS)
van Haasteren, Rutger
2016-10-01
Piccard is a Bayesian-inference pipeline for Pulsar Timing Array (PTA) data that interacts with Tempo2 (ascl:1210.015) through libstempo. The code is used mainly for single-pulsar analysis and for gravitational-wave detection in full Pulsar Timing Array datasets. Modeling of the data can include correlated signals per frequency or a modeled spectrum, with uniform, dipolar, quadrupolar, or anisotropic correlations; multiple error bars and EFACs per pulsar; and white and red noise. Timing models can be included numerically, either by using the design matrix (linear timing model) or by calling libstempo for the full non-linear timing model. Many types of samplers are included. For common-mode mitigation, the signals can be reconstructed while mitigating arbitrary signals simultaneously.
Haar Wavelet Analysis of Climatic Time Series
NASA Astrophysics Data System (ADS)
Zhang, Zhihua; Moore, John; Grinsted, Aslak
2014-05-01
In order to extract the intrinsic information of climatic time series from background red noise, we first give an analytic formula for the distribution of Haar wavelet power spectra of red noise in a rigorous statistical framework. The relation between scale a and Fourier period T for the Morlet wavelet is a = 0.97T; for the Haar wavelet, the corresponding formula is a = 0.37T. Since for any time series of time step δt and total length Nδt the range of scales in wavelet-based time series analysis runs from the smallest resolvable scale 2δt to the largest scale Nδt, Haar wavelet analysis can extract more low-frequency intrinsic information. Finally, we use our method to analyze the Arctic Oscillation (AO), a key aspect of climate variability in the Northern Hemisphere, and discover a great change in fundamental properties of the AO, commonly called a regime shift or tipping point. Our partial results have been published as follows: [1] Z. Zhang, J.C. Moore and A. Grinsted, Haar wavelet analysis of climatic time series, Int. J. Wavelets, Multiresol. & Inf. Process., in press, 2013. [2] Z. Zhang, J.C. Moore, Comment on "Significance tests for the wavelet power and the wavelet power spectrum", Ann. Geophys., 30:12, 2012.
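The discrete Haar power per dyadic scale that underlies such an analysis can be sketched with the pyramidal Haar transform; this is a generic illustration, and the red-noise significance test derived in the paper is not reproduced here:

```python
import numpy as np

def haar_power_spectrum(x):
    # Mean squared Haar detail coefficient at each dyadic scale,
    # finest scale first. Input length must be a power of two.
    x = np.asarray(x, dtype=float)
    powers = []
    while len(x) > 1:
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # running pairwise sums
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # pairwise differences
        powers.append(np.mean(detail ** 2))
        x = approx  # recurse on the smoothed series (next coarser scale)
    return np.array(powers)
```

Comparing each scale's power against the corresponding red-noise expectation is then what separates intrinsic signal from background.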
Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging
NASA Technical Reports Server (NTRS)
Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William; Ehgamberdiev, Shuhrat; Landenkov, Oleg
1999-01-01
Time-distance analysis and acoustic imaging are two related techniques for probing the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross-correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/ℓ. The phase time of the cross-correlation function provides information on the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of constructed signals can be studied by computing the cross-correlation function between time series constructed with ingoing and outgoing waves. In this study, we use data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analyses to construct the acoustic signals in acoustic-imaging analyses. The phase time of the cross-correlation function of constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analyses, as predicted. The envelope peak of the cross-correlation function between constructed ingoing and outgoing time series is located at zero time, as predicted, for results of one bounce at 3 mHz for all four data sets and two bounces at 3 mHz for two TON data sets, but it differs from zero in the other cases. The cause of the deviation of the envelope peak from zero is not known.
Visibility Graph Based Time Series Analysis
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
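The natural visibility graph construction used in such mappings is straightforward, if naively O(n^3); a minimal sketch for one series segment:

```python
import numpy as np

def visibility_graph(x):
    # Adjacency matrix of the natural visibility graph: samples i and j
    # are linked if the straight line between the tops of their bars
    # clears (strictly) every sample in between.
    x = np.asarray(x, dtype=float)
    n = len(x)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # Visibility criterion: x[k] below the i-j chord for all k between.
            if all(x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                A[i, j] = A[j, i] = 1
    return A
```

Consecutive samples are always linked (the criterion is vacuously true), so the graph is connected; degree distributions of such graphs are then compared across successive segments.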
Optimization of noncollinear optical parametric amplification
NASA Astrophysics Data System (ADS)
Schimpf, D. N.; Rothardt, J.; Limpert, J.; Tünnermann, A.
2007-02-01
Noncollinearly phase-matched optical parametric amplifiers (NOPAs), pumped with the green light of a frequency-doubled Yb-doped fiber-amplifier system [1,2], permit convenient generation of ultrashort pulses in the visible (VIS) and near infrared (NIR) [3]. The broad bandwidth of the parametric gain in the noncollinear pump configuration allows amplification of few-cycle optical pulses when seeded with a spectrally flat, re-compressible signal. Short pulses tunable over a wide region of the visible make it possible to push frontiers in physics and the life sciences. For instance, the resulting high temporal resolution is significant for many spectroscopic techniques. Furthermore, the high peak powers of the produced pulses enable research in high-field physics. To understand the demands of noncollinear optical parametric amplification using a fiber pump source, it is important to investigate this configuration in detail [4]. Such an analysis provides not only insight into the parametric process but also an optimal choice of experimental parameters for the objective; here, the intention is to design a configuration that yields the shortest possible temporal pulse. As a consequence of this analysis, the experimental setup could be optimized. A number of aspects of optical parametric amplifier performance have been treated analytically and computationally [5], but these do not fully cover the situation under consideration here.
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
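A toy version of combining Monte Carlo sampling with n-factor (here 2-factor, i.e. pairwise) combinatorial coverage: randomly sampled test cases are kept until every value pair of every two parameters is covered. The parameter names are hypothetical, and the real PT tool is certainly more sophisticated:

```python
import itertools
import random

def pairwise_monte_carlo(params, seed=0):
    # params: dict mapping parameter name -> list of candidate values.
    # Returns a list of test cases (dicts) that together cover every
    # (param_a=value_a, param_b=value_b) pair at least once.
    rng = random.Random(seed)
    names = sorted(params)
    uncovered = set()
    for a, b in itertools.combinations(names, 2):
        for va in params[a]:
            for vb in params[b]:
                uncovered.add((a, va, b, vb))
    cases = []
    while uncovered:
        case = {n: rng.choice(params[n]) for n in names}  # Monte Carlo draw
        hit = {(a, case[a], b, case[b])
               for a, b in itertools.combinations(names, 2)}
        if hit & uncovered:          # keep only cases that add coverage
            uncovered -= hit
            cases.append(case)
    return cases
```

For k parameters with v values each, pairwise coverage needs on the order of v^2 log k cases rather than v^k, which is what makes testing a large simulation tractable.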
Timing analysis of PWR fuel pin failures
Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.
1992-09-01
This report discusses research conducted to develop and demonstrate a methodology for calculating the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold-leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document provides Appendices K and L of this report, which contain plots for the timing analysis of PWR fuel pin failures for Oconee and Seabrook, respectively.
Wavelet analysis of radon time series
NASA Astrophysics Data System (ADS)
Barbosa, Susana; Pereira, Alcides; Neves, Luis
2013-04-01
Radon is a radioactive noble gas with a half-life of 3.8 days, ubiquitous in both natural and indoor environments. Being produced in uranium-bearing materials by decay from radium, radon can be easily and accurately measured by nuclear methods, making it an ideal proxy for time-varying geophysical processes. Radon time series exhibit a complex temporal structure and large variability on multiple scales. Wavelets are therefore particularly suitable for the scale-by-scale analysis of time series of radon concentrations. In this study, continuous and discrete wavelet analyses are applied to describe the variability structure of hourly radon time series acquired both indoors and at a granite site in central Portugal. A multi-resolution decomposition is performed to extract sub-series associated with specific scales. The high-frequency components are modeled as stationary autoregressive moving-average (ARMA) processes. The amplitude and phase of the periodic components are estimated, and tidal features of the signals are assessed. Residual radon concentrations (after removal of periodic components) are further examined, and the wavelet spectrum is used to estimate the corresponding Hurst exponent. The results for the several radon time series considered in the present study are very heterogeneous in terms of both high-frequency and long-term temporal structure, indicating that radon concentrations are very site-specific and heavily influenced by local factors.
NASA Astrophysics Data System (ADS)
Wang, Wenqin; Nguang, Sing Kiong; Zhong, Shouming; Liu, Feng
2014-05-01
This study examines the problem of robust stability of uncertain stochastic genetic regulatory networks with time-varying delays. The system's uncertainties are modeled in both polytopic and structured linear-fractional forms. Based on a novel augmented Lyapunov-Krasovskii functional and different integral approaches, new stability conditions are derived. Furthermore, these stability criteria are applicable to both fast and slow time-varying delays. Finally, a numerical example is presented to illustrate the effectiveness of the proposed stability conditions.
Time-Series Analysis: A Cautionary Tale
NASA Technical Reports Server (NTRS)
Damadeo, Robert
2015-01-01
Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.
Lacitignola, Deborah; Saccomandi, Giuseppe
2014-03-01
We consider a simple mesoscopic model of DNA in which the binding of the RNA polymerase enzyme molecule to the promoter sequence of the DNA is included through a substrate energy term modeling the enzymatic interaction with the DNA strands. We focus on the differential system for solitary waves and derive conditions, in terms of the model parameters, for the occurrence of the parametric resonance phenomenon. We find that what truly matters for parametric resonance is not the ratio between the strength of the stacking and the inter-strand forces but the ratio between the substrate and the inter-strand forces. On the basis of these results, the standard objection that longitudinal motion is negligible because it is of second order seems to fail, suggesting that all studies involving the longitudinal degree of freedom in DNA should be reconsidered when the interaction of the RNA polymerase with the DNA macromolecule is not neglected. PMID:24510728
Parametric Explosion Spectral Model
Ford, S R; Walter, W R
2012-01-19
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum, derived from regional phases, that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, a three-parameter model that describes the long-period level, corner frequency, and spectral slope at high frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas porosity (low strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
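A three-parameter source spectrum of this kind can be written compactly; this is a common textbook form (p = 2 recovers the classic Brune omega-squared shape), and the exact generalized parameterization used by the authors may differ:

```python
import numpy as np

def source_spectrum(f, omega0, fc, p):
    # Three-parameter spectral model:
    #   omega0 - long-period (low-frequency) level
    #   fc     - corner frequency
    #   p      - high-frequency fall-off exponent (spectral slope)
    # S(f) -> omega0 for f << fc, and S(f) ~ f**(-p) for f >> fc.
    return omega0 / (1.0 + (np.asarray(f, dtype=float) / fc) ** p)
```

Fitting (omega0, fc, p) to observed regional-phase spectra, after correcting for geometrical spreading and attenuation, is what allows the slope parameter to be correlated with gas porosity and containment conditions.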
Ultra-Broad-Band Optical Parametric Amplifier or Oscillator
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry; Matsko, Andrey; Savchenkov, Anatoliy; Maleki, Lute
2009-01-01
A concept for an ultra-broad-band optical parametric amplifier or oscillator has emerged as a by-product of a theoretical study in fundamental quantum optics. The study was originally intended to address the question of whether the two-photon temporal correlation function of light [in particular, light produced by spontaneous parametric down conversion (SPDC)] can be considerably narrower than the inverse of the spectral width (bandwidth) of the light. The answer to the question was found to be negative. More specifically, on the basis of the universal integral relations between the quantum two-photon temporal correlation and the classical spectrum of light, it was found that the lower limit of two-photon correlation time is set approximately by the inverse of the bandwidth. The mathematical solution for the minimum two-photon correlation time also provides the minimum relative frequency dispersion of the down-converted light components; in turn, the minimum relative frequency dispersion translates to the maximum bandwidth, which is important for the design of an ultra-broad-band optical parametric oscillator or amplifier. In the study, results of an analysis of the general integral relations were applied in the case of an optically nonlinear, frequency-dispersive crystal in which SPDC produces collinear photons. Equations were found for the crystal orientation and pump wavelength, specific for each parametric-down-converting crystal, that eliminate the relative frequency dispersion of collinear degenerate (equal-frequency) signal and idler components up to the fourth order in the frequency-detuning parameter
The report gives results of a recent analysis showing that cost-effective indoor radon reduction technology is required for houses with initial radon concentrations < 4 pCi/L, because 78-86% of the national lung cancer risk due to radon is associated with those houses.
NASA Astrophysics Data System (ADS)
Miyazaki, Ryoichi; Saruwatari, Hiroshi; Shikano, Kiyohiro
We propose a structure-generalized blind spatial subtraction array (BSSA) and present a theoretical analysis of the amounts of musical noise and speech distortion. The structure of a BSSA should be selected according to the application: a channelwise BSSA is recommended for listening, whereas a conventional BSSA is suitable for speech recognition.
Natural Time Analysis and Complex Networks
NASA Astrophysics Data System (ADS)
Sarlis, Nicholas; Skordas, Efthimios; Lazaridou, Mary; Varotsos, Panayiotis
2013-04-01
Here, we review the analysis of complex time series in a new time domain, termed natural time, introduced by our group [1,2]. This analysis conforms to the desire to reduce uncertainty and extract signal information as much as possible [3]. It enables [4] the distinction between the two origins of self-similarity when analyzing data from complex systems, i.e., whether self-similarity solely results from long-range temporal correlations (the process's memory only) or solely from the infinite variance of the process's increments (heavy tails in their distribution). Natural time analysis captures the dynamical evolution of a complex system and identifies [5] when the system enters a critical stage. Hence, this analysis plays a key role in predicting forthcoming catastrophic events in general. Relevant examples, compiled in a recent monograph [6], have been presented in diverse fields, including Solid State Physics [7], Statistical Physics (for example, systems exhibiting self-organized criticality [8]), Cardiology [9,10], Earth Sciences [11] (Geophysics, Seismology), Environmental Sciences (e.g., see Ref. [12]), etc. Other groups have proposed and developed a network approach to earthquake events with encouraging results. A recent study [13] reveals that this approach is strengthened if we combine it with natural time analysis. In particular, we find [13,14] that the study of the spatial distribution of the variability [15] of the order parameter fluctuations, defined in natural time, provides important information on the dynamical evolution of the system. 1. P. Varotsos, N. Sarlis, and E. Skordas, Practica of Athens Academy, 76, 294-321, 2001. 2. P.A. Varotsos, N.V. Sarlis, and E.S. Skordas, Phys. Rev. E, 66, 011902, 2002. 3. S. Abe, N.V. Sarlis, E.S. Skordas, H.K. Tanaka and P.A. Varotsos, Phys. Rev. Lett. 94, 170601, 2005. 4. P.A. Varotsos, N.V. Sarlis, E.S. Skordas, H.K. Tanaka and M.S. Lazaridou, Phys. Rev. E, 74, 021123, 2006. 5. P. Varotsos, N.V. Sarlis, E.S. Skordas
Cluster analysis of respiratory time series.
Adams, J M; Attinger, E O; Attinger, F M
1978-03-01
We have investigated the respiratory control system with the hypothesis that, although many variables such as minute ventilation (VI), tidal volume (VT), breathing period (TT), inspiratory duration (TI), and expiratory duration (TE) may be observed, the controller functions more simply by manipulating only 2 or 3 of these. Thus, if tidal volume is the only independent variable, TI being determined by the "off-switch" threshold, these variables should have very similar time courses. Anesthetized dogs were subjected to CO2 breathing and carotid sinus perfusion to stimulate both chemoreceptors. The time series of the variables VI, VT, TT, TE, and TI, as well as PACO2, were determined on a breath-by-breath basis. Derived characteristics of these time series were compared using Cluster Analysis, and the latent dimensionality of respiratory control was determined by Factor Analysis. The characteristics of the time series clustered into 4 groups: magnitude (of the response), speed, variability, and relative change. One respiratory factor accounted for 86% of the variance for the variability characteristics, 2 factors for magnitude (91%) and relative change (85%), and 3 factors for speed (89%). The respiratory variables were analysed for each of the 4 groups of characteristics with the following results: VT and TI clustered together only for the magnitude and relative change characteristics, whereas TT and TE clustered closely for all four characteristics. One latent factor was associated with the [TT-TE] group and the other usually with PACO2.
Climate Time Series Analysis and Forecasting
NASA Astrophysics Data System (ADS)
Young, P. C.; Fildes, R.
2009-04-01
This paper will discuss various aspects of climate time series data analysis, modelling and forecasting being carried out at Lancaster. This will include state-dependent parameter, nonlinear, stochastic modelling of globally averaged atmospheric carbon dioxide; the computation of emission strategies based on modern control theory; and extrapolative time series benchmark forecasts of annual average temperature, both global and local. The key to the forecasting evaluation will be the iterative estimation of forecast error based on rolling origin comparisons, as recommended in the forecasting research literature. The presentation will conclude with a comparison of the time series forecasts with forecasts produced from global circulation models and a discussion of the implications for climate modelling research.
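Rolling-origin evaluation, as recommended in the forecasting literature, repeatedly applies a forecaster to a growing training window and scores it on the observation a fixed horizon ahead of each origin. A minimal sketch (function names and the naive random-walk benchmark are illustrative assumptions, not the paper's models):

```python
def rolling_origin_errors(series, min_train, horizon, forecast):
    # forecast(train, horizon) returns a point forecast `horizon`
    # steps beyond the end of the training window
    errors = []
    for origin in range(min_train, len(series) - horizon + 1):
        train = series[:origin]
        actual = series[origin + horizon - 1]
        errors.append(abs(forecast(train, horizon) - actual))
    return errors

# naive (random-walk) benchmark: forecast the last observed value
naive = lambda train, h: train[-1]
```

Averaging the returned errors gives the rolling-origin MAE against which candidate models can be benchmarked.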
Time fluctuation analysis of forest fire sequences
NASA Astrophysics Data System (ADS)
Vega Orozco, Carmen D.; Kanevski, Mikhaïl; Tonini, Marj; Golay, Jean; Pereira, Mário J. G.
2013-04-01
Forest fires are complex events involving both space and time fluctuations. Understanding their dynamics and pattern distribution is of great importance in order to improve resource allocation and support fire management actions at local and global levels. This study aims at characterizing the temporal fluctuations of forest fire sequences observed in Portugal, which is the country that holds the largest wildfire land dataset in Europe. This research applies several exploratory data analysis measures to 302,000 forest fires that occurred from 1980 to 2007. The applied clustering measures are: the Morisita clustering index, fractal and multifractal dimensions (box-counting), Ripley's K-function, the Allan Factor, and variography. These algorithms enable a global time structural analysis describing the degree of clustering of a point pattern and defining whether the observed events occur randomly, in clusters or in a regular pattern. The considered methods are of general importance and can be used for other spatio-temporal events (e.g. crime, epidemiology, biodiversity, geomarketing, etc.). An important contribution of this research deals with the analysis and estimation of local measures of clustering that help in understanding their temporal structure. Each measure is described and executed for the raw data (forest fires geo-database) and results are compared to reference patterns generated under the null hypothesis of randomness (Poisson processes) embedded in the same time period as the raw data. This comparison enables estimating the degree of deviation of the real data from a Poisson process. Generalizations to functional measures of these clustering methods, taking into account the phenomena, were also applied and adapted to detect time dependences in a measured variable (e.g. burned area). The time clustering of the raw data is compared several times with the Poisson processes at different thresholds of the measured function. Then, the clustering measure value
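Among the listed measures, the Allan Factor is straightforward to sketch: events are binned into contiguous windows of length T, and AF(T) = ⟨(N_{k+1} − N_k)²⟩ / (2⟨N_k⟩), which is close to 1 for a Poisson process and grows with T for clustered sequences. A minimal sketch (assuming at least two complete windows; names are illustrative):

```python
def allan_factor(event_times, window):
    # counts per contiguous window of length `window`; the trailing
    # partial window is discarded
    t_max = max(event_times)
    n_windows = int(t_max // window)
    counts = [0] * n_windows
    for t in event_times:
        idx = int(t // window)
        if idx < n_windows:
            counts[idx] += 1
    # AF(T) = <(N_{k+1} - N_k)^2> / (2 <N_k>)
    diffs = [(counts[k + 1] - counts[k]) ** 2
             for k in range(n_windows - 1)]
    mean_count = sum(counts) / n_windows
    return sum(diffs) / len(diffs) / (2 * mean_count)
```

Evaluating AF over a range of window lengths, for the fire catalogue and for matched Poisson surrogates, yields the clustering comparison described above.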
Class, gender and time poverty: a time-use analysis of British workers' free time resources.
Chatzitheochari, Stella; Arber, Sara
2012-09-01
Free time, that is, the time that remains at one's own discretion after conducting daily work and personal care activities, has been previously recognized as a 'primary good' and an important welfare resource that provides opportunities for participation in social life and leisure. However, recent years have witnessed an increasing preoccupation with the phenomenon of time poverty, drawing attention to the distribution of free time and its relationship to structural and family circumstances. In this article we propose a novel approach to the measurement of time poverty and document its occurrence amongst British workers. In line with previous literature, a conceptualization of time poverty as a relative lack of free time resources vis-à-vis other members of the community is adopted. However, unlike previous empirical studies, we investigate the differential configuration of time poverty on weekdays and weekend days, alongside indicators of the quality of free time, taking into account insights from theoretical and empirical work within the field of the sociology of time. Our analysis of the 2000 UK Time Use Survey highlights class and gender inequalities that have been missed by previous measurement approaches and demonstrates that, overall, working women experience multiple and more severe free time constraints, which may constitute an additional barrier for their leisure and social participation.
NASA Astrophysics Data System (ADS)
Ramesh, Azadeh; Glade, Thomas; Malet, Jean-Philippe
2010-09-01
The existence of a trend in hydrological and meteorological time series can be detected by statistical tests. Trend analysis of hydrological and meteorological series is important because of the effects of global climate change. Parametric or non-parametric statistical tests can be used to decide whether there is a statistically significant trend. In this paper, a homogeneity analysis was first performed using the non-parametric Bartlett test. Trend detection was then carried out using the non-parametric Mann-Kendall test, whose null hypothesis is that the data are independent and randomly ordered. The result of the Mann-Kendall test was compared with the parametric t-test. For this purpose, the significance of trends was analyzed on monthly river discharge and meteorological data of the Ubaye river in the Barcelonnette watershed in the southeast of France, at an elevation of 1132 m (3717 ft), for the period from 1928 to 2009. The results show that a rainfall event does not necessarily have an immediate impact on discharge. Visual inspection suggests that the correlation between observations made at the same time point is not very strong. In the trend tests, the p-value of the discharge is slightly smaller than the p-value of the precipitation, but in both cases there appears to be no statistically significant trend. A test statistic is a numerical summary that reduces a set of data to one or a few values that can be used to perform a hypothesis test; statistical hypothesis testing determines whether there is a significant trend or not. Negative Mann-Kendall test statistics for both precipitation and discharge data indicate downward trends. In conclusion, we can say that extreme flood events during recent years strongly depend on: 1) location of the city: It is
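The Mann-Kendall statistic S counts concordant minus discordant pairs and is compared against its variance under the null hypothesis of independent, randomly ordered data. A minimal sketch of the test (normal approximation, no tie correction; names are illustrative):

```python
import math
from itertools import combinations

def mann_kendall(x):
    # S: sum of sign(x_j - x_i) over all ordered pairs i < j
    n = len(x)
    s = sum((xj > xi) - (xj < xi) for xi, xj in combinations(x, 2))
    # variance of S under H0 (tie correction omitted in this sketch)
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)      # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the normal approximation
    p = math.erfc(abs(z) / math.sqrt(2))
    return s, z, p
```

A negative S (and z) indicates a downward trend, as reported for both the precipitation and discharge series; production analyses should add the tie correction omitted here.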
Time motion analysis of international kickboxing competition.
Ouergui, Ibrahim; Hssin, Nizar; Haddad, Monoem; Franchini, Emerson; Behm, David; Wong, Del P; Gmada, Nabil; Bouhlel, Ezzedine
2014-06-17
The objective of the study was to analyze the time structure of high-level kickboxing matches. A total of 45 combats from two male World Championships were monitored using a time motion analysis system. The combat time structure (i.e., high-intensity activity: HIA; low-intensity activity: LIA; and referee breaks or pauses) during competition and across weight divisions was determined and compared. Results indicated that the time structures were HIA: 2.2±1.2 s; LIA: 2.3±0.8 s; pauses: 5.4±4.3 s; and 3.4±1.2 s between two subsequent HIA. The fighting to non-fighting ratio was found to be 1:1. Moreover, the number of HIA and LIA and the time of LIA decreased in later rounds (e.g., the average numbers of HIA were 27.1±7.1, 25.1±6.6 and 24.9±6.1 for rounds 1, 2 and 3, respectively), while the time and number of pauses increased (e.g., the average pause times were 12.8±11.4, 22.3±22.6 and 24.6±23.3 s for rounds 1, 2 and 3, respectively). The activity times did not differ among weight categories. The present results confirm the intermittent nature of kickboxing competition and provide coaches with more information on how to structure training sessions to mimic the physical demands of competition.
A parametric analysis of waves propagating in a porous solid saturated by a three-phase fluid.
Santos, Juan E; Savioli, Gabriela B
2015-11-01
This paper presents an analysis of a model for the propagation of waves in a poroelastic solid saturated by a three-phase viscous, compressible fluid. The constitutive relations and the equations of motion are stated first. Then a plane wave analysis determines the phase velocities and attenuation coefficients of the four compressional waves and one shear wave that propagate in this type of medium. A procedure to compute the elastic constants in the constitutive relations is defined next. Assuming knowledge of the shear modulus of the dry matrix, the other elastic constants in the stress-strain relations are determined by employing ideal gedanken experiments generalizing those of Biot's theory for single-phase fluids. These experiments yield expressions for the elastic constants in terms of the properties of the individual solid and fluid phases. Finally, the phase velocities and attenuation coefficients of all waves are computed for a sample of Berea sandstone saturated by oil, gas, and water.
Parametric modelling of a knee joint prosthesis.
Khoo, L P; Goh, J C; Chow, S L
1993-01-01
This paper presents an approach for the establishment of a parametric model of a knee joint prosthesis. Four different sizes of a commercial prosthesis are used as an example in the study. A reverse engineering technique was employed to reconstruct the prosthesis on CATIA, a CAD (computer-aided design) system. Parametric models were established as a result of the analysis. Using the parametric model established and the knee data obtained from a clinical study on 21 pairs of cadaveric Asian knees, the development of a prototype prosthesis that suits a patient with a very small knee joint is presented. However, it was found that modification of certain parameters may be inevitable due to the uniqueness of the Asian knee. An avenue for rapid modelling and eventually economical production of a customized knee joint prosthesis for patients is proposed and discussed.
Time Series Analysis of SOLSTICE Measurements
NASA Astrophysics Data System (ADS)
Wen, G.; Cahalan, R. F.
2003-12-01
Solar radiation is the major energy source for the Earth's biosphere and atmospheric and ocean circulations. Variations of solar irradiance have been a major concern of scientists both in solar physics and atmospheric sciences. A number of missions have been carried out to monitor changes in total solar irradiance (TSI) [see Fröhlich and Lean, 1998, for a review] and spectral solar irradiance (SSI) [e.g., SOLSTICE on UARS and VIRGO on SOHO]. Observations over a long time period reveal the connection between variations in solar irradiance and surface magnetic fields of the Sun [Lean, 1997]. This connection provides a guide to scientists in modeling solar irradiances [e.g., Fontenla et al., 1999; Krivova et al., 2003]. Solar spectral observations have now been made over a relatively long time period, allowing statistical analysis. This paper focuses on the predictability of solar spectral irradiance using observed SSI from SOLSTICE. Analysis of predictability is based on nonlinear dynamics using an artificial neural network in a reconstructed phase space [Abarbanel et al., 1993]. In the analysis, we first examine the average mutual information of the observed time series and a delayed time series. The time delay that gives a local minimum of mutual information is chosen as the time delay for phase space reconstruction [Fraser and Swinney, 1986]. The embedding dimension of the reconstructed phase space is determined using the false neighbors and false strands method [Kennel and Abarbanel, 2002]. Subsequently, we use a multi-layer feed-forward network with a back propagation scheme [e.g., Haykin, 1994] to model the time series. The predictability of solar irradiance as a function of wavelength is considered. References Abarbanel, H. D. I., R. Brown, J. J. Sidorowich, and L. Sh. Tsimring, Rev. Mod. Phys. 65, 1331, 1993. Fraser, A. M. and H. L. Swinney, Phys. Rev. 33A, 1134, 1986. Fontenla, J., O. R. White, P. Fox, E. H. Avrett and R. L. Kurucz, The Astrophysical Journal, 518, 480
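The time-delay selection step described above can be sketched with a simple histogram estimator of the average mutual information between the series and a delayed copy of itself; a delay at a local minimum of this curve is then used for phase-space reconstruction. A minimal sketch (the binning choice and function names are illustrative assumptions):

```python
import math

def mutual_information(x, y, bins=16):
    # plug-in 2-D histogram estimate of I(X;Y) in nats
    n = len(x)
    lo_x, hi_x, lo_y, hi_y = min(x), max(x), min(y), max(y)
    bx = lambda v: min(int((v - lo_x) / (hi_x - lo_x) * bins), bins - 1)
    by = lambda v: min(int((v - lo_y) / (hi_y - lo_y) * bins), bins - 1)
    pxy = {}
    for xv, yv in zip(x, y):
        k = (bx(xv), by(yv))
        pxy[k] = pxy.get(k, 0) + 1
    px, py = {}, {}
    for (i, j), c in pxy.items():
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    return sum(c / n * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

def delayed_mutual_information(series, max_lag, bins=16):
    # AMI between the series and itself shifted by lag = 1..max_lag
    return [mutual_information(series[:-lag], series[lag:], bins)
            for lag in range(1, max_lag + 1)]
```

For a sinusoid, the AMI curve dips near a quarter period, illustrating why the minimum-AMI delay decorrelates the reconstruction coordinates.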
Timing analysis of PWR fuel pin failures
Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J. ); Straka, M. )
1992-09-01
Research has been conducted to develop and demonstrate a methodology for calculation of the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. This document also contains appendices A through J of this report.
Parametric instabilities in the LCGT arm cavity
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Uchiyama, T.; Miyoki, S.; Ohashi, M.; Kuroda, K.; Numata, K.
2008-07-01
We evaluated the parametric instabilities of the LCGT (Japanese interferometric gravitational wave detector project) arm cavity. The number of unstable modes of LCGT is 10 times smaller than that of Advanced LIGO (USA). Since the strength of the instabilities of LCGT depends on the mirror curvature more weakly than that of Advanced LIGO, the requirement on the mirror curvature accuracy is easier to achieve. The difference in the parametric instabilities between LCGT and Advanced LIGO stems from the thermal noise reduction methods (LCGT, cooling sapphire mirrors; Advanced LIGO, fused silica mirrors with larger laser beams), which are the main strategies of the projects. Elastic Q reduction by the barrel surface coating (0.2 mm thick Ta2O5) is effective in suppressing instabilities in the LCGT arm cavity. Therefore, the cryogenic interferometer is a smart solution for the parametric instabilities in addition to thermal noise and thermal lensing.
Time-resolved force distribution analysis
2013-01-01
Background Biomolecules or other complex macromolecules undergo conformational transitions upon exposure to an external perturbation such as ligand binding or mechanical force. To follow fluctuations in pairwise forces between atoms or residues during such conformational changes as observed in Molecular Dynamics (MD) simulations, we developed Time-Resolved Force Distribution Analysis (TRFDA). Results The implementation focuses on computational efficiency and low memory usage and, along with the wide range of output options, makes possible time series analysis of pairwise force variations in long MD simulations and for large molecular systems. It also provides an exact decomposition of pairwise forces resulting from 3- and 4-body potentials and a unified treatment of pairwise forces between atoms or residues. As a proof of concept, we present a stress analysis during unfolding of ubiquitin in a force-clamp MD simulation. Conclusions TRFDA can be used, among others, in tracking signal propagation at the atomic level, for characterizing dynamical intermolecular interactions (e.g. protein-ligand during flexible docking), in the development of force fields and for following stress distribution during conformational changes. PMID:24499624
Investigation of Bank Filtration in Gravel and Sand Aquifers using Time-Series Analysis
NASA Astrophysics Data System (ADS)
Vogt, T.; Hoehn, E.; Schneider, P.; Cirpka, O. A.
2009-04-01
Drinking-water wells in the vicinity of rivers may be influenced by infiltration of river water. In the context of drinking-water protection, the decisive questions concern the fraction of river infiltrate in the pumped water and the residence time in the aquifer. For this purpose, tracer experiments may be performed. At larger rivers, however, such tests require the injection of large amounts of tracer. As an alternative to artificial-tracer tests, we present methods in which time series of electric conductivity and temperature are used for quantitative statements regarding mixing ratios and residence times. We recommend a multi-step approach consisting of: (1) a qualitative analysis of the time series, (2) the spectral determination of the seasonal temperature and conductivity signals, (3) a cross-correlation analysis, and (4) the non-parametric deconvolution of the time series. We apply these methods to two sites in the aquifer of the Thur valley in the Swiss Plateau. At sites without good connection between river and groundwater or where the river gains groundwater, the elaborate methods of time-series analysis are not applicable, but the time series indicate such conditions. At sites with continuous river-water infiltration, we can reconstruct the breakthrough curve of a tracer test without releasing an artificial tracer into the river.
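Step (3), the cross-correlation analysis, can be sketched as follows: for two equally sampled series (e.g. electric conductivity in the river and in the well), the lag that maximizes the normalized cross-correlation gives a first estimate of the dominant residence time. The simple time-domain estimator and all names below are illustrative assumptions, not the authors' implementation:

```python
def cross_correlation_lag(river, well, max_lag):
    # normalized cross-correlation of two equally sampled series;
    # the lag maximizing it estimates the dominant travel time
    def norm(s):
        m = sum(s) / len(s)
        sd = (sum((v - m) ** 2 for v in s) / len(s)) ** 0.5
        return [(v - m) / sd for v in s]
    r, w = norm(river), norm(well)
    best_lag, best_r = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = len(r) - lag
        c = sum(r[i] * w[i + lag] for i in range(n)) / n
        if c > best_r:
            best_lag, best_r = lag, c
    return best_lag, best_r
```

A single lag is of course only a summary; the non-parametric deconvolution of step (4) recovers the full travel-time distribution instead.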
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to the automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
Analysis of Polyphonic Musical Time Series
NASA Astrophysics Data System (ADS)
Sommer, Katrin; Weihs, Claus
A general model for pitch tracking of polyphonic musical time series will be introduced. Based on a model of Davy and Godsill (Bayesian harmonic models for musical pitch estimation and analysis, Technical Report 431, Cambridge University Engineering Department, 2002), the different pitches of the musical sound are estimated simultaneously with MCMC methods. Additionally, a preprocessing step is designed to improve the estimation of the fundamental frequencies (A comparative study on polyphonic musical time series using MCMC methods. In C. Preisach et al., editors, Data Analysis, Machine Learning, and Applications, Springer, Berlin, 2008). The preprocessing step compares real audio data with an alphabet constructed from the McGill Master Samples (Opolko and Wapnick, McGill University Master Samples [Compact disc], McGill University, Montreal, 1987), which consists of tones of different instruments. The tones with minimal Itakura-Saito distortion (Gray et al., Transactions on Acoustics, Speech, and Signal Processing, ASSP-28(4):367-376, 1980) are chosen as first estimates and as starting points for the MCMC algorithms. Furthermore, the implementation of the alphabet is an approach for the recognition of the instruments generating the musical time series. Results are presented for mixed monophonic data from McGill and for self-recorded polyphonic audio data.
Radar Interferometry Time Series Analysis and Tools
NASA Astrophysics Data System (ADS)
Buckley, S. M.
2006-12-01
We consider the use of several multi-interferogram analysis techniques for identifying transient ground motions. Our approaches range from specialized InSAR processing for persistent scatterer and small baseline subset methods to the post-processing of geocoded displacement maps using a linear inversion-singular value decomposition solution procedure. To better understand these approaches, we have simulated sets of interferograms spanning several deformation phenomena, including localized subsidence bowls with constant velocity and seasonal deformation fluctuations. We will present results and insights from the application of these time series analysis techniques to several land subsidence study sites with varying deformation and environmental conditions, e.g., arid Phoenix and coastal Houston-Galveston metropolitan areas and rural Texas sink holes. We consistently find that the time invested in implementing, applying and comparing multiple InSAR time series approaches for a given study site is rewarded with a deeper understanding of the techniques and deformation phenomena. To this end, and with support from NSF, we are preparing a first-version of an InSAR post-processing toolkit to be released to the InSAR science community. These studies form a baseline of results to compare against the higher spatial and temporal sampling anticipated from TerraSAR-X as well as the trade-off between spatial coverage and resolution when relying on ScanSAR interferometry.
No Time for Dead Time: Timing Analysis of Bright Black Hole Binaries with NuSTAR
Bachetti, Matteo; Barret, Didier; Harrison, Fiona A.; Cook, Rick; Grefenstette, Brian W.; Fürst, Felix; Tomsick, John; Boggs, Steven E.; Craig, William W.; Schmid, Christian; Christensen, Finn E.; Fabian, Andrew C.; Kara, Erin; Gandhi, Poshak; Hailey, Charles J.; Maccarone, Thomas J.; Miller, Jon M.; Pottschmidt, Katja; Stern, Daniel; Uttley, Phil; and others
2015-02-20
Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (∼2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339–4, Cyg X-1, and GRS 1915+105.
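The core idea, taking the real part of the cross power density spectrum of the two independent focal planes so that uncorrelated (dead-time-affected) noise averages to zero, can be sketched as follows (a slow O(n²) DFT is used only to keep the sketch self-contained; function names are illustrative, not the authors' pipeline):

```python
import cmath

def dft(x):
    # direct discrete Fourier transform (O(n^2), fine for a sketch)
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def cospectrum(a, b):
    # real part of the cross power density spectrum of two
    # simultaneously binned light curves, positive frequencies only;
    # independent noise in the two channels averages to zero here
    fa, fb = dft(a), dft(b)
    n = len(a)
    return [(fa[k] * fb[k].conjugate()).real / n
            for k in range(1, n // 2)]
```

A common signal present in both channels survives in the cospectrum, while the white-noise floor (and its dead-time distortion, which is uncorrelated between the two detectors) is suppressed.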
2013-01-01
Background For genetic association studies in designs of unrelated individuals, current statistical methodology typically models the phenotype of interest as a function of the genotype and assumes a known statistical model for the phenotype. In the analysis of complex phenotypes, especially in the presence of ascertainment conditions, the specification of such model assumptions is not straightforward and is error-prone, potentially causing misleading results. Results In this paper, we propose an alternative approach that treats the genotype as the random variable and conditions upon the phenotype. Thereby, the validity of the approach does not depend on the correctness of assumptions about the phenotypic model. Misspecification of the phenotypic model may lead to reduced statistical power. Theoretical derivations and simulation studies demonstrate both the validity and the advantages of the approach over existing methodology. In the COPDGene study (a GWAS for Chronic Obstructive Pulmonary Disease (COPD)), we apply the approach to a secondary, quantitative phenotype, the Fagerstrom nicotine dependence score, that is correlated with COPD affection status. The software package that implements this method is available. Conclusions The flexibility of this approach enables the straightforward application to quantitative phenotypes and binary traits in ascertained and unascertained samples. In addition to its robustness features, our method provides the platform for the construction of complex statistical models for longitudinal data, multivariate data, multi-marker tests, rare-variant analysis, and others. PMID:23448186
Ishii, Ryouhei; Canuet, Leonides; Aoki, Yasunori; Ikeda, Shunichiro; Hata, Masahiro; Iwase, Masao; Takeda, Masatoshi
2013-01-01
Adaptive nonlinear beamformer techniques for analyzing magnetoencephalography (MEG) data have proven to be powerful tools for both brain research and clinical applications. A general method of analyzing multiple-subject data with a formal statistical treatment for the group data has been developed and applied to various types of MEG data. Our latest application of this method was the frontal midline theta rhythm (Fmθ), which indicates focused attention and appears widely distributed over medial prefrontal areas in EEG recordings. To precisely localize cortical generators of the magnetic counterpart of Fmθ and identify cortical sources and underlying neural activity associated with mental calculation processing (i.e., arithmetic subtraction), we applied an adaptive nonlinear beamformer and permutation analysis to MEG data. The results indicate that Fmθ is generated in the dorsal anterior cingulate and adjacent medial prefrontal cortex. Gamma event-related synchronization is an index of activation in right parietal regions subserving mental subtraction associated with basic numerical processing and number-based spatial attention. Gamma desynchronization appeared in the right lateral prefrontal cortex, likely representing a mechanism to interrupt neural activity that can interfere with the ongoing cognitive task. We suggest that the combination of an adaptive nonlinear beamformer and permutation analysis of MEG data is a powerful tool to reveal the oscillatory neuronal dynamics of the human brain. PMID:24110810
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1974-01-01
The capabilities of a real-aperture, forward-looking imaging radar for use as an independent landing monitor are analyzed. Such a monitor would provide the pilot with an independent means of assessing the progress of an automatic landing during Category 3 operations. The analysis shows that adequate ground resolution and signal-to-noise ratio can be obtained to image a runway with grassy surroundings using a radar operating at 35 GHz in good weather and in most fog, but that performance is severely degraded in moderate to heavy rain and wet snow. Weather effects on a 10 GHz imager are not serious, with the possible exception of very heavy rain, but the azimuthal resolution at 10 GHz is inadequate with antennas up to 2 m long.
NASA Astrophysics Data System (ADS)
Abbas, Musharaf; Hasham, Hasan Junaid; Baig, Yasir
2016-02-01
A numerical finite element investigation has been conducted to explain the effect of bond coat thickness on stress distribution in traditional and nanostructured yttria-stabilized zirconia (YSZ)-based thermal barrier coatings (TBC). Stress components have been determined to quantitatively analyze the mechanical response of both kinds of coatings under the thermal shock effect. It has been found that the maximum radial tensile and compressive stresses, which exist at the thermally grown oxide (TGO)/bond coat interface and within the TGO, respectively, decrease with an increase in bond coat thickness. The effect of bond coat thickness on axial tensile stresses is not significant. However, the axial compressive stresses that exist at the edge of the specimen near the bond coat/substrate interface decrease appreciably with the increase in bond coat thickness. The residual stress profile as a function of bond coat thickness is further explained for a comparative analysis of both coatings, to draw some useful conclusions helpful in failure studies of TBCs.
NASA Astrophysics Data System (ADS)
Ahn, J.; Chen, L.; Davies, C. M.; Dear, J. P.
2016-11-01
In this work, thin sheets of Ti-6Al-4V were full-penetration welded using a 5 kW fibre laser in order to evaluate the effectiveness of a high-power fibre laser as a tool for welding Ti-6Al-4V to the requirements of the aircraft industry, and to determine the effect of welding parameters, including laser power, welding speed and beam focal position, on the weld microstructure, bead profile and weld quality. This involved establishing an understanding of the influence of welding parameters on microstructural change, welding defects, and the characteristics of the heat affected zone (HAZ) and weld metal (WM) of fibre laser welded joints. The optimum range of welding parameters which produced welds without cracking and porosity was identified. The influence of the welding parameters on weld joint heterogeneity was characterised by conducting detailed microstructural analysis.
Sliced Inverse Regression for Time Series Analysis
NASA Astrophysics Data System (ADS)
Chen, Li-Sue
1995-11-01
In this thesis, general nonlinear models for time series data are considered. A basic form is x_t = f(beta_1^T X_{t-1}, beta_2^T X_{t-1}, ..., beta_k^T X_{t-1}, epsilon_t), where x_t is the observed time series, X_{t-1} = (x_{t-1}, x_{t-2}, ..., x_{t-d}) is the vector of the first d time lags, f is an unknown function, the beta_i are unknown vectors, and the epsilon_t are independently distributed. Special cases include AR and TAR models. We investigate the feasibility of applying SIR/PHD (Li 1990, 1991) (the sliced inverse regression and principal Hessian directions methods) to estimate the beta_i. PCA (principal component analysis) is brought in to check one critical condition for SIR/PHD. Through simulation and a study of three well-known data sets (the Canadian lynx, U.S. unemployment rate, and sunspot numbers), we demonstrate how SIR/PHD can effectively retrieve interesting low-dimensional structures in time series data.
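The core SIR step named above (slice the sorted response, average the standardized predictors within each slice, then eigen-decompose the covariance of the slice means) can be sketched in Python with NumPy; the toy single-index model and all names here are illustrative, not taken from the thesis:

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_components=1):
    """Sketch of SIR (Li, 1991): estimate effective dimension-reduction
    directions from the covariance of standardized slice means."""
    n, d = X.shape
    mu = X.mean(axis=0)
    # Whiten predictors: z = L^{-1}(x - mu), where cov = L L^T
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = np.linalg.solve(L, (X - mu).T).T
    # Slice the sorted response and average Z within each slice
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((d, d))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of the slice-mean covariance, mapped back
    _, vecs = np.linalg.eigh(M)
    eta = vecs[:, ::-1][:, :n_components]
    beta = np.linalg.solve(L.T, eta)          # beta^T x = eta^T z
    return beta / np.linalg.norm(beta, axis=0)

# Toy single-index model: y depends on X only through beta_true^T X
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
beta_true = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2.0)
y = np.sin(X @ beta_true) + 0.1 * rng.normal(size=2000)
beta_hat = sliced_inverse_regression(X, y).ravel()
```

Up to sign, beta_hat should align closely with beta_true; the linearity condition SIR needs is satisfied here because X is Gaussian.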
Simple heterogeneity parametrization for sea surface temperature and chlorophyll
NASA Astrophysics Data System (ADS)
Skákala, Jozef; Smyth, Timothy J.
2016-06-01
Using satellite maps, this paper offers a comprehensive analysis of chlorophyll and sea surface temperature (SST) heterogeneity in the shelf seas around the southwest of the UK. The heterogeneity scaling follows a simple power law and is consequently parametrized by two parameters. It is shown that in most cases these two parameters vary relatively little over time. The paper offers a detailed comparison of field heterogeneity between different regions. It is also determined how much of each region's heterogeneity is preserved in the annual median data. The paper explicitly demonstrates how these results can be used to calculate the representative measurement area for in situ networks.
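A two-parameter power-law parametrization of this kind amounts to a straight-line fit in log-log space. A minimal NumPy sketch (box sizes, parameter values, and the heterogeneity measure H are made up for illustration):

```python
import numpy as np

# Hypothetical heterogeneity measure H(L) at several box sizes L, assumed
# to follow H(L) = c * L**gamma; the two parameters are then the slope
# and intercept of a straight line in log-log space.
L_box = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 128.0])
c_true, gamma_true = 0.5, 0.7
H = c_true * L_box ** gamma_true

# Recover (c, gamma) by linear regression on log H vs log L
gamma_hat, log_c_hat = np.polyfit(np.log(L_box), np.log(H), 1)
c_hat = float(np.exp(log_c_hat))
```

With noisy satellite-derived H values the same fit applies, with the residual scatter indicating how well the power law holds.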
NASA Astrophysics Data System (ADS)
Ahmadian, Mehdi; Blanchard, Emmanuel
2011-02-01
This article provides a non-dimensionalised closed-form analysis of semi-active vehicle suspensions, using a quarter-car model. The derivation of the closed-form solutions for three indices that can be used for ride comfort, vehicle handling, and stability are presented based on non-dimensionalised suspension parameters. The behaviour of semi-active vehicle suspensions is evaluated using skyhook, groundhook, and hybrid control policies, and compared with passive suspensions. The relationship between vibration isolation, suspension deflection, and road holding is studied, using three performance indices based on the mean square of the sprung mass acceleration, rattle space, and tyre deflection, respectively. The results of the study indicate that the hybrid control policy yields significantly better comfort than a passive suspension, without reducing the road-holding quality or increasing the suspension displacement for typical passenger cars. The results also indicate that for typical passenger cars, the hybrid control policy results in a better compromise between comfort, road holding and suspension travel requirements than both the skyhook and groundhook control methods.
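The three control policies compared in the article are commonly stated as simple switching laws; a minimal sketch in Python (the sign conventions and gains are illustrative textbook assumptions, not the article's non-dimensionalised formulation):

```python
def skyhook_force(v_body, v_rel, c_sky):
    """Semi-active skyhook: command the ideal sky-damper force c_sky*v_body
    only when the real damper can produce it (dissipative quadrant)."""
    return c_sky * v_body if v_body * v_rel > 0.0 else 0.0

def groundhook_force(v_wheel, v_rel, c_ground):
    """Groundhook: target unsprung-mass (wheel-hop) motion instead."""
    return -c_ground * v_wheel if -v_wheel * v_rel > 0.0 else 0.0

def hybrid_force(v_body, v_wheel, v_rel, c_sky, c_ground, alpha=0.5):
    """Hybrid policy: convex blend of the skyhook and groundhook commands;
    alpha = 1 recovers pure skyhook, alpha = 0 pure groundhook."""
    return (alpha * skyhook_force(v_body, v_rel, c_sky)
            + (1.0 - alpha) * groundhook_force(v_wheel, v_rel, c_ground))
```

For example, with the sprung mass moving up (v_body = 1.0 m/s) while the suspension extends (v_rel = 0.5 m/s), skyhook commands the full damper force; when the relative velocity reverses, it commands zero.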
Kato, Ryuji; Matsumoto, Megumi; Sasaki, Hiroto; Joto, Risako; Okada, Mai; Ikeda, Yurika; Kanie, Kei; Suga, Mika; Kinehara, Masaki; Yanagihara, Kana; Liu, Yujung; Uchio-Yamada, Kozue; Fukuda, Takayuki; Kii, Hiroaki; Uozumi, Takayuki; Honda, Hiroyuki; Kiyota, Yasujiro; Furue, Miho K
2016-01-01
Given the difficulties inherent in maintaining human pluripotent stem cells (hPSCs) in a healthy state, hPSCs should be routinely characterized using several established standard criteria during expansion for research or therapeutic purposes. hPSC colony morphology is typically considered an important criterion, but it is not evaluated quantitatively. Thus, we designed an unbiased method to evaluate hPSC colony morphology. This method involves a combination of automated non-labelled live-cell imaging and the implementation of morphological colony analysis algorithms with multiple parameters. To validate the utility of the quantitative evaluation method, a parent cell line exhibiting typical embryonic stem cell (ESC)-like morphology and an aberrant hPSC subclone demonstrating unusual colony morphology were used as models. According to statistical colony classification based on morphological parameters, colonies containing readily discernible areas of differentiation constituted a major classification cluster and were distinguishable from typical ESC-like colonies; similar results were obtained via classification based on global gene expression profiles. Thus, the morphological features of hPSC colonies are closely associated with cellular characteristics. Our quantitative evaluation method provides a biological definition of ‘hPSC colony morphology’, permits the non-invasive monitoring of hPSC conditions and is particularly useful for detecting variations in hPSC heterogeneity. PMID:27667091
Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Vu; Duong, Tuan
2005-01-01
A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
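The flavor of sequential PCA by gradient descent with deflation can be sketched as follows. This is a plain fixed-learning-rate simplification for illustration only; it deliberately omits DOGEDYN's dominant-element update and dynamic initial learning rate:

```python
import numpy as np

def sequential_pca(data, n_components, lr=0.1, iters=200, seed=0):
    """Extract principal vectors one at a time: gradient ascent on the
    variance w^T C w with renormalization, then deflate C and repeat."""
    rng = np.random.default_rng(seed)
    X = data - data.mean(axis=0)
    C = X.T @ X / len(X)                  # sample covariance
    comps = []
    for _ in range(n_components):
        w = rng.normal(size=C.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            w = w + lr * (C @ w)          # gradient step on w^T C w / 2
            w /= np.linalg.norm(w)        # keep w on the unit sphere
        lam = w @ C @ w
        comps.append(w)
        C = C - lam * np.outer(w, w)      # deflate the extracted component
    return np.array(comps)

# Synthetic data whose top two principal axes are the first two coordinates
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2)) * [5.0, 1.0]
mix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
data = latent @ mix + 0.1 * rng.normal(size=(500, 3))
W = sequential_pca(data, 2)
```

The deflation step is what makes the extraction sequential: each new vector is sought in the residual covariance, which is where DOGEDYN's dynamic initial learning rate compensates for the energy already removed.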
Stimulated parametric emission microscopy
NASA Astrophysics Data System (ADS)
Isobe, Keisuke; Kataoka, Shogo; Murase, Rena; Watanabe, Wataru; Higashi, Tsunehito; Kawakami, Shigeki; Matsunaga, Sachihiro; Fukui, Kiichi; Itoh, Kazuyoshi
2006-01-01
We propose a novel microscopy technique based on the four-wave mixing (FWM) process that is enhanced by two-photon electronic resonance induced by a pump pulse along with stimulated emission induced by a dump pulse. A Ti:sapphire laser and an optical parametric oscillator are used as light sources for the pump and dump pulses, respectively. We demonstrate that our proposed FWM technique can be used to obtain a one-dimensional image of ethanol-thinned Coumarin 120 solution sandwiched between a hole-slide glass and a cover slip, and a two-dimensional image of a leaf of Camellia sinensis.
NASA Technical Reports Server (NTRS)
Rosu, Grigore (Inventor); Chen, Feng (Inventor); Chen, Guo-fang; Wu, Yamei; Meredith, Patrick O. (Inventor)
2014-01-01
A program trace is obtained and events of the program trace are traversed. For each event identified in traversing the program trace, a trace slice of which the identified event is a part is identified based on the parameter instance of the identified event. For each trace slice of which the identified event is a part, the identified event is added to an end of a record of the trace slice. These parametric trace slices can be used in a variety of different manners, such as for monitoring, mining, and predicting.
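The slicing step described above can be sketched in Python. This simplification keys each event on its exact parameter instance and ignores the partial-binding (less-informative instance) cases that a full parametric-trace-slicing implementation must handle; the event names are hypothetical:

```python
from collections import defaultdict

def slice_trace(trace):
    """Split a parametric trace into per-parameter-instance slices: each
    event is appended to the end of the slice for its parameter instance."""
    slices = defaultdict(list)
    for name, params in trace:
        slices[params].append(name)
    return dict(slices)

# Hypothetical iterator events, parametric in the iterator instance
trace = [("create", ("iter1",)), ("create", ("iter2",)),
         ("next", ("iter1",)), ("next", ("iter1",)), ("next", ("iter2",))]
slices = slice_trace(trace)
```

Each resulting slice is then an ordinary non-parametric trace, which is what makes it usable for monitoring, mining, and prediction.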
Time-dependent rotating stratified shear flow: exact solution and stability analysis.
Salhi, A; Cambon, C
2007-01-01
A solution of the Euler equations with the Boussinesq approximation is derived by considering unbounded flows subjected to spatially uniform density stratification and a time-dependent shear rate [S(t) = ∂U_3/∂x_2]. In addition to vertical stratification with constant strength N_v^2, this base flow includes an additional horizontal density gradient characterized by N_h^2(t). The stability of this flow is then analyzed: when the vertical stratification is stabilizing, the horizontal stratification N_h^2(t) and the shear rate S(t) undergo simple harmonic motion, but this flow is unstable to certain disturbances, which are amplified by a Floquet mechanism. The analysis may include an additional Coriolis effect with Coriolis parameter f, so that the governing dimensionless parameters are a modified Richardson number, R = [S(0)^2 + N_h^4(0)/N_v^2]^{1/2}, and f_v = f/N_v, as well as the initial phase of the periodic shear rate. Parametric resonance between the inertia-gravity waves and the oscillating shear is demonstrated from the dispersion relation in the limit R -> 0. The parametric instability is connected with both the baroclinic and the elliptical-flow instabilities, but can develop from a very different base flow.
Wey, Andrew; Connett, John; Rudser, Kyle
2015-07-01
For estimating conditional survival functions, non-parametric estimators can be preferred to parametric and semi-parametric estimators due to relaxed assumptions that enable robust estimation. Yet, even when misspecified, parametric and semi-parametric estimators can possess better operating characteristics in small sample sizes due to smaller variance than non-parametric estimators. Fundamentally, this is a bias-variance trade-off situation in that the sample size is not large enough to take advantage of the low bias of non-parametric estimation. Stacked survival models estimate an optimally weighted combination of models that can span parametric, semi-parametric, and non-parametric models by minimizing prediction error. An extensive simulation study demonstrates that stacked survival models consistently perform well across a wide range of scenarios by adaptively balancing the strengths and weaknesses of individual candidate survival models. In addition, stacked survival models perform as well as or better than the model selected through cross-validation. Finally, stacked survival models are applied to a well-known German breast cancer study.
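A one-dimensional sketch of the stacking idea follows. All data are synthetic and the two "models" are caricatures; real stacked survival models weight several candidate models and use cross-validated survival predictions with censoring handled properly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
p_true = rng.uniform(0.1, 0.9, size=n)                # true survival probs
alive = (rng.uniform(size=n) < p_true).astype(float)  # status at time t

# Two caricature candidates: a biased "parametric" model (low variance,
# wrong by a constant) and a noisy "non-parametric" model (unbiased).
pred_par = np.clip(p_true + 0.25, 0.0, 1.0)
pred_np = np.clip(p_true + 0.35 * rng.normal(size=n), 0.0, 1.0)

def brier(pred):
    """Brier-type squared prediction error against the status indicator."""
    return float(np.mean((pred - alive) ** 2))

# Choose the convex stacking weight that minimizes prediction error
grid = np.linspace(0.0, 1.0, 101)
losses = [brier(w * pred_par + (1.0 - w) * pred_np) for w in grid]
w_star = float(grid[int(np.argmin(losses))])
```

By construction the stacked combination does at least as well as either candidate alone on the criterion being minimized, which is the bias-variance balancing act the abstract describes.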
Time-frequency analysis using sidelobe apodization
NASA Astrophysics Data System (ADS)
Thomas, Gabriel
2001-11-01
Time-frequency techniques have been successfully used in the analysis of non-stationary signals. Several approaches have been proposed that address concerns such as time-frequency (TF) resolution and the elimination of cross-terms. In this work, a TF technique based on the use of Spatially Variant Apodization (SVA) is introduced that focuses on the detection of non-stationary signals that consist of several components with different amplitudes. The SVA approach is applied to the Short-Time Fourier Transform (STFT) to detect low-intensity components that are buried in the high sidelobes of other components. Resolution using SVA is better than that obtained using the STFT with non-rectangular windows. Synthesis can be performed using the overlap-add method. Because of the implementation of SVA, the modified STFT using sidelobe apodization can achieve good resolution, detect low-intensity components, and show no cross-terms in the TF plane, provided that stationarity can be assumed over an appropriate window length in the STFT.
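One common statement of SVA applies a per-bin cosine-on-pedestal weight, chosen in [0, 0.5] to minimize each output magnitude, to the real and imaginary parts of the DFT independently. The sketch below implements that generic textbook form, which is not necessarily the exact variant used in the paper:

```python
import numpy as np

def sva(X):
    """Per-bin cosine-on-pedestal apodization: each bin picks the weight
    w in [0, 0.5] that minimizes |X[k] + w*(X[k-1] + X[k+1])|, applied to
    real and imaginary parts independently (I/Q SVA)."""
    def apodize(x):
        y = x.copy()
        for k in range(1, len(x) - 1):
            s = x[k - 1] + x[k + 1]
            if s == 0.0:
                continue
            w = -x[k] / s                 # unconstrained minimizer
            if w <= 0.0:
                y[k] = x[k]               # rectangular (uniform) window
            elif w >= 0.5:
                y[k] = x[k] + 0.5 * s     # Hanning limit
            else:
                y[k] = 0.0                # sidelobe fully nulled
        return y
    return apodize(X.real) + 1j * apodize(X.imag)

# Off-bin tone under a rectangular window -> strong sinc sidelobes
n = 128
t = np.arange(n)
X = np.fft.fft(np.exp(2j * np.pi * 10.37 * t / n))
Xs = sva(X)
```

Because each part's output magnitude never exceeds its input magnitude, sidelobe energy away from the mainlobe can only decrease; within an STFT, the same operation would be applied to each analysis frame.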
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system...
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system...
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system...
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system...
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system...
Time series analysis of temporal networks
NASA Astrophysics Data System (ADS)
Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh
2016-01-01
A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies, only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; rather, an estimate of some of the properties is sufficient to launch the attack. In this paper, we show that even if the network structure at a future time point is not available, one can still estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them, and, using a standard time series forecast model, predict the properties of the temporal network at a later time instance. To this aim, we consider eight properties, such as the number of active nodes, average degree, and clustering coefficient, and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as Auto-Regressive Integrated Moving Average (ARIMA). We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series, obtained from spectrogram analysis, can be used to refine the prediction framework by identifying beforehand the cases where the prediction error is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application, we show how such a prediction scheme can be used to launch targeted attacks on temporal networks. Contribution to the Topical Issue
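A minimal stand-in for the forecasting step: fit an AR(p) model to a synthetic "active nodes" series by least squares and produce a one-step forecast. The paper uses full ARIMA with cross-validation; everything here (series, order, coefficients) is illustrative:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model with intercept:
    y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p} + noise."""
    n = len(series)
    X = np.column_stack([np.ones(n - p)] +
                        [series[p - k: n - k] for k in range(1, p + 1)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_one(series, coef):
    """One-step-ahead forecast from the fitted AR coefficients."""
    p = len(coef) - 1
    lags = series[-1: -p - 1: -1]             # y_t, y_{t-1}, ...
    return float(coef[0] + coef[1:] @ lags)

# Synthetic "number of active nodes" series with AR(2) memory
rng = np.random.default_rng(2)
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 5.0 + 0.6 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()
coef = fit_ar(y, 2)
pred = forecast_one(y, coef)
```

The same fit-and-forecast loop, repeated per network property and scored by held-out error, is the shape of the cross-validated prediction framework described above.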
NASA Astrophysics Data System (ADS)
Ahmad, Waqas; Fatima, Aamira; Awan, Usman Khalid; Anwar, Arif
2014-11-01
The Indus basin of Pakistan is vulnerable to climate change, which would directly affect the livelihoods of poor people engaged in irrigated agriculture. The situation could be worse in the middle and lower parts of this basin, which occupy 90% of the irrigated area. The objective of this research is to analyze long-term meteorological trends in the middle and lower parts of the Indus basin of Pakistan. We used monthly data from 1971 to 2010 and applied the non-parametric seasonal Kendall test for trend detection, in combination with the seasonal Kendall slope estimator to quantify the magnitude of trends. The meteorological parameters considered were mean maximum and mean minimum air temperature and rainfall from 12 meteorological stations located in the study region. We examined the reliability and spatial integrity of the data by mass-curve analysis and spatial correlation matrices, respectively. Analysis was performed for four seasons (spring, March to May; summer, June to August; fall, September to November; and winter, December to February). The results show that maximum temperature has average increasing trends of + 0.16, + 0.03, 0.0 and + 0.04 °C/decade in the four seasons, respectively. The average trends of minimum temperature in the four seasons are also increasing, at + 0.29, + 0.12, + 0.36 and + 0.36 °C/decade, respectively. Persistence of the increasing trend is more pronounced in the minimum temperature than in the maximum temperature on an annual basis. Analysis of the rainfall data showed no noteworthy trend in winter, in fall, or on an annual basis. However, during the spring and summer seasons, rainfall trends vary from - 1.15 to + 0.93 and - 3.86 to + 2.46 mm/decade, respectively. It is further revealed that rainfall trends in all seasons are statistically non-significant. Overall, the study area is under a significant warming trend with no change in rainfall.
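The building blocks of the trend analysis above, the Mann-Kendall S statistic and the Sen (Theil-Sen) slope, are simple to state in code; the temperature series here is hypothetical, and a full seasonal Kendall test additionally sums S over seasons and assesses its significance against its null variance:

```python
import numpy as np

def mann_kendall_S(x):
    """Mann-Kendall S: concordant minus discordant pairs; S > 0 suggests
    an increasing trend."""
    x = np.asarray(x, dtype=float)
    s = 0.0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return s

def sen_slope(x):
    """Sen's slope estimator: median of all pairwise slopes, robust to
    outliers and non-normal data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

# Hypothetical one-season series of yearly mean max. temperature (°C)
temps = np.array([20.1, 20.3, 20.2, 20.6, 20.5, 20.9, 21.0, 21.2])
S = mann_kendall_S(temps)
slope_per_decade = 10.0 * sen_slope(temps)   # yearly slope scaled to °C/decade
```

For this 8-point series, 26 of the 28 pairs are increasing, so S = 24 and the Sen slope is positive.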
NASA Technical Reports Server (NTRS)
Chiao, Raymond Y.; Kwiat, Paul G.; Steinberg, Aephraim M.
1992-01-01
The energy-time uncertainty principle is on a different footing than the momentum-position uncertainty principle: in contrast to position, time is a c-number parameter, not an operator. As Aharonov and Bohm have pointed out, this leads to different interpretations of the two uncertainty principles. In particular, one must distinguish between an inner and an outer time in the definition of the spread in time, delta t. It is the inner time which enters the energy-time uncertainty principle. We have checked this by means of a correlated two-photon light source in which the individual energies of the two photons are broad in spectrum, but their sum is sharp. In other words, the pair of photons is in an entangled state of energy. By passing one member of the photon pair through a filter of width delta E, it is observed that the other member's wave packet collapses upon coincidence detection to a duration delta t, such that delta E times delta t is approximately equal to Planck's constant divided by 2 pi, where this duration delta t is an inner time in the sense of Aharonov and Bohm. We have measured delta t by means of a Michelson interferometer, monitoring the visibility of the fringes seen in coincidence detection. This is a nonlocal effect, in the sense that the two photons are far from each other when the collapse occurs. We have excluded classical-wave explanations of this effect by means of triple coincidence measurements in conjunction with a beam splitter following the Michelson interferometer. Since Bell's inequalities are known to be violated, we believe that it is also incorrect to interpret this experimental outcome as if energy were a local hidden variable, i.e., as if each photon, viewed as a particle, possessed some definite but unknown energy before its detection.
Non-Parametric Collision Probability for Low-Velocity Encounters
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2007-01-01
An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
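The effect described above can be illustrated with a toy Monte Carlo: propagate many sampled relative trajectories with per-step process noise (so the position uncertainty de-correlates over time), count trajectories that ever enter the combined hard-body sphere, and attach a binomial standard error to the estimate. All numbers are hypothetical and the dynamics are deliberately trivial:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_steps, dt = 20000, 50, 1.0
hard_body_radius = 10.0                       # metres (made up)

# Initial relative state: dispersed cross-track, offset in-track
pos = rng.normal([0.0, 200.0], 50.0, size=(n_trials, 2))
vel = np.array([0.0, -4.0])                   # slow (low-velocity) closing rate
hit = np.zeros(n_trials, dtype=bool)
for _ in range(n_steps):
    # Process noise each step de-correlates the uncertainty in time,
    # which is exactly what covariance-based methods assume away
    pos = pos + vel * dt + rng.normal(0.0, 0.5, size=pos.shape)
    hit |= np.linalg.norm(pos, axis=1) < hard_body_radius

p_hat = float(hit.mean())
se = float(np.sqrt(p_hat * (1.0 - p_hat) / n_trials))  # binomial std. error
```

Because the encounter is slow, each trajectory spends many steps near the closest approach, so collisions accumulate over the interval rather than at a single conjunction epoch; the non-parametric estimate in the paper refines the simple binomial summary used here.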
Nanoscale electromechanical parametric amplifier
Aleman, Benjamin Jose; Zettl, Alexander
2016-09-20
This disclosure provides systems, methods, and apparatus related to a parametric amplifier. In one aspect, a device includes an electron source electrode, a counter electrode, and a pumping electrode. The electron source electrode may include a conductive base and a flexible conductor. The flexible conductor may have a first end and a second end, with the second end of the flexible conductor being coupled to the conductive base. A cross-sectional dimension of the flexible conductor may be less than about 100 nanometers. The counter electrode may be disposed proximate the first end of the flexible conductor and spaced a first distance from the first end of the flexible conductor. The pumping electrode may be disposed proximate a length of the flexible conductor and spaced a second distance from the flexible conductor.
Mechanical Parametric Oscillations and Waves
ERIC Educational Resources Information Center
Dittrich, William; Minkin, Leonid; Shapovalov, Alexander S.
2013-01-01
Usually parametric oscillations are not a topic of general physics courses. This is probably because the mathematical theory of the phenomenon is relatively complicated, and until quite recently laboratory experiments for students were difficult to implement. However, parametric oscillations are good illustrations of the laws of physics and can be…
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Straight-up time analysis. 417.215 Section..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.215 Straight-up time analysis. A flight safety analysis must establish the straight-up time for a launch for use as a...
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Straight-up time analysis. 417.215 Section..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.215 Straight-up time analysis. A flight safety analysis must establish the straight-up time for a launch for use as a...
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Straight-up time analysis. 417.215 Section..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.215 Straight-up time analysis. A flight safety analysis must establish the straight-up time for a launch for use as a...
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Straight-up time analysis. 417.215 Section..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.215 Straight-up time analysis. A flight safety analysis must establish the straight-up time for a launch for use as a...
14 CFR 417.215 - Straight-up time analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Straight-up time analysis. 417.215 Section..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.215 Straight-up time analysis. A flight safety analysis must establish the straight-up time for a launch for use as a...
Finite difference time domain analysis of microwave ferrite devices and mobile antenna systems
NASA Astrophysics Data System (ADS)
Yildirim, Bahadir Suleyman
This dissertation presents analysis and design of shielded mobile antenna systems and microwave ferrite devices using a finite-difference time-domain method. Novel shielded antenna structures suitable for cellular communications have been analyzed and designed with emphasis on reducing the excessive radiated energy absorbed in the user's head and hand, while keeping the antenna performance at its peak in the presence of the user. These novel antennas include a magnetically shielded antenna, a dual-resonance shielded antenna, and a shorted and truncated microstrip antenna. The effect of magnetic coating on the performance of a shielded monopole antenna is studied extensively. A parametric study is performed to analyze the dual-resonance phenomenon observed in the dual-resonance shielded antenna, optimize the antenna design within the cellular communications band, and improve the antenna performance. The input impedance and the near and far fields of the dual-resonance shielded antenna are calculated using the finite-difference time-domain method. Experimental validation is also presented. In addition, the performance of a shorted and truncated microstrip antenna has been investigated over a wide range of substrate parameters and dimensions. Objectives of the research also include development of a finite-difference time-domain technique to accurately model magnetically anisotropic media, including the effect of non-uniform magnetization within the finite-size ferrite material due to demagnetizing fields. A slow-wave thin-film isolator and a stripline disc junction circulator are analyzed. An extensive parametric study calculates wide-band frequency-dependent parameters of these devices for various device dimensions and material parameters. Finally, a ferrite-filled stripline configuration is analyzed to study the nonlinear behaviour of ferrite by introducing a modified damping factor.
Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.
George, Brandon; Aban, Inmaculada
2015-01-15
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects.
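A separable spatiotemporal correlation structure of the kind selected in this work is the Kronecker product of a spatial and a temporal correlation matrix. A small NumPy sketch using two of the families named in the abstract (exponential in space, AR(1) in time; the coordinates and parameter values are hypothetical):

```python
import numpy as np

def exponential_corr(dists, range_):
    """Exponential spatial correlation: exp(-d / range)."""
    return np.exp(-dists / range_)

def ar1_corr(n_times, rho):
    """AR(1) temporal correlation: rho**|i - j|."""
    t = np.arange(n_times)
    return rho ** np.abs(np.subtract.outer(t, t))

coords = np.array([0.0, 1.0, 2.5])          # hypothetical site positions
D = np.abs(np.subtract.outer(coords, coords))
R_space = exponential_corr(D, range_=2.0)
R_time = ar1_corr(4, rho=0.6)

# Separable structure: Corr(Y_{s,t}, Y_{s',t'}) = R_time[t,t'] * R_space[s,s']
R = np.kron(R_time, R_space)                # 12 x 12, time-major ordering
```

Because both factors are valid correlation matrices, their Kronecker product is symmetric, has unit diagonal, and is positive definite, so it can be plugged directly into a linear mixed model's error covariance; model selection then compares such candidates by information criteria.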
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
Multifractal Time Series Analysis Based on Detrended Fluctuation Analysis
NASA Astrophysics Data System (ADS)
Kantelhardt, Jan; Stanley, H. Eugene; Zschiegner, Stephan; Bunde, Armin; Koscielny-Bunde, Eva; Havlin, Shlomo
2002-03-01
In order to develop an easily applicable method for the multifractal characterization of non-stationary time series, we generalize the detrended fluctuation analysis (DFA), which is a well-established method for the determination of the monofractal scaling properties and the detection of long-range correlations. We relate the new multifractal DFA method to the standard partition function-based multifractal formalism, and compare it to the wavelet transform modulus maxima (WTMM) method, which is a well-established but more difficult procedure for this purpose. We employ the multifractal DFA method to determine whether the heart rhythm during different sleep stages is characterized by different multifractal properties.
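The generalized (q-th order) fluctuation function at the heart of multifractal DFA can be sketched as follows; setting q = 2 recovers standard DFA, and the scales, series, and detrending order here are illustrative choices:

```python
import numpy as np

def mfdfa_fluctuation(x, scales, q=2.0, order=1):
    """q-th order detrended fluctuation function F_q(s): integrate the
    series into a profile, detrend it polynomially in windows of size s,
    and combine the per-window mean-square residuals with a q-average."""
    y = np.cumsum(x - np.mean(x))                 # profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        msq = []
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            msq.append(np.mean((seg - trend) ** 2))
        msq = np.asarray(msq)
        F.append(np.mean(msq ** (q / 2.0)) ** (1.0 / q))
    return np.asarray(F)

# Uncorrelated noise should give a generalized Hurst exponent h(2) ~ 0.5
rng = np.random.default_rng(3)
x = rng.normal(size=4096)
scales = np.array([16, 32, 64, 128, 256])
F = mfdfa_fluctuation(x, scales)
h2 = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For multifractal signals, repeating the fit for a range of q values yields a q-dependent slope h(q), which connects to the partition-function multifractal formalism mentioned above.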
Real-time Forensic Disaster Analysis
NASA Astrophysics Data System (ADS)
Wenzel, F.; Daniell, J.; Khazai, B.; Mühr, B.; Kunz-Plapp, T.; Markus, M.; Vervaeck, A.
2012-04-01
The Center for Disaster Management and Risk Reduction Technology (CEDIM, www.cedim.de), an interdisciplinary research center founded by the German Research Centre for Geosciences (GFZ) and the Karlsruhe Institute of Technology (KIT), has embarked on a new style of disaster research known as Forensic Disaster Analysis. The notion was coined by the Integrated Research on Disaster Risk initiative (IRDR, www.irdrinternational.org) launched by ICSU in 2010. It has been defined as an approach to studying natural disasters that aims at uncovering the root causes of disasters through in-depth investigations that go beyond the reconnaissance reports and case studies typically conducted after disasters. In adopting this comprehensive understanding of disasters, CEDIM adds a real-time component to the assessment and evaluation process. By comprehensive we mean that most if not all relevant aspects of disasters are considered and jointly analysed. This includes the impact (human, economic, and infrastructure), comparisons with recent historic events, social vulnerability, reconstruction, and long-term impacts on livelihoods. The forensic disaster analysis research mode is thus best characterized as "event-based research" through systematic investigation of critical issues arising after a disaster across various inter-related areas. The forensic approach requires (a) availability of global databases on previous earthquake losses, socio-economic parameters, building stock information, etc.; (b) leveraging platforms such as the EERI clearinghouse, ReliefWeb, and the many local and international sources where information is organized; and (c) rapid access to critical information (e.g., crowd-sourcing techniques) to improve our understanding of the complex dynamics of disasters. The main scientific questions being addressed are: What are the critical factors that control loss of life, loss of infrastructure, and economic loss? What are the critical interactions
CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM
NASA Technical Reports Server (NTRS)
Mccluney, K.
1994-01-01
In a real-time environment, the results of changes or failures in a complex, interconnected system need to be evaluated quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause; in this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF, FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allows the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic. A commonality check determines single-point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three-button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. No executable is provided on the distribution medium; however
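The core mechanism the abstract describes, items related by Boolean operators whose states propagate when toggled, can be sketched in a few lines. This is a hypothetical illustration of the idea, not CRANS's actual data model or API:

```python
# Hypothetical sketch of a CRANS-style logic tree: items carry a Boolean
# state (True = OK, False = FAILED), derived items are Boolean functions
# of other items, and toggling a state propagates through the tree.
from typing import Callable, Dict

class LogicTree:
    def __init__(self) -> None:
        self.state: Dict[str, bool] = {}
        self.rules: Dict[str, Callable[[Dict[str, bool]], bool]] = {}

    def add_item(self, name: str, ok: bool = True) -> None:
        self.state[name] = ok

    def add_rule(self, name: str, fn: Callable[[Dict[str, bool]], bool]) -> None:
        """Derived item whose state is a Boolean function of other items."""
        self.rules[name] = fn

    def evaluate(self) -> Dict[str, bool]:
        # Re-evaluate derived items until nothing changes (handles cascades).
        changed = True
        while changed:
            changed = False
            for name, fn in self.rules.items():
                val = fn(self.state)
                if self.state.get(name) != val:
                    self.state[name] = val
                    changed = True
        return self.state

tree = LogicTree()
for item in ("bus_a", "bus_b", "pump"):
    tree.add_item(item)
# The pump works if either power bus is up (OR rule).
tree.add_rule("pump", lambda s: s["bus_a"] or s["bus_b"])
tree.state["bus_a"] = False           # fail one bus: pump still OK
assert tree.evaluate()["pump"] is True
tree.state["bus_b"] = False           # cascade: both buses failed
assert tree.evaluate()["pump"] is False
```

A commonality check in this toy model would amount to finding the smallest set of leaf failures that turns a given derived item False.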
SEC sensor parametric test and evaluation system
NASA Technical Reports Server (NTRS)
1978-01-01
This system provides the automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests which are described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Analysis of the raw data is performed on the Princeton IBM 360-91 computer.
Gamma bang time analysis at OMEGA.
McEvoy, A M; Herrmann, H W; Horsfield, C J; Young, C S; Miller, E K; Mack, J M; Kim, Y; Stoeffl, W; Rubery, M; Evans, S; Sedillo, T; Ali, Z A
2010-10-01
Absolute bang time measurements with the gas Cherenkov detector (GCD) and gamma reaction history (GRH) diagnostic have been performed to high precision at the OMEGA laser facility at the University of Rochester with bang time values for the two diagnostics agreeing to within 5 ps on average. X-ray timing measurements of laser-target coupling were used to calibrate a facility-generated laser timing fiducial with rms spreads in the measured coupling times of 9 ps for both GCD and GRH. Increased fusion yields at the National Ignition Facility (NIF) will allow for improved measurement precision with the GRH easily exceeding NIF system design requirements. PMID:21033846
Parametric Mass Reliability Study
NASA Technical Reports Server (NTRS)
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed around redundancy, with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that account for the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail earliest, such as seals or gaskets, typically have little mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability and the mass of ORU subcomponents to reliability.
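The mass-versus-reliability tension described above can be made concrete with a toy reliability roll-up. All part names, masses, and failure rates below are invented for illustration; the actual model in the study is not reproduced here:

```python
# Illustrative only: an ORU's reliability as the product of its
# subcomponents' exponential reliabilities R_i(t) = exp(-lambda_i * t).
# Heavy structural parts get low failure rates, light seals high ones.
import math

# (name, mass_kg, failures_per_1e6_hours) -- invented numbers
parts = [("housing", 12.0, 0.2), ("pump casing", 8.0, 0.5),
         ("PCB board", 0.4, 2.0), ("seal", 0.03, 25.0), ("gasket", 0.02, 30.0)]

t = 500 * 24.0                                # 500-day mission, in hours
R = math.prod(math.exp(-lam * 1e-6 * t) for _, _, lam in parts)
mass = sum(m for _, m, _ in parts)
print(f"ORU mass {mass:.2f} kg, mission reliability {R:.4f}")
# The lightest parts (seal, gasket) dominate the failure probability,
# mirroring the abstract's observation.
```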
Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn
2015-01-01
-stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed. PMID:26301872
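A two-sample K-S comparison of the kind applied to the pairwise-difference curves can be sketched with purely synthetic data (nothing below reproduces the actual seal data; the distributions and sample sizes are illustrative):

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum distance between
# the empirical CDFs of two samples (numpy only, synthetic data).
import numpy as np

def ks_stat(a, b):
    """Maximum absolute distance between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(1)
young = rng.exponential(scale=1.0, size=500)   # "younger", less diverged
old = rng.exponential(scale=2.0, size=500)     # older, more diverged
D = ks_stat(young, old)
print(round(D, 3))   # well above the ~0.10 critical value at alpha = 0.01
```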
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges of speeding it up: studying the historical trends of EnergyPlus run time against the advancement of computers and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
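The profiling step described, finding which subroutines consume the most run time, can be reproduced in miniature with Python's built-in `cProfile`/`pstats` (EnergyPlus itself is native code and would be profiled with native tools; `hot_loop` and `simulate` below are stand-ins):

```python
# Minimal code-profiling sketch: run a workload under cProfile and list
# the top entries by cumulative time, the same analysis the paper applies
# to EnergyPlus subroutines.
import cProfile
import io
import pstats

def hot_loop():
    # Stand-in for an expensive simulation subroutine.
    return sum(i * i for i in range(200_000))

def simulate():
    for _ in range(5):
        hot_loop()

profiler = cProfile.Profile()
profiler.runcall(simulate)

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
assert "hot_loop" in report   # the hotspot shows up at the top of the list
```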
Parametric Equations, Maple, and Tubeplots.
ERIC Educational Resources Information Center
Feicht, Louis
1997-01-01
Presents an activity that establishes a graphical foundation for parametric equations by using a graphing output form called tubeplots from the computer program Maple. Provides a comprehensive review and exploration of many previously learned topics. (ASK)
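The same graphical foundation can be built outside Maple; a minimal Python sketch samples a 3-D parametric curve (a helix), the centerline a tubeplot would then sweep a circle along:

```python
# Sample a 3-D parametric curve x(t), y(t), z(t); a tubeplot renders a
# tube of small radius around such a curve.
import math

def helix(t, r=1.0, pitch=0.2):
    return (r * math.cos(t), r * math.sin(t), pitch * t)

points = [helix(2 * math.pi * k / 100) for k in range(101)]
# Every sampled point lies on the cylinder x^2 + y^2 = r^2.
assert all(abs(x * x + y * y - 1.0) < 1e-9 for x, y, _ in points)
```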
Parametric-Studies and Data-Plotting Modules for the SOAP
NASA Technical Reports Server (NTRS)
2008-01-01
"Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensional-appearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
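The parametric-study computation, sweeping attributes over ranges and collecting min/avg/max of a metric at each grid point, can be sketched as follows. The metric function and parameter ranges are invented stand-ins, not SOAP's actual models:

```python
# Sketch of a SOAP-style parametric study: sweep two parameters over
# ranges, evaluate a metric at every grid point, and summarize the
# resulting data table with min/avg/max statistics.
import itertools

def revisit_time(altitude_km, inclination_deg):
    # Hypothetical stand-in metric for demonstration only.
    return 90.0 + 0.01 * altitude_km + 0.1 * abs(inclination_deg - 50.0)

altitudes = range(400, 801, 100)       # 400..800 km
inclinations = range(30, 91, 15)       # 30..90 deg

table = {(a, i): revisit_time(a, i)
         for a, i in itertools.product(altitudes, inclinations)}
values = list(table.values())
print(min(values), sum(values) / len(values), max(values))
```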
Real-time DNA microarray analysis
Hassibi, Arjang; Vikalo, Haris; Riechmann, José Luis; Hassibi, Babak
2009-01-01
We present a quantification method for affinity-based DNA microarrays which is based on the real-time measurements of hybridization kinetics. This method, i.e. real-time DNA microarrays, enhances the detection dynamic range of conventional systems by being impervious to probe saturation in the capturing spots, washing artifacts, microarray spot-to-spot variations, and other signal amplitude-affecting non-idealities. We demonstrate in both theory and practice that the time-constant of target capturing in microarrays, similar to all affinity-based biosensors, is inversely proportional to the concentration of the target analyte, which we subsequently use as the fundamental parameter to estimate the concentration of the analytes. Furthermore, to empirically validate the capabilities of this method in practical applications, we present a FRET-based assay which enables the real-time detection in gene expression DNA microarrays. PMID:19723688
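The central relation above, that the capture time-constant is inversely proportional to analyte concentration, can be illustrated with synthetic kinetics: fit τ from y(t) = A(1 − exp(−t/τ)) and read the concentration off as k/τ. The calibration constant and all values are illustrative, not from the paper:

```python
# Estimate the hybridization time-constant tau from synthetic saturation
# kinetics, then infer concentration as k_cal / tau (k_cal assumed known
# from calibration; all numbers are illustrative).
import math

k_cal = 100.0            # hypothetical calibration constant
tau_true = 20.0          # seconds -> true concentration = k_cal / tau = 5.0
ts = [1.0 * i for i in range(1, 60)]
ys = [1.0 - math.exp(-t / tau_true) for t in ts]

# Linearize: ln(1 - y) = -t / tau, so the slope of a line fit is -1/tau.
zs = [math.log(1.0 - y) for y in ys]
n = len(ts)
tbar, zbar = sum(ts) / n, sum(zs) / n
slope = sum((t - tbar) * (z - zbar) for t, z in zip(ts, zs)) / \
        sum((t - tbar) ** 2 for t in ts)
tau_est = -1.0 / slope
conc_est = k_cal / tau_est
print(round(conc_est, 3))
```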
Diode-pumped optical parametric oscillator
Geiger, A.R.; Hemmati, H.; Farr, W.H.
1996-02-01
Diode-pumped optical parametric oscillation has been demonstrated for the first time, to our knowledge, in a single Nd:MgO:LiNbO3 nonlinear crystal. The crystal is pumped by a semiconductor diode laser array at 812 nm. The Nd3+ ions absorb the 812-nm radiation to generate 1084-nm laser oscillation. On internal Q switching, the 1084-nm radiation pumps the LiNbO3 host crystal, which is angle cut at 46.5° and generates optical parametric oscillation. The oscillation threshold due to the 1084-nm laser pump, with a pulse length of 80 ns in a 1-mm-diameter beam, was measured to be ≈1 mJ and produced 0.5-mJ output at the 3400-nm signal wavelength. © 1996 Optical Society of America.
Parametric instabilities in large nonuniform laser plasmas
Baldis, H.A.; Montgomery, D.S.; Moody, J.D.; Estabrook, K.G.; Berger, R.L.; Kruer, W.L.; Labaune, C.; Batha, S.H.
1992-09-01
The study of parametric instabilities in laser plasmas is of vital importance for inertial confinement fusion (ICF). The long scale-length plasma encountered in the corona of an ICF target provides ideal conditions for the growth of instabilities such as stimulated Brillouin scattering (SBS), stimulated Raman scattering (SRS), and filamentation. These instabilities can have detrimental effects in ICF, and their characterization and understanding are of importance. Scattering instabilities are driven through a feedback loop in which the beating between the electromagnetic (EM) fields of the laser and the scattered light matches the frequency of a local longitudinal mode of the plasma. Any process which interferes with the coherence of this mechanism can substantially alter the behavior of the instability. Of particular interest is the effect of laser beam smoothing techniques on parametric instabilities. These techniques are used to improve irradiation uniformity, which can suppress hydrodynamic instabilities. Laser beam smoothing techniques have the potential to control the scattering level from parametric instabilities since they provide not only a smoother laser intensity distribution but also reduced coherence. Beam smoothing techniques that affect the growth of parametric instabilities include spatial smoothing and temporal smoothing by laser bandwidth. Spatial smoothing modifies the phase fronts and temporal distribution of intensities in the focal volume. The transverse intensity spectrum is shifted towards higher spatial wavenumbers and can significantly limit the growth of filamentation. Temporal smoothing reduces the coherence time and consequently limits the growth time. Laser bandwidth is required for most smoothing techniques and can have an independent effect on the instabilities as well.
Airy beam optical parametric oscillator
NASA Astrophysics Data System (ADS)
Aadhi, A.; Chaitanya, N. Apurv; Jabir, M. V.; Vaity, Pravin; Singh, R. P.; Samanta, G. K.
2016-05-01
Airy beam, a non-diffracting waveform, has peculiar properties of self-healing and self-acceleration. Due to such unique properties, the Airy beam finds many applications including curved plasma wave-guiding, micro-particle manipulation, optically mediated particle clearing, long distance communication, and nonlinear frequency conversion. However, many of these applications, including laser machining of curved structures, generation of curved plasma channels, guiding of electric discharges in a curved path, study of nonlinear propagation dynamics, and nonlinear interaction, demand an Airy beam with high power, energy, and wavelength tunability. To date, none of the Airy beam sources has offered all these features in a single device. Here, we report a new class of coherent sources based on cubic phase modulation of a singly-resonant optical parametric oscillator (OPO), producing high-power, continuous-wave (cw), tunable radiation in a 2-D Airy intensity profile existing over a length >2 m. Based on a MgO-doped periodically poled LiNbO3 crystal pumped at 1064 nm, the Airy beam OPO produces output power of more than 8 W and wavelength tunability across 1.51–1.97 μm. This demonstration opens a new direction for the development of sources of arbitrary structured beams at any wavelength, power, and energy in all time scales (cw to femtosecond).
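The cubic-phase route to an Airy profile can be illustrated numerically in one dimension: impose a cubic spectral phase on a Gaussian spectrum and Fourier transform; the result is an Airy-like field whose main lobe sits away from the origin with oscillatory side lobes on one side. All parameters below are illustrative:

```python
# 1-D numerical sketch of cubic phase modulation -> Airy-like profile.
import numpy as np

N = 2048
k = np.linspace(-20.0, 20.0, N)
spectrum = np.exp(-(k / 8.0) ** 2) * np.exp(1j * (k ** 3) / 3.0)
field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(spectrum)))
intensity = np.abs(field) ** 2

peak = int(np.argmax(intensity))
# Airy signature: the main lobe is displaced from the grid center and
# trails decaying oscillations on one side only.
print(peak - N // 2)
```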
Scalable Hyper-parameter Estimation for Gaussian Process Based Time Series Analysis
Chandola, Varun; Vatsavai, Raju
2010-01-01
Gaussian process (GP) is increasingly becoming popular as a kernel machine learning tool for non-parametric data analysis. Recently, GP has been applied to model non-linear dependencies in time series data. GP based analysis can be used to solve problems of time series prediction, forecasting, missing data imputation, change point detection, anomaly detection, etc. But the use of GP to handle massive scientific time series data sets has been limited, owing to its expensive computational complexity. The primary bottleneck is the handling of the covariance matrix, whose size is quadratic in the length of the time series. In this paper we propose a scalable method that exploits the special structure of the covariance matrix for hyper-parameter estimation in GP based learning. The proposed method allows estimation of the hyper-parameters associated with GP in quadratic time, which is an order of magnitude improvement over standard methods with cubic complexity. Moreover, the proposed method does not require explicit computation of the covariance matrix and hence has a memory requirement linear in the length of the time series, as opposed to the quadratic memory requirement of standard methods. To further improve the computational complexity of the proposed method, we provide a parallel version to concurrently estimate the log likelihood for a set of time series, which is the key step in the hyper-parameter estimation. Performance results on a multi-core system show that our proposed method provides significant speedups, as high as 1000x, even when running in serial mode, while maintaining a small memory footprint. The parallel version exploits the natural parallelization potential of the serial algorithm and is shown to perform significantly better than the fast serial algorithm, with speedups as high as 10x.
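The key quantity being estimated, the GP log marginal likelihood as a function of the hyper-parameters, can be sketched in its naive O(n³) Cholesky form. For evenly spaced time samples the covariance matrix is Toeplitz, which is the structure the paper's faster method exploits; the sketch below does not implement that speedup, and the kernel choice and hyper-parameter values are illustrative:

```python
# Naive GP log marginal likelihood for a 1-D time series with a
# squared-exponential kernel; hyper-parameter estimation maximizes this
# quantity over (lengthscale, signal_var, noise_var).
import numpy as np

def gp_loglik(t, y, lengthscale, signal_var, noise_var):
    d = t[:, None] - t[None, :]                      # pairwise time lags
    K = signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)
    K += noise_var * np.eye(len(t))                  # observation noise
    L = np.linalg.cholesky(K)                        # O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
t = np.arange(50.0)                                  # evenly spaced -> Toeplitz K
y = np.sin(0.3 * t) + 0.1 * rng.standard_normal(50)
print(gp_loglik(t, y, lengthscale=3.0, signal_var=1.0, noise_var=0.01))
```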
Baker, C.
1994-10-01
The Department of Energy's (DOE) Hanford site near Richland, Washington is being cleaned up after 50 years of nuclear materials production. One of the most serious problems at the site is the waste stored in single-shell underground storage tanks. There are 149 of these tanks containing the spent fuel residue remaining after the fuel is dissolved in acid and the desired materials (primarily plutonium and uranium) are separated out. The tanks are upright cylinders 75 ft in diameter with domed tops. They are made of reinforced concrete, have steel liners, and each tank is buried under 7-12 ft of overburden. The tanks are up to 40 ft high, and have capacities of 500,000, 750,000, or 1,000,000 gallons of waste. As many as one-third of these tanks are known or suspected to leak. The waste form contained in the tanks varies in consistency from liquid supernatant to peanut-butter-like gels and sludges to hard salt cake (perhaps as hard as low-grade concrete). The current waste retrieval plan is to insert a large long-reach manipulator through a hole cut in the top of the tank, and use a variety of end-effectors to mobilize the waste and remove it from the tank. PNL has, with the assistance of Deneb robotics employees, developed a means of using the IGRIP code to perform parametric design of mechanical systems. This method requires no modifications to the IGRIP code, and all design data are stored in the IGRIP workcell. The method is presented in the context of development of a passive articulated mechanism that is used to deliver down-arm services to a gantry robot. The method is completely general, however, and could be used to design a fully articulated manipulator. Briefly, the method involves using IGCALC expressions to control manipulator joint angles, and IGCALC variables to allow user control of link lengths and offsets. This paper presents the method in detail, with examples drawn from PNL's experience with the gantry robot service-providing mechanism.
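The parametric-design idea, expressions driving joint angles while variables drive link lengths and offsets, can be shown in miniature with 2-link planar forward kinematics. This is a generic illustration, not IGRIP's IGCALC syntax:

```python
# Two-link planar forward kinematics where geometry is recomputed from
# parameters: joint angles play the role of IGCALC expressions, link
# lengths the role of user-controlled variables.
import math

def fk(theta1, theta2, l1=1.0, l2=0.5):
    """End-effector position of a 2-link planar arm."""
    x1, y1 = l1 * math.cos(theta1), l1 * math.sin(theta1)
    x2 = x1 + l2 * math.cos(theta1 + theta2)
    y2 = y1 + l2 * math.sin(theta1 + theta2)
    return x2, y2

# First link straight up, elbow bent 90 degrees back toward +x.
x, y = fk(math.pi / 2, -math.pi / 2)
assert abs(x - 0.5) < 1e-9 and abs(y - 1.0) < 1e-9
# Changing a link-length parameter immediately changes the geometry.
x, y = fk(math.pi / 2, -math.pi / 2, l2=0.8)
assert abs(x - 0.8) < 1e-9
```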
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
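A single-variable model of the kind mentioned is typically a power law in aperture diameter, cost ≈ a·D^b, fit by log-log least squares. The (diameter, cost) points below are invented for illustration and do not come from the study:

```python
# Fit cost = a * D**b by linear regression on log-transformed data;
# the exponent b is the slope of the telescope "cost curve".
import math

data = [(2.0, 6.0), (4.0, 30.0), (8.0, 150.0), (10.0, 270.0)]  # (m, $M), invented
xs = [math.log(d) for d, _ in data]
ys = [math.log(c) for _, c in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)
print(f"cost ≈ {a:.2f} * D^{b:.2f}")
```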
Natural time analysis of critical phenomena
Varotsos, Panayiotis; Sarlis, Nicholas V.; Skordas, Efthimios S.; Uyeda, Seiya; Kamogawa, Masashi
2011-01-01
A quantity exists by which one can identify the approach of a dynamical system to the state of criticality, which is hard to identify otherwise. This quantity is the variance κ1 of natural time χ, defined as κ1 = ⟨χ²⟩ - ⟨χ⟩² = Σk pk χk² - (Σk pk χk)², where pk is the normalized energy released during the kth event and the natural time of the kth event is χk = k/N, with N the total number of events. We then show that κ1 becomes equal to 0.070 at the critical state for a variety of dynamical systems. This holds for criticality models such as the 2D Ising model and the Bak-Tang-Wiesenfeld sandpile, which is the standard example of self-organized criticality. The condition κ1 = 0.070 also holds for experimental results on critical phenomena such as the growth of rice piles, seismic electric signals, and the subsequent seismicity before the associated main shock. PMID:21700886
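The κ1 statistic can be computed directly from a sequence of event energies; a minimal sketch (event energies are illustrative):

```python
# kappa_1 = <chi^2> - <chi>^2 with chi_k = k/N and p_k = E_k / sum(E).
def kappa1(energies):
    N = len(energies)
    total = sum(energies)
    p = [e / total for e in energies]
    chi = [(k + 1) / N for k in range(N)]
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck ** 2 for pk, ck in zip(p, chi))
    return mean_sq - mean ** 2

# For equal-energy events kappa_1 approaches the uniform-distribution
# variance 1/12 ≈ 0.0833 as N grows; criticality corresponds to 0.070.
print(kappa1([1.0] * 1000))
```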
Time-frequency analysis of pediatric murmurs
NASA Astrophysics Data System (ADS)
Lombardo, Joseph S.; Blodgett, Lisa A.; Rosen, Ron S.; Najmi, Amir-Homayoon; Thompson, W. Reid
1998-05-01
Technology has provided many new tools to assist in the diagnosis of pathologic conditions of the heart. Echocardiography, Ultrafast CT, and MRI are just a few. While these tools are a valuable resource, they are typically too expensive, large, and complex in operation for use in rural, homecare, and physician's office settings. Recent advances in computer performance, miniaturization, and acoustic signal processing have yielded new technologies that, when applied to heart sounds, can provide low-cost screening for pathologic conditions. The short duration and transient nature of these signals require processing techniques that provide high resolution in both time and frequency. Short-time Fourier transforms, Wigner distributions, and wavelet transforms have been applied to signals from hearts with various pathologic conditions. While no single technique provides the ideal solution, the combination of tools provides a good representation of the acoustic features of the pathologies selected.
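The basic short-time Fourier idea, a sliding windowed FFT that localizes a brief transient in both time and frequency, can be sketched with a synthetic "murmur" (all signal parameters are invented for illustration):

```python
# Locate a 50 ms, 200 Hz burst in noise with a sliding windowed FFT.
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
sig = 0.05 * np.random.default_rng(2).standard_normal(t.size)
burst = (t > 0.50) & (t < 0.55)
sig[burst] += np.sin(2 * np.pi * 200 * t[burst])     # the "murmur"

win, hop = 64, 16
frames = [sig[i:i + win] * np.hanning(win)
          for i in range(0, sig.size - win, hop)]
S = np.abs(np.fft.rfft(frames, axis=1))              # time x frequency

ti, fi = np.unravel_index(np.argmax(S), S.shape)
t_peak = (ti * hop + win / 2) / fs                   # seconds
f_peak = fi * fs / win                               # Hz
print(t_peak, f_peak)
```

The 64-sample window trades time resolution against the fs/win = 15.6 Hz frequency resolution, exactly the trade-off the abstract's comparison of transforms is about.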
A time course analysis of enriched composition.
McElree, Brian; Pylkkänen, Liina; Pickering, Martin J; Traxler, Matthew J
2006-02-01
Linguistic analyses suggest that common and seemingly simple expressions, such as began the book, cannot be interpreted with simple compositional processes; rather, they require enriched composition to provide an interpretation, such as began reading the book (Jackendoff, 1997; Pustejovsky, 1995). Recent reading time studies have supported these accounts by providing evidence that these expressions are more costly to process than are minimally contrasting controls (e.g., McElree, Traxler, Pickering, Seely, and Jackendoff, 2001). We report a response signal speed-accuracy trade-off (SAT) study in which two types of expressions that are thought to require enriched composition were examined. Analyses of the full time course SAT data indicate that these expressions were interpreted less accurately and, most importantly, more slowly than control sentences. The latter finding suggests that enriched composition requires the online deployment of complex compositional operations.
Applying time series analysis to performance logs
NASA Astrophysics Data System (ADS)
Kubacki, Marcin; Sosnowski, Janusz
2015-09-01
Contemporary computer systems provide mechanisms for monitoring various performance parameters (e.g. processor or memory usage, disc or network transfers), which are collected and stored in performance logs. An important issue is to derive characteristic features describing normal and abnormal behavior of the systems. For this purpose we use various schemes of analyzing time series. They have been adapted to the specific characteristics of performance logs and verified using data collected from real systems. The presented approach is useful in evaluating system dependability.
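The abstract leaves the concrete analysis schemes unspecified; as one common approach to performance-log time series, the sketch below flags samples that drift from a rolling baseline. The synthetic `cpu` series, window length, and 3-sigma threshold are all invented for the example and are not taken from the paper.

```python
import numpy as np

def rolling_anomalies(series, window=20, k=3.0):
    """Flag points deviating more than k rolling standard deviations
    from the mean of the preceding window."""
    s = np.asarray(series, dtype=float)
    flags = np.zeros(len(s), dtype=bool)
    for i in range(window, len(s)):
        ref = s[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(s[i] - mu) > k * sigma:
            flags[i] = True
    return flags

# Synthetic "CPU usage" log: stable around 40% with one injected abnormal event
rng = np.random.default_rng(0)
cpu = 40 + rng.normal(0, 1, 200)
cpu[150] = 95.0
print(np.flatnonzero(rolling_anomalies(cpu)))  # the spike at index 150 is flagged
```

A production log analyzer would add seasonality handling and multivariate correlations, but the rolling-baseline idea is a common core of such schemes.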
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
Multiscale multifractal time irreversibility analysis of stock markets
NASA Astrophysics Data System (ADS)
Jiang, Chenguang; Shang, Pengjian; Shi, Wenbin
2016-11-01
Time irreversibility is one of the most important properties of nonstationary time series. Complex time series often demonstrate even multiscale time irreversibility, such that not only the original but also coarse-grained time series are asymmetric over a wide range of scales. We study the multiscale time irreversibility of time series. In this paper, we develop a method called multiscale multifractal time irreversibility analysis (MMRA), which allows us to extend the description of time irreversibility to include the dependence on the segment size and statistical moments. We test the effectiveness of MMRA in detecting the multifractality and time irreversibility of time series generated from the delayed Hénon map and a binomial multifractal model. We then apply our method to the time irreversibility analysis of stock markets in different regions. We find that the emerging market has a higher multifractality degree and time irreversibility compared with developed markets. In this sense, the MMRA method may provide new angles for assessing the evolution stage of stock markets.
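MMRA itself combines multifractal moments with scale-dependent segmentation; the minimal sketch below illustrates only its two core ingredients, coarse-graining and a time-irreversibility statistic (here, the skewness of increments, which vanishes for time-reversible series). The sawtooth and noise series are invented examples, not the paper's data.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (multiscale step)."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def asymmetry(x):
    """Skewness of increments: ~0 for a time-reversible series,
    strongly nonzero when forward and reversed time look different."""
    d = np.diff(np.asarray(x, dtype=float))
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

# Sawtooth: slow rise, sharp drop -> strongly time-irreversible
saw = np.tile(np.arange(10.0), 100)
# White noise: statistically time-reversible
rng = np.random.default_rng(1)
noise = rng.normal(size=1000)

for s in (1, 2, 5):
    print(s, asymmetry(coarse_grain(saw, s)), asymmetry(coarse_grain(noise, s)))
```

The asymmetry of the sawtooth persists (with reduced magnitude) under coarse-graining, while the noise stays near zero at every scale; MMRA generalizes this by sweeping statistical moments as well as scales.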
NASA Technical Reports Server (NTRS)
Converse, G. L.
1984-01-01
A modeling technique for single-stage flow modulating fans or centrifugal compressors has been developed which will enable the user to obtain consistent and rapid off-design performance from design point input. The fan flow modulation may be obtained by either a VIGV (variable inlet guide vane) or a VPF (variable pitch rotor) option. Only the VIGV option is available for the centrifugal compressor. The modeling technique has been incorporated into a time-sharing program to facilitate its use. Because this report contains a description of the input and output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the last of a three-volume set describing the parametric representation of compressors, fans, and turbines. The titles of the three volumes are as follows: (1) Volume 1: CMGEN User's Manual (Parametric Compressor Generator); (2) Volume 2: PART User's Manual (Parametric Turbine); (3) Volume 3: MODFAN User's Manual (Parametric Modulating Flow Fan).
Multicutter machining of compound parametric surfaces
NASA Astrophysics Data System (ADS)
Hatna, Abdelmadjid; Grieve, R. J.; Broomhead, P.
2000-10-01
Parametric free forms are used in industries as disparate as footwear, toys, sporting goods, ceramics, digital content creation, and conceptual design. Optimizing tool path patterns and minimizing total machining time are crucial issues in numerically controlled (NC) machining of free-form surfaces. We demonstrate in the present work that multi-cutter machining can achieve as much as a 60% reduction in total machining time for compound sculptured surfaces. The given approach is based upon the pre-processing, as opposed to the usual post-processing, of surfaces for the detection and removal of interference, followed by precise tracking of unmachined areas.
Partial spectral analysis of hydrological time series
NASA Astrophysics Data System (ADS)
Jukić, D.; Denić-Jukić, V.
2011-03-01
Hydrological time series comprise the influences of numerous processes involved in the transfer of water in the hydrological cycle. This implies an ambiguity with respect to the processes encoded in spectral and cross-spectral density functions. Previous studies have not paid adequate attention to this issue. Spectral and cross-spectral density functions represent the Fourier transforms of auto-covariance and cross-covariance functions. Using this basic property, the ambiguity is resolved by applying a novel approach based on the spectral representation of partial correlation. The mathematical background for partial spectral density, partial amplitude and partial phase functions is presented. The proposed functions yield the estimates of spectral density, amplitude and phase that are not affected by a controlling process. If an input-output relation is the subject of interest, antecedent and subsequent influences of the controlling process can be distinguished by considering the input event as a reference point. The method is used for analyses of the relations between rainfall, air temperature and relative humidity, as well as the influences of air temperature and relative humidity on the discharge from a karst spring. The time series were collected in the catchment of the Jadro Spring located in the Dinaric karst area of Croatia.
ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The numbers of block failures are tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nested block diagrams.
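ETARA's simulator handles Weibull distributions, spares, and nested block diagrams; the sketch below shows only the elementary Monte Carlo idea for a single block with exponential failure and repair times. The MTBF/MTTR values and run counts are invented for the illustration.

```python
import numpy as np

def simulate_availability(mtbf, mttr, horizon, rng):
    """Alternate exponential up/down intervals for one block over `horizon`
    time units; return the fraction of time the block was up."""
    t, up_time = 0.0, 0.0
    while t < horizon:
        up = rng.exponential(mtbf)            # time to next failure
        up_time += min(up, horizon - t)       # clip the last interval
        t += up
        if t >= horizon:
            break
        t += rng.exponential(mttr)            # repair (down) time
    return up_time / horizon

rng = np.random.default_rng(42)
runs = [simulate_availability(mtbf=100.0, mttr=5.0, horizon=10_000.0, rng=rng)
        for _ in range(200)]
print(np.mean(runs))  # ≈ steady-state value mtbf / (mtbf + mttr) ≈ 0.952
```

Extending this to a block diagram means combining the per-block up/down timelines through the diagram's series/parallel logic before averaging, which is essentially what ETARA's engine automates.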
Time-dependent spectral analysis of epidemiological time-series with wavelets.
Cazelles, Bernard; Chavez, Mario; Magny, Guillaume Constantin de; Guégan, Jean-Francois; Hales, Simon
2007-08-22
In the current context of global infectious disease risks, a better understanding of the dynamics of major epidemics is urgently needed. Time-series analysis has appeared as an interesting approach to explore the dynamics of numerous diseases. Classical time-series methods can only be used for stationary time-series (in which the statistical properties do not vary with time). However, epidemiological time-series are typically noisy, complex and strongly non-stationary. Given this specific nature, wavelet analysis appears particularly attractive because it is well suited to the analysis of non-stationary signals. Here, we review the basic properties of the wavelet approach as an appropriate and elegant method for time-series analysis in epidemiological studies. The wavelet decomposition offers several advantages that are discussed in this paper based on epidemiological examples. In particular, the wavelet approach permits analysis of transient relationships between two signals and is especially suitable for studying gradual changes in forcing by exogenous variables.
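As a toy illustration of why wavelets suit non-stationary series, the bare-bones Morlet transform below localizes a change of dominant period in time, something a single global Fourier spectrum cannot do. The switching test signal and scale grid are invented for the example and are not from the paper.

```python
import numpy as np

def morlet_power(signal, scales, w0=6.0):
    """Continuous wavelet power via direct convolution with Morlet wavelets."""
    sig = np.asarray(signal, dtype=float)
    power = np.empty((len(scales), len(sig)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        power[i] = np.abs(np.convolve(sig, wavelet, mode="same")) ** 2
    return power

# Epidemic-like toy series: period-16 cycle switching to period-32 halfway through
n = 1024
x = np.where(np.arange(n) < n // 2,
             np.sin(2 * np.pi * np.arange(n) / 16),
             np.sin(2 * np.pi * np.arange(n) / 32))
scales = np.arange(2, 40)
P = morlet_power(x, scales)
# dominant scale in each half (matched scale ≈ period * w0 / (2*pi))
print(scales[P[:, :n // 2].mean(axis=1).argmax()],
      scales[P[:, n // 2:].mean(axis=1).argmax()])
```

The first half peaks near scale 15 and the second near scale 31, recovering both regimes and, crucially, when each one holds.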
Parametric infrared tunable laser system
NASA Technical Reports Server (NTRS)
Garbuny, M.; Henningsen, T.; Sutter, J. R.
1980-01-01
A parametric tunable infrared laser system was built to serve as a transmitter for the remote detection and density measurement of pollutant, poisonous, or trace gases in the atmosphere. The system operates with a YAG:Nd laser oscillator amplifier chain which pumps a parametric tunable frequency converter. The completed system produced pulse energies of up to 30 mJ. The output is tunable from 1.5 to 3.6 micrometers at linewidths of 0.2-0.5 /cm (FWHM). Replacement of the crystals presently in the parametric converter by samples of the higher quality already demonstrated is expected to extend the tuning range, narrow the linewidths, and improve the system performance further.
[Time analysis of pharmacists' home visit duties].
Horii, Kota; Hirasawa, Shinichiro; Toshima, Chiaki; Morimoto, Masao; Shiraishi, Takeya; Maeda, Keigo; Hirohara, Masayoshi; Kushida, Kazuki
2014-12-01
Pharmacists' home visit duties include accepting prescriptions, making inquiries with the attending physician, preparing medication, giving guidance on medication, going on home visits (round trips), and handling insurance claims. In the present study, we measured the time required for each duty and determined work duration per visit. The study took place during September 2013; the sample included 116 subjects and 211 home visits were conducted. Pharmacists' mean work duration per visit was 73 min and 15 s; they spent an average of 36 min and 13 s of this period on round trips involving home visits and providing guidance on medication. Pharmacists are responsible for preparing medication for outpatients; therefore, it can be assumed that they can visit patients at their homes only during the estimated 2-h period after lunch and before the commencement of outpatient service in the evening. Therefore, it is difficult for one pharmacist to visit more than five patients a day. The number of patients that each pharmacist can visit in one week (6 working days) is estimated at 25-30.
Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L
2013-02-01
The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimates obtained with the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a large city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a growing number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status; because the membership of water quality in the various output fuzzy sets or classes is reported with percentiles and histograms, the real water condition can be classified more accurately. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies.
Gleason, Scott D; Witkin, Jeffrey M
2007-01-01
The Vogel conflict test has been widely used as a methodology for detecting anxiolytic-like effects of drugs with a broad spectrum of pharmacological activities. Despite widespread acceptance of the Vogel assay as a preclinical predictor of efficacy for anxiolytic-like compounds, detailed parametrics have not been reported on the optimization of this assay to determine how the schedule of reinforcement, the rate of responding, and the frequency and temporal distribution of punishing events determine drug effect. The current report documents the results of a systematic study of the relationship between the number of shocks delivered and the efficacy of the prototypical 1,4-benzodiazepine anxiolytic chlordiazepoxide (CDAP) in rats. Under this procedure, water-deprived rats were given access to water and, during the later part of this access period, contacts with the drinking tube produced a brief electric shock. CDAP (5-20 mg/kg, i.p.) was first tested under a fixed-ratio 20 response schedule (every 20th lick produced shock delivered via the sipper tube). CDAP produced dose-dependent increases in punished licking to approximately 275% of control at 20 mg/kg. Increasing the number of shocks during the first ten responses of the punishment component decreased the number of licks made under vehicle control conditions. The frequency of shock delivery produced both quantitative and qualitative changes in the effects of chlordiazepoxide, ranging from no effect to 7000% increases in responding. The effects of chlordiazepoxide were dependent both on the control rate of responding and, independently, on the frequency of shock deliveries. Parametric variation under the Vogel conflict test may be useful in comparing the efficacy of novel approaches to the treatment of anxiety disorders.
Pérez, Hector E; Kettner, Keith
2013-10-01
Time-to-event analysis represents a collection of relatively new, flexible, and robust statistical techniques for investigating the incidence and timing of transitions from one discrete condition to another. Plant biology is replete with examples of such transitions occurring from the cellular to population levels. However, application of these statistical methods has been rare in botanical research. Here, we demonstrate the use of non- and semi-parametric time-to-event and categorical data analyses to address questions regarding seed to seedling transitions of Ipomopsis rubra propagules exposed to various doses of constant or simulated seasonal diel temperatures. Seeds were capable of germinating rapidly to >90 % at 15-25 or 22/11-29/19 °C. Optimum temperatures for germination occurred at 25 or 29/19 °C. Germination was inhibited and seed viability decreased at temperatures ≥30 or 33/24 °C. Kaplan-Meier estimates of survivor functions indicated highly significant differences in temporal germination patterns for seeds exposed to fluctuating or constant temperatures. Extended Cox regression models specified an inverse relationship between temperature and the hazard of germination. Moreover, temperature and the temperature × day interaction had significant effects on germination response. Comparisons to reference temperatures and linear contrasts suggest that summer temperatures (33/24 °C) play a significant role in differential germination responses. Similarly, simple and complex comparisons revealed that the effects of elevated temperatures predominate in terms of components of seed viability. In summary, the application of non- and semi-parametric analyses provides appropriate, powerful data analysis procedures to address various topics in seed biology and more widespread use is encouraged.
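The Kaplan-Meier survivor estimates described above are standard in survival-analysis packages; for concreteness, a minimal product-limit sketch on an invented germination dataset (the times, censoring pattern, and species context are illustrative only):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survivor estimate.
    times: day of germination or of censoring; events: 1 = germinated, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv = 1.0
    curve = []
    for t in np.unique(times[events == 1]):       # distinct event times, ascending
        at_risk = np.sum(times >= t)              # seeds still dormant and uncensored
        d = np.sum((times == t) & (events == 1))  # germinations at time t
        surv *= 1.0 - d / at_risk
        curve.append((t, surv))
    return curve

# Toy germination data (days to germination; 0-event rows censored at day 10)
times = [2, 3, 3, 4, 5, 7, 10, 10]
events = [1, 1, 1, 1, 1, 1, 0, 0]
curve = kaplan_meier(times, events)
print(curve)  # survivor curve ends at S = 0.25 (two seeds never germinated)
```

Comparing such curves between temperature treatments (e.g. with a log-rank test) is the non-parametric step; the Cox regression in the abstract adds covariates such as temperature semi-parametrically.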
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
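The paper's scheme derives its own explicit expressions; the sketch below is not that scheme but shows the underlying task, least-squares fitting in the Legendre basis, using NumPy's Legendre module. The sample trend and noise level are invented for the example.

```python
import numpy as np
from numpy.polynomial import legendre

# Noisy samples of a smooth trend on [-1, 1], built from Legendre polynomials
# P0 = 1, P1 = x, P2 = 1.5*x**2 - 0.5, with coefficients [1, 2, -0.5]
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x - 0.5 * (1.5 * x**2 - 0.5) + rng.normal(0, 0.01, x.size)

coef = legendre.legfit(x, y, deg=2)
print(coef)  # ≈ [1, 2, -0.5]
```

Orthogonality of the Legendre basis on [-1, 1] is what makes the continuous-case coefficients available in closed form, which is the property the proposed scheme exploits to avoid a full singular value decomposition.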
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-01
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
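The library behind this work computes thousands of features per series; the sketch below illustrates the "reduced representation" idea with just three invented features, enough to separate two qualitatively different processes:

```python
import numpy as np

def features(x):
    """A tiny feature vector: mean, standard deviation, lag-1 autocorrelation."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    ac1 = (xc[:-1] * xc[1:]).sum() / (xc * xc).sum()
    return np.array([x.mean(), x.std(), ac1])

rng = np.random.default_rng(7)
noise = rng.normal(size=500)              # uncorrelated samples
walk = np.cumsum(rng.normal(size=500))    # strongly autocorrelated random walk
print(features(noise)[2], features(walk)[2])  # lag-1 autocorr: ≈ 0 vs ≈ 1
```

Stacking such vectors for many series, and conversely recording each method's outputs across many series, yields the two matrices whose row/column structure the paper uses to organize data and methods jointly.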
NASA Astrophysics Data System (ADS)
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
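Of the parametric methods named, Prony's is the simplest to sketch: fit a linear-prediction model to the samples and read component frequencies off the angles of the characteristic roots. The noiseless test signal (a 50 Hz fundamental plus its 5th harmonic) and sampling parameters below are invented for the illustration.

```python
import numpy as np

def prony_frequencies(x, order, fs):
    """Classical Prony/linear-prediction frequency estimation."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Model x[n] = sum_{k=1..order} a_k * x[n-k]; solve for a in least squares
    A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    b = x[order:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Roots of z^p - a_1 z^(p-1) - ... - a_p give the component frequencies
    roots = np.roots(np.concatenate(([1.0], -a)))
    return np.sort(np.abs(np.angle(roots))) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 0.2, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
print(prony_frequencies(sig, order=4, fs=fs))  # ≈ [50, 50, 250, 250] Hz (conjugate pairs)
```

With noisy grid measurements one typically over-models (order well above twice the expected number of components) and discards spurious roots; subspace methods such as ESPRIT handle noise more gracefully, which is part of what the paper's comparison quantifies.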
Membrane reactor for water detritiation: a parametric study on operating parameters
Mascarade, J.; Liger, K.; Troulay, M.; Perrais, C.
2015-03-15
This paper presents the results of a parametric study done on a single-stage finger-type packed-bed membrane reactor (PBMR) used for heavy water vapor de-deuteration. Parametric studies were done on three operating parameters: the membrane temperature, the total feed flow rate, and the feed composition through D2O content variations. Mass spectrometric analysis of the streams leaving the PBMR allowed speciation of the deuterated species. Measurement of the amounts of each molecular component allowed the calculation of the reaction quotient at the packed-bed outlet. While temperature variation mainly influences permeation efficiency, feed flow rate perturbation reveals the dependence of conversion and permeation properties on the contact time between catalyst and reacting mixture. The study shows that the isotopic exchange reactions occurring on the catalyst particle surfaces are not thermodynamically balanced. Moreover, variation of the heavy water content in the feed exhibits competition between permeation and conversion kinetics.
Physiological responses at short distances from a parametric speaker
2012-01-01
In recent years, parametric speakers have been used in various circumstances. In our previous studies, we verified that the physiological burden of the sound of a parametric speaker set at 2.6 m from the subjects was lower than that of a general speaker. However, nothing has yet been demonstrated about the effects of the sound of a parametric speaker at shorter distances between the parametric speaker and the human body. Therefore, we studied this effect on physiological functions and task performance. Nine male subjects participated in this study. They completed three consecutive sessions: a 20-minute quiet period as a baseline, a 30-minute mental task period with general speakers or parametric speakers, and a 20-minute recovery period. We measured electrocardiogram (ECG), photoplethysmogram (PTG), electroencephalogram (EEG), and systolic and diastolic blood pressure. Four experiments, crossing the speaker condition (general speaker vs. parametric speaker) with the distance condition (0.3 m and 1.0 m), were conducted at the same time of day on separate days. To examine the effects of speaker and distance, three-way repeated measures ANOVAs (speaker factor x distance factor x time factor) were conducted. In conclusion, we found that the physiological responses were not significantly different between the speaker conditions or the distance conditions. Meanwhile, the physiological burden increased over time independently of the speaker and distance conditions. In summary, the effects of the parametric speaker observed at a distance of 2.6 m were not obtained at distances of 1 m or less. PMID:22737994
NASA Astrophysics Data System (ADS)
Curceac, S.; Ternynck, C.; Ouarda, T.
2015-12-01
Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting based on the bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
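The paper's estimator is functional (whole daily curves with semi-metric proximities); the drastically simplified scalar sketch below shows only the underlying Nadaraya-Watson kernel-regression idea, on an invented autoregressive "temperature" series rather than the Abu Dhabi data:

```python
import numpy as np

def nw_forecast(past_days, query, h):
    """Nadaraya-Watson: predict tomorrow's value as a kernel-weighted average
    of historical next-day values, weighted by similarity of today's value."""
    X = past_days[:-1]                           # predictor: value on day t
    Y = past_days[1:]                            # response: value on day t+1
    w = np.exp(-0.5 * ((X - query) / h) ** 2)    # Gaussian kernel weights
    return (w * Y).sum() / w.sum()

# Toy daily series with a stable day-to-day relationship y = 0.8*x + 8
rng = np.random.default_rng(5)
temps = [25.0]
for _ in range(2000):
    temps.append(0.8 * temps[-1] + 8 + rng.normal(0, 0.5))
temps = np.array(temps)

print(nw_forecast(temps, query=40.0, h=0.5))  # ≈ 0.8*40 + 8 = 40.0
```

The functional version replaces the scalar distance `X - query` with a semi-metric between whole daily curves (e.g. FPCA-score or derivative-based distances), and the bandwidth `h` is chosen by cross-validation as in the abstract.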
Analysis and Design of Time-Varying Filter Banks
NASA Astrophysics Data System (ADS)
Sodagar, Iraj
Analysis-synthesis filter banks have been studied extensively and a wide range of theoretical problems have been subsequently addressed. However, almost all the research activity has been concentrated on time-invariant filter banks whose components are fixed and do not change in time. The objective of this thesis is to develop analysis and design techniques for time-varying FIR analysis-synthesis filter banks that achieve perfect reconstruction (PR). In such systems, the analysis and/or synthesis filters, the down-up sampling rates, or even the number of bands can change in time. The underlying idea is that by adapting the basis functions of the filter bank transform to the signal properties, one can represent the relevant information of the signal more efficiently. For analysis purposes, we derive the time-varying impulse response of the filter bank in terms of the analysis and synthesis filter coefficients. We are able to represent this impulse response in terms of the product of the analysis and synthesis matrix transforms. Our approach to the PR time-varying filter bank design is to change the analysis-synthesis filter bank among a set of time-invariant filter banks. The analysis filter banks are switched instantaneously. To eliminate the distortion during switching, a new time-varying synthesis section is designed for each transition. Three design techniques are developed for the time-varying filter bank design. The first technique uses least-squares synthesis filters. This method improves the reconstruction quality significantly, but does not usually achieve perfect reconstruction. Using the second technique, one can design PR time-varying systems by redesigning the analysis filters. The drawback is that this method requires numerical optimization. The third technique introduces a new structure for exactly reconstructing time-varying filter banks. This structure consists of the conventional filter bank followed by a time-varying post filter. The post
Automatic Parametric Testing Of Integrated Circuits
NASA Technical Reports Server (NTRS)
Jennings, Glenn A.; Pina, Cesar A.
1989-01-01
Computer program for parametric testing saves time and effort in research and development of integrated circuits. Software system automatically assembles various types of test structures and lays them out on silicon chip, generates sequences of test instructions, and interprets test data. Employs self-programming software; needs minimum of human intervention. Adapted to needs of different laboratories and readily accommodates new test structures. Program codes designed to be adaptable to most computers and test equipment now in use. Written in high-level languages to enhance transportability.
NASA Astrophysics Data System (ADS)
Curtis-Lake, E.; McLure, R. J.; Dunlop, J. S.; Rogers, A. B.; Targett, T.; Dekel, A.; Ellis, R. S.; Faber, S. M.; Ferguson, H. C.; Grogin, N. A.; Kocevski, D. D.; Koekemoer, A. M.; Lai, K.; Mármol-Queraltó, E.; Robertson, B. E.
2016-03-01
We present the results of a study investigating the sizes and morphologies of redshift 4 < z < 8 galaxies in the CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) GOODS-S (Great Observatories Origins Deep Survey southern field), HUDF (Hubble Ultra-Deep Field) and HUDF parallel fields. Based on non-parametric measurements and incorporating a careful treatment of measurement biases, we quantify the typical size of galaxies at each redshift as the peak of the lognormal size distribution, rather than the arithmetic mean size. Parametrizing the evolution of galaxy half-light radius as r_50 ∝ (1 + z)^n, we find n = -0.20 ± 0.26 at bright UV-luminosities (0.3L*(z = 3) < L < L*) and n = -0.47 ± 0.62 at faint luminosities (0.12L* < L < 0.3L*). Furthermore, simulations based on artificially redshifting our z ˜ 4 galaxy sample show that we cannot reject the null hypothesis of no size evolution. We show that this result is caused by a combination of the size-dependent completeness of high-redshift galaxy samples and the underestimation of the sizes of the largest galaxies at a given epoch. To explore the evolution of galaxy morphology we first compare asymmetry measurements to those from a large sample of simulated single Sérsic profiles, in order to robustly categorize galaxies as either `smooth' or `disturbed'. Comparing the disturbed fraction amongst bright (M_1500 ≤ -20) galaxies at each redshift to that obtained by artificially redshifting our z ˜ 4 galaxy sample, while carefully matching the size and UV-luminosity distributions, we find no clear evidence for evolution in galaxy morphology over the redshift interval 4 < z < 8. Therefore, based on our results, a bright (M_1500 ≤ -20) galaxy at z ˜ 6 is no more likely to be measured as `disturbed' than a comparable galaxy at z ˜ 4, given the current observational constraints.
Mapping Rise Time Information with Down-Shift Analysis
Tunnell, T. W., Machorro, E. A., Diaz, A. B.
2011-11-01
These viewgraphs summarize the application of recent developments in digital down-shift (DDS) analysis of up-converted PDV data to map out how well the PDV diagnostic would capture rise time information (midpoint and rise time) in short rise time (<1 ns) shock events. The mapping supports a PDV vs. VISAR challenge. The analysis concepts are new (~September FY 2011), simple, and run quickly, which makes them good tools to map out (with ~1 million Monte Carlo simulations) how well PDV captures rise time information as a function of baseline velocity, rise time, velocity jump, and signal-to-noise ratio.
Speech processing based on short-time Fourier analysis
Portnoff, M.R.
1981-06-02
Short-time Fourier analysis (STFA) is a mathematical technique that represents nonstationary signals, such as speech, music, and seismic signals, in terms of time-varying spectra. This representation provides a formalism for such intuitive notions as time-varying frequency components and pitch contours. Consequently, STFA is useful for speech analysis and speech processing. This paper shows that STFA provides a convenient technique for estimating and modifying certain perceptual parameters of speech. As an example of an application of STFA of speech, the problem of time compression or expansion of speech, while preserving pitch and time-varying frequency content, is presented.
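The time-varying spectral picture that STFA provides can be sketched in a few lines. The following numpy-only example (an illustration, not Portnoff's analysis/synthesis system) tracks the dominant frequency of a synthetic chirp frame by frame:

```python
import numpy as np

# Minimal short-time Fourier analysis: window the signal, FFT each frame,
# and read off the peak frequency per frame for a synthetic linear chirp.
fs = 8000
t = np.arange(fs) / fs                           # 1 s of samples
x = np.sin(2 * np.pi * (200 * t + 300 * t**2))   # chirp: 200 -> 800 Hz

nwin, hop = 256, 128
win = np.hanning(nwin)
freqs = np.fft.rfftfreq(nwin, 1 / fs)
peaks = []
for start in range(0, len(x) - nwin, hop):
    frame = x[start:start + nwin] * win
    peaks.append(freqs[np.argmax(np.abs(np.fft.rfft(frame)))])

print(peaks[0], peaks[-1])  # peak frequency rises across the signal
```

Each frame's peak frequency traces the chirp's instantaneous frequency, which is the kind of time-varying component STFA makes explicit.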
Dynamic Factor Analysis Models with Time-Varying Parameters
ERIC Educational Resources Information Center
Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian
2011-01-01
Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model…
Parametric State Space Structuring
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Tilgner, Marco
1997-01-01
Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of continuous-time Markov chains are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Scale-space analysis of time series in circulatory research.
Mortensen, Kim Erlend; Godtliebsen, Fred; Revhaug, Arthur
2006-12-01
Statistical analysis of time series is still inadequate within circulation research. With the advent of increasing computational power and real-time recordings from hemodynamic studies, one is increasingly dealing with vast amounts of data in time series. This paper aims to illustrate how statistical analysis using the significant nonstationarities (SiNoS) method may complement traditional repeated-measures ANOVA and linear mixed models. We applied these methods on a dataset of local hepatic and systemic circulatory changes induced by aortoportal shunting and graded liver resection. We found SiNoS analysis more comprehensive when compared with traditional statistical analysis in the following four ways: 1) the method allows better signal-to-noise detection; 2) including all data points from real time recordings in a statistical analysis permits better detection of significant features in the data; 3) analysis with multiple scales of resolution facilitates a more differentiated observation of the material; and 4) the method affords excellent visual presentation by combining group differences, time trends, and multiscale statistical analysis allowing the observer to quickly view and evaluate the material. It is our opinion that SiNoS analysis of time series is a very powerful statistical tool that may be used to complement conventional statistical methods.
Distributed parametric effect in long lines and its applications
NASA Astrophysics Data System (ADS)
Shestopaloff, Yuri K.
2011-10-01
The article considers a parametric effect which takes place when the velocity of signal propagation in a long line changes. We found the analytical solution describing the form of the transformed signal for a line with losses when the line parameters change symmetrically. We also considered lines without losses, with asymmetrical change of parameters. Our theoretical results comply with experimental data. In certain conditions, such a line can be used as an amplifier. The parametric effect in optics is described by Maxwell's equations, while in the case of a long line the analysis is based on the telegrapher's equations. However, it turns out that in the end, both in optics and in electronics, the parametric effect is described by wave equations that are mathematically similar. This is because, fundamentally, when the parameters of the propagating medium change, the parametric effect rests on energy interchange between the controlling (pump) signal and the transformed one. The obtained results can therefore be used for analysis of parametric effects in both optics and electronics.
Strong squeezing in periodically modulated optical parametric oscillators
NASA Astrophysics Data System (ADS)
Adamyan, Hayk H.; Bergou, János A.; Gevorgyan, Narine T.; Kryuchkyan, Gagik Yu.
2015-11-01
We consider specific signatures of squeezing for time-modulated light fields and propose the scheme of an optical parametric oscillator driven by a continuously modulated pump field. The application of a periodically modulated driving field instead of a continuous wave field drastically improves the degree of quadrature integral squeezing. This quantity goes below the standard limit of 50% relative to the level of vacuum fluctuations. We develop semiclassical and quantum theories of an optical parametric oscillator under the influence of a pump field with harmonically modulated amplitude for all operational regimes, including numerical simulations at the threshold point. The results can be directly applied in time-resolved quantum communication protocols.
Testing for predator dependence in predator-prey dynamics: a non-parametric approach.
Jost, C; Ellner, S P
2000-08-22
The functional response is a key element in all predator-prey interactions. Although functional responses are traditionally modelled as being a function of prey density only, evidence is accumulating that predator density also has an important effect. However, much of the evidence comes from artificial experimental arenas under conditions not necessarily representative of the natural system, and neglecting the temporal dynamics of the organism (in particular the effects of prey depletion on the estimated functional response). Here we present a method that removes these limitations by reconstructing the functional response non-parametrically from predator-prey time-series data. This method is applied to data on a protozoan predator-prey interaction, and we obtain significant evidence of predator dependence in the functional response. A crucial element in this analysis is to include time-lags in the prey and predator reproduction rates, and we show that these delays improve the fit of the model significantly. Finally, we compare the non-parametrically reconstructed functional response to parametric forms, and suggest that a modified version of the Hassell-Varley predator interference model provides a simple and flexible function for theoretical investigation and applied modelling. PMID:11467423
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
Parametric excitation in a magnetic tunnel junction-based spin torque oscillator
Dürrenfeld, P.; Iacocca, E.; Åkerman, J.; Muduli, P. K.
2014-02-03
Using microwave current injection at room temperature, we demonstrate parametric excitation of a magnetic tunnel junction (MTJ)-based spin-torque oscillator (STO). Parametric excitation is observed for currents below the auto-oscillation threshold, when the microwave current frequency f_e is twice the STO free-running frequency f_0. Above threshold, the MTJ becomes parametrically synchronized. In the synchronized state, the STO exhibits an integrated power up to 5 times higher and a linewidth reduction of two orders of magnitude, compared to free-running conditions. We also show that the parametric synchronization favors single mode oscillations in the case of multimode excitation.
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significantly misleading features, like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance D_L(z), based on either thawing or freezing quintessence models, and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best-fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important either to use both convex and concave parametrizations and select the one with the best χ², or to use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n, which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
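The proposed family w(z) = w0 + wa (z/(1+z))^n is easy to evaluate directly; n = 1 recovers the standard CPL form. A minimal sketch (the parameter values below are arbitrary illustrations, not fits from the paper):

```python
import numpy as np

# The generalized parametrization from the abstract; n = 1 is CPL.
def w(z, w0=-1.0, wa=0.5, n=1):
    return w0 + wa * (z / (1.0 + z)) ** n

z = np.linspace(0.0, 3.0, 4)
print(w(z, n=1))   # CPL behavior
print(w(z, n=4))   # larger n: w(z) stays near w0 until higher redshift
```

Raising n suppresses the (z/(1+z))^n term at low redshift, which is what lets the family interpolate between thawing-like and freezing-like behavior.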
Parameters’ Covariance in Neutron Time of Flight Analysis – Explicit Formulae
Odyniec, M.; Blair, J.
2014-12-01
We present here a method that estimates the parameters’ variance in a parametric model for neutron time of flight (NToF). The analytical formulae for parameter variances, obtained independently of calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
An Analysis of Student Satisfaction: Full-Time vs. Part-Time Students
ERIC Educational Resources Information Center
Moro-Egido, Ana I.; Panades, Judith
2010-01-01
This paper examines how full-time or part-time status affects students' level of satisfaction with their degree programs. For our analysis, we obtained data from a survey of graduate students. The survey was conducted at a public university in Spain from 2001 to 2004. The decision to undertake paid employment while studying emerges as one of the…
Parametric Modeling for Fluid Systems
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Martinez, Jonathan
2013-01-01
Fluid Systems involves different projects that require parametric modeling, i.e., a model that maintains consistent relationships between elements as it is manipulated. One of these projects is the Neo Liquid Propellant Testbed, which is part of Rocket U (Rocket University), through which engineers at NASA's Kennedy Space Center in Florida have the opportunity to develop critical flight skills as they design, build, and launch high-powered rockets. To build the Neo testbed, hardware from the Space Shuttle Program was repurposed. Modeling for Neo included fittings, valves, frames, and tubing, among others. These models help in the review process to make sure regulations are being followed. Another fluid systems project that required modeling is Plant Habitat's TCUI test project. Plant Habitat is a plan to develop a large growth chamber to study the effects of long-duration microgravity exposure on plants in space. Work for this project included the design and modeling of a duct vent for flow testing. Parametric modeling for these projects was done using Creo Parametric 2.0.
Optically induced parametric magnetic resonances
NASA Astrophysics Data System (ADS)
Jimenez, Ricardo; Knappe, Svenja; Kitching, John
2011-05-01
Optically pumped vector magnetometers based on zero-field resonances have reached very high sensitivities by operating at high atomic densities where dephasing due to spin-exchange collisions can be suppressed. Simplified setups with just one laser beam have measured magnetic fields from the human brain and heart. A key feature of these magnetometers is the introduction of an rf magnetic field along the measurement axis to generate a parametric resonance. Lock-in detection of the transmitted light, at an odd harmonic of the modulation frequency, allows the reduction of low-frequency noise and generates a resonance with a dispersive shape. Here we study a zero-field vector magnetometer in which the parametric resonances are induced by the vector AC Stark shift of light. This approach does not produce any external magnetic field that could disturb the readings of other magnetometers in the vicinity and could provide an alternative in applications where an applied AC field cannot be used. We have characterized the vector AC Stark shift effect of light on Rb atoms contained in a micromachined vapor cell with buffer gas. We have obtained parametric resonances induced by modulation of the light shift. We also analyze the detunings and intensities of the light-shift beam that maintain the magnetometer within the spin-exchange relaxation-free regime.
NASA Technical Reports Server (NTRS)
Coverse, G. L.
1984-01-01
A turbine modeling technique has been developed which enables the user to obtain consistent and rapid off-design performance from design point input. This technique is applicable to both axial and radial flow turbines with flow sizes ranging from about one pound per second to several hundred pounds per second. The axial flow turbines may or may not include variable geometry in the first stage nozzle. A user-specified option will also permit the calculation of design point cooling flow levels and corresponding changes in efficiency for the axial flow turbines. The modeling technique has been incorporated into a time-sharing program in order to facilitate its use. Because this report contains a description of the input/output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the second of a three-volume set. The titles of the three volumes are as follows: (1) Volume 1, CMGEN User's Manual (Parametric Compressor Generator); (2) Volume 2, PART User's Manual (Parametric Turbine); (3) Volume 3, MODFAN User's Manual (Parametric Modulation Flow Fan).
Topic- and Time-Oriented Visual Text Analysis.
Dou, Wenwen; Liu, Shixia
2016-01-01
To facilitate the process of converting textual data into actionable knowledge, visual text analysis has become a popular topic with active research efforts contributed by researchers worldwide. Here the authors present the benefits of combining text analysis (topic models in particular) with interactive visualization. They then highlight examples from prior work on topic- and time-oriented visual text analysis and discuss challenges that warrant additional future research. PMID:27514029
Parametric models of reflectance spectra for dyed fabrics
NASA Astrophysics Data System (ADS)
Aiken, Daniel C.; Ramsey, Scott; Mayo, Troy; Lambrakos, Samuel G.; Peak, Joseph
2016-05-01
This study examines parametric modeling of NIR reflectivity spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dye considered for prototype analysis is a triarylamine dye. The fabrics considered are camouflage textiles characterized by color variations. The results of this study validate the constructed parametric models, within error tolerances reasonable for practical applications involving NIR spectral characteristics of camouflage textiles, for purposes of simulating NIR spectra corresponding to various dye concentrations in host fabrics and, potentially, to mixtures of dyes.
Parametric-Resonance Ionization Cooling in Twin-Helix.
V.S. Morozov, Ya.S. Derbenev, A. Afanasev, R.P. Johnson, B. Erdelyi, J.A. Maloney
2011-09-01
Parametric-resonance Ionization Cooling (PIC) is proposed as the final 6D cooling stage of a high-luminosity muon collider. For the implementation of PIC, we developed an epicyclic twin-helix channel with correlated optics. Wedge-shaped absorbers immediately followed by short rf cavities are placed into the twin-helix channel. Parametric resonances are induced in both planes using helical quadrupole harmonics. We demonstrate resonant dynamics and cooling, with stochastic effects turned off, using GEANT4/G4beamline. We illustrate compensation of spherical aberrations and benchmark COSY Infinity, a powerful tool for aberration analysis and compensation.
A human factors analysis of EVA time requirements
NASA Technical Reports Server (NTRS)
Pate, D. W.
1996-01-01
Human Factors Engineering (HFE), also known as Ergonomics, is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. A human factors motion and time study was initiated with the goal of developing a database of EVA task times and a method of utilizing the database to predict how long an ExtraVehicular Activity (EVA) should take. Initial development relied on the EVA activities performed during the STS-61 mission (Hubble repair). The first step of the analysis was to become familiar with EVAs and with the previous studies and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task time modifiers was developed. Videotaped footage of STS-61 EVAs was analyzed using these primitives and task time modifiers. Data for two entire EVA missions and portions of several others, each with two EVA astronauts, were collected for analysis. Feedback from the analysis of the data will be used to further refine the primitives and task time modifiers used. Analysis of variance techniques for categorical data will be used to determine which factors may, individually or through interactions, affect the primitive times and how much of an effect they have.
Indiana Emergent Bilingual Student Time to Reclassification: A Survival Analysis
ERIC Educational Resources Information Center
Burke, April M.; Morita-Mullaney, Trish; Singh, Malkeet
2016-01-01
In this study, we employed a discrete-time survival analysis model to examine Indiana emergent bilingual time to reclassification as fluent English proficient. The data consisted of five years of statewide English language proficiency scores. Indiana has a large and rapidly growing Spanish-speaking emergent bilingual population, and these students…
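The discrete-time survival mechanics behind such an analysis can be illustrated with a toy life-table calculation: a per-period hazard (probability of reclassification given not yet reclassified) and the survival function built from it. The cohort counts below are invented for illustration, not the Indiana data:

```python
import numpy as np

# Toy discrete-time survival ("life table") calculation.
years = [1, 2, 3, 4, 5]
at_risk = np.array([1000, 800, 550, 350, 200])  # still not reclassified
events = np.array([200, 250, 200, 150, 80])     # reclassified that year

hazard = events / at_risk           # P(reclassify in year t | not yet)
survival = np.cumprod(1 - hazard)   # P(still not reclassified after t)
print(np.round(survival, 3))        # [0.8, 0.55, 0.35, 0.2, 0.12]
```

A full discrete-time survival model would regress the hazard on covariates (e.g., via logistic regression on person-period data); the life table above is the descriptive core of that approach.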
Nonlinear Analysis of Surface EMG Time Series of Back Muscles
NASA Astrophysics Data System (ADS)
Dolton, Donald C.; Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-10-01
A nonlinear analysis of surface electromyography time series of subjects with and without low back pain is presented. The mean-square displacement and entropy show anomalous diffusive behavior over the intermediate time range 10 ms < t < 1 s. This behavior implies the presence of correlations in the signal. We discuss the shape of the power spectrum of the signal.
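The mean-square displacement (MSD) diagnostic used here can be sketched briefly: compute MSD(t) over a range of lags and read off the scaling exponent, where an exponent different from 1 signals anomalous diffusion and hence correlations. The data below are a synthetic random walk, not surface EMG:

```python
import numpy as np

# Mean-square displacement of a 1-D signal and its log-log scaling exponent.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(10000))  # ordinary diffusion: exponent ~1

def msd(signal, max_lag):
    return np.array([np.mean((signal[lag:] - signal[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

lags = np.arange(1, 101)
alpha, _ = np.polyfit(np.log(lags), np.log(msd(x, 100)), 1)
print(round(alpha, 1))  # ~1.0 for an uncorrelated random walk
```

For correlated signals such as the EMG data discussed in the abstract, the fitted exponent deviates from 1 over the intermediate time range.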
Parametric Amplifier and Oscillator Based on Josephson Junction Circuitry
NASA Astrophysics Data System (ADS)
Yamamoto, T.; Koshino, K.; Nakamura, Y.
While the demand for low-noise amplification is ubiquitous, applications where the quantum-limited noise performance is indispensable are not very common. Microwave parametric amplifiers with near quantum-limited noise performance were first demonstrated more than 20 years ago. However, there had been little effort until recently to improve the performance or the ease of use of these amplifiers, partly because of a lack of any urgent motivation. The emergence of the field of quantum information processing in superconducting systems has changed this situation dramatically. The need to reliably read out the state of a given qubit using a very weak microwave probe within a very short time has led to renewed interest in these quantum-limited microwave amplifiers, which are already widely used as tools in this field. Here, we describe the quantum mechanical theory for one particular parametric amplifier design, called the flux-driven Josephson parametric amplifier, which we developed in 2008. The theory predicts the performance of this parametric amplifier, including its gain, bandwidth, and noise temperature. We also present the phase detection capability of this amplifier when it is operated with a pump power that is above the threshold, i.e., as a parametric phase-locked oscillator or parametron.
Cosmic slowing down of acceleration for several dark energy parametrizations
Magaña, Juan; Cárdenas, Víctor H.; Motta, Verónica E-mail: victor.cardenas@uv.cl
2014-10-01
We further investigate the slowing down of acceleration of the universe scenario for five parametrizations of the equation of state of dark energy, using four sets of Type Ia supernovae data. In a maximal probability analysis we also use the baryon acoustic oscillation and cosmic microwave background observations. We found that the low-redshift transition of the deceleration parameter appears, independently of the parametrization, when using supernovae data alone, except for the Union 2.1 sample. This feature disappears once we combine the Type Ia supernovae data with high-redshift data. We conclude that the rapid variation of the deceleration parameter is independent of the parametrization. We also found more evidence for a tension among the supernovae samples, as well as between the low- and high-redshift data.
Global Nonlinear Parametric Modeling with Application to F-16 Aerodynamics
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A global nonlinear parametric modeling technique is described and demonstrated. The technique uses multivariate orthogonal modeling functions generated from the data to determine nonlinear model structure, then expands each retained modeling function into an ordinary multivariate polynomial. The final model form is a finite multivariate power series expansion for the dependent variable in terms of the independent variables. Partial derivatives of the identified models can be used to assemble globally valid linear parameter varying models. The technique is demonstrated by identifying global nonlinear parametric models for nondimensional aerodynamic force and moment coefficients from a subsonic wind tunnel database for the F-16 fighter aircraft. Results show less than 10% difference between wind tunnel aerodynamic data and the nonlinear parameterized model for a simulated doublet maneuver at moderate angle of attack. Analysis indicated that the global nonlinear parametric models adequately captured the multivariate nonlinear aerodynamic functional dependence.
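The end product described here, a multivariate power-series model fit to data, can be sketched with ordinary least squares. The regressors and coefficients below are synthetic stand-ins, not the F-16 wind-tunnel database or the paper's orthogonal-function machinery:

```python
import numpy as np

# Fit a multivariate polynomial model C(alpha, delta) by least squares.
rng = np.random.default_rng(1)
alpha = rng.uniform(-0.2, 0.5, 200)   # e.g. angle of attack, rad
delta = rng.uniform(-0.3, 0.3, 200)   # e.g. control deflection, rad
C = 0.1 + 2.0 * alpha - 0.5 * alpha**2 + 0.8 * delta  # "true" coefficient

# Regressors: the retained modeling terms, expanded as ordinary polynomials.
X = np.column_stack([np.ones_like(alpha), alpha, alpha**2, delta])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)
print(np.round(coef, 3))  # recovers [0.1, 2.0, -0.5, 0.8]
```

The technique in the paper differs in one key respect: it first builds multivariate orthogonal functions from the data to decide which terms to retain, and only then expands them into the ordinary polynomial form shown here.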
Global Nonlinear Parametric Modeling with Application to F-16 Aerodynamics
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1998-01-01
A global nonlinear parametric modeling technique is described and demonstrated. The technique uses multivariate orthogonal modeling functions generated from the data to determine nonlinear model structure, then expands each retained modeling function into an ordinary multivariate polynomial. The final model form is a finite multivariate power series expansion for the dependent variable in terms of the independent variables. Partial derivatives of the identified models can be used to assemble globally valid linear parameter varying models. The technique is demonstrated by identifying global nonlinear parametric models for nondimensional aerodynamic force and moment coefficients from a subsonic wind tunnel database for the F-16 fighter aircraft. Results show less than 10% difference between wind tunnel aerodynamic data and the nonlinear parameterized model for a simulated doublet maneuver at moderate angle of attack. Analysis indicated that the global nonlinear parametric models adequately captured the multivariate nonlinear aerodynamic functional dependence.
THz-wave parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Vector processing enhancements for real-time image analysis.
Shoaf, S.; APS Engineering Support Division
2008-01-01
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
Parametric study of a target factory for laser fusion
Sherohman, J.W.; Meier, W.R.
1980-10-08
An analysis of a target factory leading to the derivation of production rate equations has provided the basis for a parametric study. Rate equations describing the production of laser fusion targets have been developed for the purpose of identifying key parameters, attractive production techniques and cost scaling relationships for a commercial target factory.
The Importance of Structure Coefficients in Parametric Analyses.
ERIC Educational Resources Information Center
Friedrich, Katherine R.
The recognition that all parametric methods are interrelated, coupled with the notion that structure coefficients are often vital in factor and canonical analyses, suggests that structure coefficients may be important in univariate analysis as well. Using a small, heuristic data set, this paper discusses the importance of structure coefficients…
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Ramona Stefanescu, Elena; Sandri, Laura; Patra, Abani; Marzocchi, Warner; Sulpizio, Roberto
2014-05-01
Probabilistic hazard assessments of Pyroclastic Density Currents (PDCs) are of great interest for decision-making purposes. However, only a limited number of published works are available on this topic. Recent advances in computation and statistical methods offer new opportunities beyond classical Monte Carlo (MC) sampling, which is simple and robust but often slow and computationally intractable. In this work, the Titan2D numerical simulator has been coupled to Polynomial Chaos Quadrature (PCQ) to propagate the simulator parametric uncertainty and compute VEI-based probabilistic hazard maps of dense PDCs formed as a result of column collapse at Vesuvius volcano, Italy. Because of the lack of knowledge about the exact conditions under which these PDCs will form, Probability Distribution Functions (PDFs) are assigned to the simulator input parameters (Bed Friction Angle and Volume) according to three VEI sizes. Uniform distributions were used for both parameters since there is insufficient information to assume that any value in the range is more likely than any other value. Reasonable (and compatible) ranges for both variables were constrained according to past eruptions at the Vesuvius volcanic system. On the basis of the reasoning above, a number of quadrature points were taken within those ranges, resulting in one execution of the TITAN2D code at each quadrature point. With a computational cost several orders of magnitude smaller than MC, exceedance probabilities for a given threshold of flow depth (and conditional on the occurrence of VEI3, VEI4 and VEI5 eruptions) were calculated using PCQ. Moreover, PCQ can be run at different threshold values of the same output variable (flow depth, speed, kinetic energy, …) and, therefore, can serve to compute Exceedance Probability curves (aka hazard curves) at singular points inside the hazard domain, representing the most important and useful scientific input to quantitative risk
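The core trade-off the abstract describes, replacing Monte Carlo sampling with a small deterministic set of quadrature runs, can be sketched with a toy surrogate. The `flow_depth` function below is a hypothetical stand-in for the TITAN2D simulator, and the parameter ranges and threshold are illustrative only:

```python
import numpy as np

# Hypothetical smooth surrogate for the simulator: flow depth at a site
# as a function of volume V [m^3] and bed friction angle phi [deg].
def flow_depth(V, phi):
    return 5.0 * (V / 1e7) ** 0.4 * np.exp(-(phi - 10.0) / 20.0)

# Uniform priors on the two inputs (illustrative ranges)
V_lo, V_hi = 1e6, 5e7
p_lo, p_hi = 6.0, 18.0

# Tensor-product Gauss-Legendre quadrature: only n^2 "simulator" runs
n = 12
t, w = np.polynomial.legendre.leggauss(n)
V_nodes = 0.5 * (V_hi - V_lo) * t + 0.5 * (V_hi + V_lo)
p_nodes = 0.5 * (p_hi - p_lo) * t + 0.5 * (p_hi + p_lo)
W = np.outer(w, w) / 4.0           # weights normalised for uniform PDFs

depth = flow_depth(V_nodes[:, None], p_nodes[None, :])
threshold = 4.0                    # exceedance threshold [m], illustrative
p_exceed_quad = np.sum(W * (depth > threshold))

# Monte Carlo reference needing far more "simulator" calls
rng = np.random.default_rng(0)
Vm = rng.uniform(V_lo, V_hi, 200_000)
pm = rng.uniform(p_lo, p_hi, 200_000)
p_exceed_mc = np.mean(flow_depth(Vm, pm) > threshold)

print(f"quadrature ({n*n} runs): {p_exceed_quad:.3f}")
print(f"Monte Carlo (200k runs): {p_exceed_mc:.3f}")
```

Repeating the last step for a range of thresholds yields the hazard (exceedance probability) curve mentioned in the abstract.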
Neuroimaging of Semantic Processing in Schizophrenia: A Parametric Priming Approach
Han, S. Duke; Wible, Cynthia G.
2009-01-01
The use of fMRI and other neuroimaging techniques in the study of cognitive language processes in psychiatric and non-psychiatric conditions has led at times to discrepant findings. Many issues complicate the study of language, especially in psychiatric populations. For example, the use of subtractive designs can produce misleading results. We propose and advocate for a semantic priming parametric approach to the study of semantic processing using fMRI methodology. Implications of this parametric approach are discussed in view of current functional neuroimaging research investigating the semantic processing disturbance of schizophrenia. PMID:19765623
Parametric Instabilities in Brusselator Reaction-Diffusion Systems
NASA Astrophysics Data System (ADS)
Geislinger, Brian; Purnell, Moriel; Kawai, Ryoichi
2002-10-01
We investigate the effects of a time-periodic parametric perturbation on a reaction-diffusion system based on the Brusselator. The system becomes unstable when the perturbation frequency satisfies the usual parametric resonance condition, and remains stable when the perturbation is out of resonance. When the perturbation is randomly inhomogeneous, instabilities at these resonance frequencies form spatial patterns in parameter regions where the unperturbed system would be stable. However, these same patterns can be destroyed if the perturbation is homogeneous. These numerical results can be explained by two competing types of synchronization: synchronization between neighbors (mutual synchronization) and synchronization to the perturbation (modulation synchronization).
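The resonance condition the abstract invokes can be illustrated with the simplest parametrically pumped system, a damped Mathieu oscillator (a generic toy, not the Brusselator itself): the response grows when the pump frequency is near twice the natural frequency and stays bounded otherwise. All parameter values here are assumptions chosen only to make the effect visible:

```python
import math

# Damped Mathieu oscillator: x'' + 2*zeta*x' + w0^2*(1 + eps*cos(W*t))*x = 0.
def final_amplitude(W, w0=1.0, eps=0.2, zeta=0.01, T=200.0, dt=0.002):
    x, v = 1e-3, 0.0                      # small initial displacement
    for i in range(int(T / dt)):
        t = i * dt
        a = -2.0 * zeta * v - w0**2 * (1.0 + eps * math.cos(W * t)) * x
        v += a * dt                       # semi-implicit (symplectic) Euler
        x += v * dt
    return abs(x) + abs(v)

on_resonance = final_amplitude(W=2.0)     # pump at 2*w0: parametric growth
off_resonance = final_amplitude(W=1.3)    # detuned pump: response decays
print(f"on resonance: {on_resonance:.3g}, off resonance: {off_resonance:.3g}")
```

The growth at W = 2*w0 despite zero external forcing is the signature of parametric (rather than ordinary) resonance.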
Electro-optically spectrum tailorable intracavity optical parametric oscillator.
Chung, H P; Chang, W K; Tseng, C H; Geiss, R; Pertsch, T; Chen, Y H
2015-11-15
We report a unique, pulsed intracavity optical parametric oscillator (IOPO) whose output spectrum is electro-optically (EO) tailorable based on an aperiodically poled lithium niobate (APPLN) working simultaneously as an optical parametric gain medium and an active gain spectrum filter in the system. We have successfully obtained from the IOPO the emission of single to multiple narrow-line signal spectral peaks in a near-infrared (1531 nm) band simply by electro-optic control. The power spectral density of the EO tailored signal can be enhanced by up to 10 times over the original (nontailored) signal. PMID:26565817
Time Series Analysis of Insar Data: Methods and Trends
NASA Technical Reports Server (NTRS)
Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique
2015-01-01
Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
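The phase-wrapping problem described above can be shown in one dimension with synthetic numbers (the subsidence rate and C-band wavelength below are illustrative, not from the article): once the cumulative motion exceeds half the wavelength, the observed phase folds into (-pi, pi], and 1-D temporal unwrapping recovers the trend.

```python
import numpy as np

wavelength = 0.056            # C-band radar wavelength, ~5.6 cm (illustrative)
t = np.arange(30)             # acquisition index
displacement = -0.004 * t     # synthetic 4 mm subsidence per acquisition

phase = 4 * np.pi / wavelength * displacement   # two-way path phase
wrapped = np.angle(np.exp(1j * phase))          # observed modulo 2*pi

unwrapped = np.unwrap(wrapped)                  # temporal unwrapping
recovered = unwrapped * wavelength / (4 * np.pi)
print(np.max(np.abs(recovered - displacement)))  # ~0: trend recovered
```

Real InSAR time-series algorithms must solve this ambiguity jointly in space and time, and with noise the simple `np.unwrap` step is where they differ.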
Noise-enhanced Parametric Resonance in Perturbed Galaxies
NASA Astrophysics Data System (ADS)
Sideris, Ioannis V.; Kandrup, Henry E.
2004-02-01
This paper describes how parametric resonances associated with a galactic potential subjected to relatively low-amplitude, strictly periodic time-dependent perturbations can be impacted by pseudo-random variations in the pulsation frequency, modeled as colored noise. One aim thereby is to allow for the effects of a changing oscillation frequency as the density distribution associated with a galaxy evolves during violent relaxation. Another is to mimic the possible effects of internal substructures, satellite galaxies, and/or a high-density environment. The principal conclusions are that allowing for a variable frequency does not vitiate the effects of parametric resonance, and that, in at least some cases, such variations can increase the overall importance of parametric resonance associated with systematic pulsations. In memory of Professor H. E. Kandrup, a brilliant scientist, excellent teacher, and good friend. His genius and sense of humor will be greatly missed.
Airborne Methane Measurements using Optical Parametric Amplifiers
NASA Astrophysics Data System (ADS)
Riris, H.; Numata, K.; Li, S.; Wu, S.; Ramanathan, A.; Dawsey, M.; Abshire, J. B.; Kawa, S. R.; Mao, J.
2012-12-01
We report on airborne methane measurements with an active sensing instrument using widely tunable, seeded optical parametric generation (OPG). Methane is a strong greenhouse gas on Earth and it is also a potential biogenic marker on Mars and other planetary bodies. Methane in the Earth's atmosphere survives for a shorter time than CO2 but its impact on climate change can be larger than CO2. Carbon and methane emissions from land are expected to increase as permafrost melts exposing millennial-age carbon stocks to respiration (aerobic-CO2 and anaerobic-CH4) and fires. Methane emissions from clathrates in the Arctic Ocean and on land are also likely to respond to climate warming. However, there is considerable uncertainty in present Arctic flux levels, as well as how fluxes will change with the changing environment and more measurements are needed. In this paper we report on an airborne demonstration of atmospheric methane column optical depth measurements at 1.65 μm using widely tunable, seeded optical parametric amplifier (OPA) and a photon counting detector. Our results show good agreement between the experimentally derived optical depth measurements and theoretical calculations and follow the expected changes for aircraft altitudes from 3 to 11 km. The technique has also been used to measure carbon dioxide and monoxide, water vapor, and other trace gases in the near and mid-infrared spectral regions on the ground.
Spacelab mission dependent training parametric resource requirements study
NASA Technical Reports Server (NTRS)
Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.
1976-01-01
Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.
Multidimensional Scaling Visualization Using Parametric Entropy
NASA Astrophysics Data System (ADS)
Lopes, António M.; Tenreiro Machado, J. A.; Galhano, Alexandra M.
2015-12-01
This paper studies complex systems using a generalized multidimensional scaling (MDS) technique. Complex systems are characterized by time-series responses, interpreted as a manifestation of their dynamics. Two types of time-series are analyzed, namely 18 stock markets and the gross domestic product per capita of 18 countries. For constructing the MDS charts, indices based on parametric entropies are adopted. Multiparameter entropies allow the variation of the parameters leading to alternative sets of charts. The final MDS maps are then assembled by means of Procrustes’ method that maximizes the fit between the individual charts. Therefore, the proposed method can be interpreted as a generalization to higher dimensions of the standard technique that represents (and discretizes) items by means of single “points” (i.e. zero-dimensional “objects”). The MDS plots, involving one-, two- and three-dimensional “objects”, reveal a good performance in capturing the correlations between data.
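The MDS charting step underlying the method above can be sketched with classical (Torgerson) MDS; the item features and Euclidean distances here are synthetic stand-ins, not the paper's parametric-entropy indices:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed items in k dims from a distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J            # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]     # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy "index" vectors for 6 items forming two clusters (hypothetical data)
rng = np.random.default_rng(5)
feats = np.vstack([rng.normal(0, 0.1, (3, 4)), rng.normal(3, 0.1, (3, 4))])
D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)

X = classical_mds(D, k=2)
d_within = np.linalg.norm(X[0] - X[1])   # same cluster: small
d_across = np.linalg.norm(X[0] - X[3])   # different clusters: large
print(f"within: {d_within:.2f}, across: {d_across:.2f}")
```

Running the same embedding for several entropy-parameter settings and aligning the charts (e.g. by Procrustes fitting) gives the multi-chart construction the abstract describes.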
The scaling of time series size towards detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Gao, Xiaolei; Ren, Liwei; Shang, Pengjian; Feng, Guochen
2016-06-01
In this paper, we introduce a modification of detrended fluctuation analysis (DFA), called the multivariate DFA (MNDFA) method, based on the scaling of time series size N. In the traditional DFA method, we examined the influence of the sequence segmentation interval s, which inspired us to propose the new MNDFA model to discuss how time series size affects DFA. The effectiveness of the procedure is verified by numerical experiments with both artificial series and stock-return series. Results show that the proposed MNDFA method captures more significant information about a series than the traditional DFA method. The scaling of time series size influences the auto-correlation (AC) in time series. For certain series, we obtain an exponential relationship and calculate the slope through the fitting function. Our analysis and finite-size effect test demonstrate that an appropriate choice of the time series size can avoid unnecessary influences and make the testing results more accurate.
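For reference, the traditional DFA procedure the paper modifies can be sketched in a few lines: integrate the series into a profile, detrend it linearly in boxes of size s, and read the scaling exponent off the log-log slope of the fluctuation function (this is standard DFA-1, not the authors' MNDFA extension):

```python
import numpy as np

def dfa(x, scales):
    """Standard DFA-1: fluctuation F(s) for each box size s."""
    y = np.cumsum(x - np.mean(x))          # profile (integrated series)
    F = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for k in range(n_seg):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # linear detrending in each box
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)            # uncorrelated noise
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent alpha = {alpha:.2f}")  # ~0.5 for white noise
```

An exponent near 0.5 indicates no long-range correlation; long-range-correlated series give alpha > 0.5.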
Analysis of the temporal properties in car accident time series
NASA Astrophysics Data System (ADS)
Telesca, Luciano; Lovallo, Michele
2008-05-01
In this paper we study the time-clustering behavior of sequences of car accidents, using data from a freely available database on the internet. The Allan Factor analysis, which is a well-suited method to investigate time-dynamical behaviors in point processes, has revealed that the car accident sequences are characterized by a general time-scaling behavior, with the presence of cyclic components. These results indicate that the time dynamics of the events are not Poissonian but long range correlated, with periodicities ranging from 12 h to 1 year.
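The Allan Factor statistic is simple to compute: count events in contiguous windows of length T and compare the variance of successive count differences to the mean count. For a homogeneous Poisson process it stays near 1 at every time scale, which is the baseline the accident sequences deviate from. A minimal sketch on synthetic Poisson events:

```python
import numpy as np

def allan_factor(event_times, T, total):
    """Allan Factor AF(T) = E[(N_{k+1} - N_k)^2] / (2 E[N_k])."""
    n_win = int(total // T)
    counts, _ = np.histogram(event_times, bins=n_win, range=(0, n_win * T))
    d = np.diff(counts.astype(float))
    return np.mean(d ** 2) / (2.0 * np.mean(counts))

# Homogeneous Poisson events at unit rate over 10,000 time units
rng = np.random.default_rng(2)
total = 10_000.0
events = np.cumsum(rng.exponential(1.0, size=12_000))
events = events[events < total]

af = {T: allan_factor(events, T, total) for T in (10.0, 100.0)}
print(af)   # both ~1: Poissonian, no time-scaling behaviour
```

Clustered (long-range-correlated) point processes instead show AF(T) growing as a power law in T, the "time-scaling behavior" reported in the abstract.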
Multiple time scale complexity analysis of resting state FMRI.
Smith, Robert X; Yan, Lirong; Wang, Danny J J
2014-06-01
The present study explored multi-scale entropy (MSE) analysis to investigate the entropy of resting state fMRI signals across multiple time scales. MSE analysis was developed to distinguish random noise from complex signals since the entropy of the former decreases with longer time scales while the latter signal maintains its entropy due to a "self-resemblance" across time scales. A long resting state BOLD fMRI (rs-fMRI) scan with 1000 data points was performed on five healthy young volunteers to investigate the spatial and temporal characteristics of entropy across multiple time scales. A shorter rs-fMRI scan with 240 data points was performed on a cohort of subjects consisting of healthy young (age 23 ± 2 years, n = 8) and aged volunteers (age 66 ± 3 years, n = 8) to investigate the effect of healthy aging on the entropy of rs-fMRI. The results showed that MSE of gray matter, rather than white matter, resembles closely that of 1/f noise over multiple time scales. By filtering out high frequency random fluctuations, MSE analysis is able to reveal enhanced contrast in entropy between gray and white matter, as well as between age groups at longer time scales. Our data support the use of MSE analysis as a validation metric for quantifying the complexity of rs-fMRI signals.
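The property MSE exploits, that the entropy of random noise falls with coarse-graining while complex signals hold theirs, can be demonstrated on white noise. The sketch below uses a standard sample-entropy estimator with illustrative parameters (m = 2, r = 0.2 SD), not the study's processing pipeline:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -log of the probability that close m-templates stay close at m+1."""
    def count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return (np.sum(d < r) - len(emb)) / 2.0   # exclude self-matches
    return -np.log(count(m + 1) / count(m))

def coarse_grain(x, scale):
    """Average non-overlapping blocks of length `scale` (the MSE step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(3)
noise = rng.standard_normal(1000)      # white noise, like a 1000-point scan
r = 0.2 * np.std(noise)                # tolerance fixed from the scale-1 series
mse = [sample_entropy(coarse_grain(noise, s), m=2, r=r) for s in (1, 2, 4)]
print([round(e, 2) for e in mse])      # entropy of random noise falls with scale
```

A long-range-correlated (1/f-like) signal run through the same loop would keep its entropy roughly flat across scales, which is the gray-matter signature the study reports.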
NASA Astrophysics Data System (ADS)
Saffin, Leo; Methven, John; Gray, Sue
2016-04-01
Numerical models of the atmosphere combine a dynamical core, which approximates solutions to the adiabatic and frictionless governing equations, with the tendencies arising from the parametrization of physical processes. Tracers of potential vorticity (PV) can be used to accumulate the tendencies of parametrized physical processes and diagnose their impacts on the large-scale dynamics. This is possible because of two key properties of PV: conservation following an air mass, and invertibility, which relates the PV distribution to the balanced dynamics of the atmosphere. Applying the PV tracers to many short forecasts allows a systematic investigation of the behaviour of parametrized physical processes. The forecasts are 2.5-day lead-time forecasts run using the Met Office Unified Model (MetUM), initialised at 0Z for each day in November/December/January 2013/14. The analysis of the PV tracers has been focussed on regions where diabatic processes can be important (tropopause ridges and troughs, frontal regions and the boundary-layer top). The tropopause can be described as a surface of constant PV with a sharp PV gradient. Previous work using the PV tracers in individual case studies has shown that parametrized physical processes act to enhance the tropopause PV contrast, which can affect the Rossby wave phase speed. The short forecasts show results consistent with a systematic enhancement of tropopause PV contrast by diabatic processes and show systematically different behaviour between ridges and troughs. The implication of this work is that a failure to correctly represent the effects of diabatic processes on the tropopause in models can lead to poor Rossby wave evolution and potentially downstream forecast busts.
Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran
2013-01-01
Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143
Stimulated Parametric Emission Microscope Systems
NASA Astrophysics Data System (ADS)
Itoh, Kazuyoshi; Isobe, Keisuke
2006-10-01
We present a novel microscopy technique based on the fourwave mixing (FWM) process that is enhanced by two-photon electronic resonance induced by a pump pulse along with stimulated emission induced by a dump pulse. A Ti:sapphire laser and an optical parametric oscillator are used as light sources for the pump and dump pulses, respectively. We demonstrate that our FWM technique can be used to obtain two-dimensional microscopic images of an unstained leaf of Camellia sinensis and an unlabeled tobacco BY2 Cell.
Parametric nanomechanical amplification at very high frequency.
Karabalin, R B; Feng, X L; Roukes, M L
2009-09-01
Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach. PMID:19736969
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
State space analysis of timing: exploiting task redundancy to reduce sensitivity to timing
Cohen, Rajal G.
2012-01-01
Timing is central to many coordinated actions, and the temporal accuracy of central nervous system commands presents an important limit to skilled performance. Using target-oriented throwing in a virtual environment as an example task, this study presents a novel analysis that quantifies contributions of timing accuracy and shaping of hand trajectories to performance. Task analysis reveals that the result of a throw is fully determined by the projectile position and velocity at release; zero error can be achieved by a manifold of position and velocity combinations (solution manifold). Four predictions were tested. 1) Performers learn to release the projectile closer to the optimal moment for a given arm trajectory, achieving timing accuracy levels similar to those reported in other timing tasks (∼10 ms). 2) Performers develop a hand trajectory that follows the solution manifold such that zero error can be achieved without perfect timing. 3) Skilled performers exploit both routes to improvement more than unskilled performers. 4) Long-term improvement in skilled performance relies on continued optimization of the arm trajectory as timing limits are reached. Average and skilled subjects practiced for 6 and 15 days, respectively. In 6 days, both timing and trajectory alignment improved for all subjects, and skilled subjects showed an advantage in timing. With extended practice, performance continued to improve due to continued shaping of the trajectory, whereas timing accuracy reached an asymptote at 9 ms. We conclude that skilled subjects first maximize timing accuracy and then optimize trajectory shaping to compensate for intrinsic limitations of timing accuracy. PMID:22031769
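The "solution manifold" idea above, that many release states all produce zero error, can be made concrete with a simple ballistic toy throw (hypothetical numbers: release height 1 m, target 5 m away; not the study's virtual-environment model). For each release angle there is a release speed that lands exactly on target, so the zero-error set is a one-dimensional manifold in (angle, speed) space:

```python
import math

g, h, target = 9.81, 1.0, 5.0     # gravity, release height [m], target range [m]

def landing_x(v, theta):
    """Horizontal landing distance for release speed v and angle theta."""
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    t = (vy + math.sqrt(vy**2 + 2 * g * h)) / g   # flight time to the ground
    return vx * t

def speed_on_manifold(theta, lo=0.1, hi=100.0):
    """Bisect for the release speed that lands exactly on the target."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if landing_x(mid, theta) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for deg in (20, 35, 50):
    th = math.radians(deg)
    v = speed_on_manifold(th)
    print(f"theta={deg:2d} deg  v={v:5.2f} m/s  miss={landing_x(v, th) - target:.1e} m")
```

Shaping the hand trajectory so it runs along this manifold near release is exactly the strategy that makes the outcome insensitive to small timing errors.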
Analysis of time-domain scattering by periodic structures
NASA Astrophysics Data System (ADS)
Gao, Yixian; Li, Peijun
2016-11-01
This paper is devoted to the mathematical analysis of a time-domain electromagnetic scattering by periodic structures which are known as diffraction gratings. The scattering problem is reduced equivalently into an initial-boundary value problem in a bounded domain by using an exact transparent boundary condition. The well-posedness and stability of the solution are established for the reduced problem. Moreover, a priori energy estimates are obtained with minimum regularity requirement for the data and explicit dependence on the time.
Clustering Financial Time Series by Network Community Analysis
NASA Astrophysics Data System (ADS)
Piccardi, Carlo; Calatroni, Lisa; Bertoni, Fabio
In this paper, we describe a method for clustering financial time series which is based on community analysis, a recently developed approach for partitioning the nodes of a network (graph). A network with N nodes is associated to the set of N time series. The weight of the link (i, j), which quantifies the similarity between the two corresponding time series, is defined according to a metric based on symbolic time series analysis, which has recently proved effective in the context of financial time series. Then, searching for network communities allows one to identify groups of nodes (and then time series) with strong similarity. A quantitative assessment of the significance of the obtained partition is also provided. The method is applied to two distinct case-studies concerning the US and Italy Stock Exchange, respectively. In the US case, the stability of the partitions over time is also thoroughly investigated. The results favorably compare with those obtained with the standard tools typically used for clustering financial time series, such as the minimal spanning tree and the hierarchical tree.
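The network view of clustering can be sketched end-to-end on synthetic data. As a simplification, the similarity metric here is absolute correlation rather than the paper's symbolic-analysis metric, and community detection is reduced to connected components of the thresholded graph; the two hidden "market" factors are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
f1, f2 = rng.standard_normal(T), rng.standard_normal(T)  # two hidden factors
series = np.array([f1 + 0.4 * rng.standard_normal(T) for _ in range(4)] +
                  [f2 + 0.4 * rng.standard_normal(T) for _ in range(4)])
N = len(series)

C = np.corrcoef(series)        # link weight (i, j) = similarity of the series
threshold = 0.5                # keep only strong links

parent = list(range(N))        # union-find for connected components
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in range(N):
    for j in range(i + 1, N):
        if abs(C[i, j]) > threshold:
            parent[find(i)] = find(j)

groups = {}
for i in range(N):
    groups.setdefault(find(i), []).append(i)
clusters = sorted(sorted(g) for g in groups.values())
print(clusters)   # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Proper modularity-based community detection refines this by optimizing the partition rather than hard-thresholding links, but the pipeline (time series → weighted graph → node groups) is the same.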
The semantics of time and space: a thematic analysis.
Pollio, Howard R; Jensen, Peter R; O'Neil, Michael A
2014-02-01
Three empirical studies and one analysis of pre-existing data were performed to determine the everyday meanings of time, a major component of phenomenological analyses of human experience. To this end, participants in Studies 1 and 2 sorted time related words into groups having similar meanings, with these groups then evaluated by hierarchical clustering procedures. Results of Studies 1 and 2 produced similar clustering patterns suggesting it was possible to define the everyday meanings of time in terms of experiences of change and continuity, linear organization, tempo, and boundaries. Results of Study 3 indicated little or no effect on clustering patterns of time words when space words also were included in the set of items to be sorted. Concerns about the size and representativeness of the words used as stimuli in Studies 1, 2 and 3 led to an analysis of over 2,000 words falling under the general heading of Time in Roget's Thesaurus. Results of this analysis revealed that clusters comparable to those obtained in the other three studies also appeared in these data. These results were discussed in terms of their implications for the way in which "invisible" concepts such as time are thought about and used, particularly as related to figurative expression.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
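The implicit-explicit idea can be illustrated on a 1-D transient heat problem (a generic sketch under assumed parameters, not the paper's mixed-partition algorithm): the stiff diffusion term is integrated implicitly, so the step size is not limited by the dx^2-type explicit stability bound, while the source term is handled explicitly:

```python
import numpy as np

nx, D, dt, nsteps = 50, 1.0, 0.01, 200   # note: explicit limit would be ~dx^2/(2D)
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
u = np.zeros(nx)                                  # initial temperature
s = np.where(np.abs(x - 0.5) < 0.1, 10.0, 0.0)    # localized heat source

# Implicit operator (I - dt*D*L) with Dirichlet ends, L = second difference
A = np.eye(nx)
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = -dt * D / dx**2
    A[i, i] = 1.0 + 2.0 * dt * D / dx**2

for _ in range(nsteps):
    rhs = u + dt * s               # explicit source update
    rhs[0] = rhs[-1] = 0.0         # boundary conditions
    u = np.linalg.solve(A, rhs)    # implicit diffusion update (stable for any dt)

print(f"peak temperature: {u.max():.3f}")
```

Here dt = 0.01 is roughly 50 times the explicit stability limit, which is exactly the efficiency argument for treating the thermally stiff part implicitly; production codes would also partition the mesh so only stiff regions pay the implicit-solve cost.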
Stability over time: Is behavior analysis a trait psychology?
Vyse, Stuart
2004-01-01
Historically, behavior analysis and trait psychology have had little in common; however, recent developments in behavior analysis bring it closer to one of the core assumptions of the trait approach: the stability of behavior over time and, to a lesser extent, environments. The introduction of the concept of behavioral momentum and, in particular, the development of molar theories have produced some common features and concerns. Behavior-analytic theories of stability provide improved explanations of many everyday phenomena and make possible the expansion of behavior analysis into areas that have been inadequately addressed. PMID:22478416
Triminimal parametrization of quark mixing matrix
NASA Astrophysics Data System (ADS)
He, Xiao-Gang; Li, Shi-Wen; Ma, Bo-Qiang
2008-12-01
Starting from a new zeroth order basis for quark mixing (CKM) matrix based on the quark-lepton complementarity and the tribimaximal pattern of lepton mixing, we derive a triminimal parametrization of a CKM matrix with three small angles and a CP-violating phase as its parameters. This new triminimal parametrization has the merits of fast convergence and simplicity in application. With the quark-lepton complementary relations, we derive relations between the two unified triminimal parametrizations for quark mixing obtained in this work and for lepton mixing obtained by Pakvasa-Rodejohann-Weiler. Parametrization deviating from quark-lepton complementarity is also discussed.
Time-resolved scanning electron microscopy with polarization analysis
NASA Astrophysics Data System (ADS)
Frömter, Robert; Kloodt, Fabian; Rößler, Stefan; Frauen, Axel; Staeck, Philipp; Cavicchia, Demetrio R.; Bocklage, Lars; Röbisch, Volker; Quandt, Eckhard; Oepen, Hans Peter
2016-04-01
We demonstrate the feasibility of investigating periodically driven magnetization dynamics in a scanning electron microscope with polarization analysis based on spin-polarized low-energy electron diffraction. With the present setup, analyzing the time structure of the scattering events, we obtain a temporal resolution of 700 ps, which is demonstrated by means of imaging the field-driven 100 MHz gyration of the vortex in a soft-magnetic FeCoSiB square. Owing to the efficient intrinsic timing scheme, high-quality movies, giving two components of the magnetization simultaneously, can be recorded on the time scale of hours.
Time-frequency analysis with the continuous wavelet transform
NASA Astrophysics Data System (ADS)
Lang, W. Christopher; Forinash, Kyle
1998-09-01
The continuous wavelet transform can be used to produce spectrograms which show the frequency content of sounds (or other signals) as a function of time in a manner analogous to sheet music. While this technique is commonly used in the engineering community for signal analysis, the physics community has, in our opinion, remained relatively unaware of this development. Indeed, some find the very notion of frequency as a function of time troublesome. Here spectrograms will be displayed for familiar sounds whose pitches change with time, demonstrating the usefulness of the continuous wavelet transform.
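The spectrogram idea described above can be sketched with a hand-rolled complex Morlet continuous wavelet transform (a minimal illustration, not the authors' implementation; the normalisation and truncation choices here are ours):

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.

    Returns a (len(freqs), len(signal)) array of complex coefficients;
    |coefficients|^2 gives a spectrogram-like time-frequency energy map.
    """
    n = len(signal)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)      # samples per wavelet "width"
        m = int(min(10 * scale, n))            # truncate the wavelet support
        k = (np.arange(m) - m // 2) / scale
        wavelet = np.exp(1j * w0 * k) * np.exp(-k**2 / 2)
        wavelet /= np.sqrt(scale)              # keep amplitudes comparable
        # convolve flips its kernel, so pass the reversed conjugate to
        # obtain the usual correlation-style CWT inner product
        out[i] = np.convolve(signal, np.conj(wavelet[::-1]), mode="same")
    return out

# A chirp whose pitch rises with time: the ridge of |W| follows the
# instantaneous frequency f_inst = 50 + 100*t, like notes on sheet music.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (50 + 50 * t) * t)      # 50 Hz -> 250 Hz chirp
freqs = np.linspace(20, 300, 60)
W = morlet_cwt(x, fs, freqs)
ridge = freqs[np.abs(W).argmax(axis=0)]        # dominant frequency vs. time
```

Plotting `np.abs(W)**2` against time and frequency reproduces the "sheet music" picture the abstract describes; the ridge tracks the rising pitch.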
Sensitivity analysis of dynamic biological systems with time-delays
2010-01-01
Background Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, whether by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. Conclusions By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex
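The automatic-differentiation idea the abstract relies on can be sketched with forward-mode dual numbers; the two-component model function below is hypothetical, chosen only to show how a Jacobian column falls out without hand-coded or symbolic partial derivatives:

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers.

    A Dual carries a value and its derivative; arithmetic propagates
    both exactly, so Jacobian entries come out to machine precision."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __neg__(self):
        return Dual(-self.val, -self.dot)
    def __sub__(self, o):
        return self + (-o if isinstance(o, Dual) else Dual(-o))
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def exp(x):
    # chain rule: d/dt exp(x) = exp(x) * x'
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

def jacobian_column(f, x, j):
    """Column j of the Jacobian of f: R^n -> R^m, by seeding one dual."""
    seeded = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(x)]
    return [out.dot for out in f(seeded)]

# Hypothetical right-hand side of a small model (illustration only):
# f(x) = [x0*x1, exp(x0) - x1], so df/dx0 = [x1, exp(x0)].
f = lambda x: [x[0] * x[1], exp(x[0]) - x[1]]
col0 = jacobian_column(f, [1.0, 2.0], 0)   # expect [2.0, e]
```

Seeding one input direction at a time yields the Jacobian one column per function evaluation, which is the cheap building block a direct-decoupled sensitivity solver needs.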
Parametric study of double cellular detonation structure
NASA Astrophysics Data System (ADS)
Khasainov, B.; Virot, F.; Presles, H.-N.; Desbordes, D.
2013-05-01
A parametric numerical study is performed of a detonation cellular structure in a model gaseous explosive mixture whose decomposition occurs in two successive exothermic reaction steps with markedly different characteristic times. Kinetic and energetic parameters of both reactions are varied over a wide range in the case of one-dimensional steady and two-dimensional (2D) quasi-steady self-supported detonations. The range of governing parameters of both exothermic steps is defined where a "marked" double cellular structure exists. It is shown that the two-level cellular structure is completely governed by the kinetic parameters and the local overdrive ratio of the detonation front propagating inside large cells. Furthermore, since it is quite cumbersome to use detailed chemical kinetics in the unsteady 2D case, the proposed work should help to identify the mixtures, and the domain of their equivalence ratio, where a double detonation structure could be observed.
ADAPTIVE DATA ANALYSIS OF COMPLEX FLUCTUATIONS IN PHYSIOLOGIC TIME SERIES
PENG, C.-K.; COSTA, MADALENA; GOLDBERGER, ARY L.
2009-01-01
We introduce a generic framework of dynamical complexity to understand and quantify fluctuations of physiologic time series. In particular, we discuss the importance of applying adaptive data analysis techniques, such as the empirical mode decomposition algorithm, to address the challenges of nonlinearity and nonstationarity that are typically exhibited in biological fluctuations. PMID:20041035
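A bare-bones version of the empirical mode decomposition sift mentioned above can be written in a few lines (a sketch only: production EMD uses mirror extension to tame end effects and a proper stopping criterion rather than a fixed sift count):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass of EMD: subtract the mean of the upper and
    lower cubic-spline envelopes through the local extrema."""
    d = np.diff(x)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return x, True                      # monotonic residue: stop
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0, False

def extract_imf(x, t, n_sift=10):
    """Crude first intrinsic mode function via a fixed number of sifts."""
    h = x.copy()
    for _ in range(n_sift):
        h, done = sift_once(h, t)
        if done:
            break
    return h

# Fast 5 Hz oscillation riding on a slow 0.5 Hz drift: the first IMF
# should recover mostly the fast, locally-highest-frequency component.
t = np.linspace(0, 4, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 0.5 * t)
imf1 = extract_imf(x, t)
```

Because the envelopes adapt to the data rather than to fixed basis functions, the decomposition handles nonstationary oscillations; the spline extrapolation at the boundaries is exactly where the end effects discussed elsewhere in this collection arise.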
Analysis of Complex Intervention Effects in Time-Series Experiments.
ERIC Educational Resources Information Center
Bower, Cathleen
An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
Efficient primary and parametric resonance excitation of bistable resonators
NASA Astrophysics Data System (ADS)
Ramini, A.; Alcheikh, N.; Ilyas, S.; Younis, M. I.
2016-09-01
We experimentally demonstrate an efficient approach to excite primary and parametric (up to the 4th) resonances of microelectromechanical system (MEMS) arch resonators with large vibrational amplitudes. A single-crystal silicon in-plane arch microbeam is fabricated such that it can be excited axially from one of its ends by a parallel-plate electrode. Its micro/nano scale vibrations are transduced using a high-speed camera. Through the parallel-plate electrode, a time-varying electrostatic force is applied, which is converted into a time-varying axial force that dynamically modulates the stiffness of the arch resonator. Due to the initial curvature of the structure, not only parametric excitation but also primary resonance is induced. An experimental investigation is conducted comparing the response of the arch near primary resonance under the axial excitation to that under classical parallel-plate actuation, where the arch itself forms an electrode. The results show that the axial excitation can be more efficient and requires less power for primary resonance excitation. Moreover, unlike the classical method, where the structure is vulnerable to the dynamic pull-in instability, the axial excitation technique can provide large-amplitude motion while protecting the structure from pull-in. In addition to primary resonance, parametric resonances are demonstrated at twice, one-half, and two-thirds the primary resonance frequency. The ability to actuate primary and/or parametric resonances can serve various applications, such as resonator-based logic and memory devices.
Rupture and healing of one-dimensional chains in a parametric magnetic ratchet potential
NASA Astrophysics Data System (ADS)
Tierno, Pietro; Reddy, Sathavaram V.; Johansen, Tom H.; Fischer, Thomas M.
2007-04-01
Transverse paramagnetic particle chains parametrically driven by a magnetic ratchet potential rupture and heal upon collision with an obstacle. The overdamped transverse dynamics is frozen during the time the particles stay in the ratchet potential wells and kicked during the time the particles hop to the next well. On time scales large compared to the parametric modulation period the healing of the hole in the chain is determined by dipolar repulsion and hydrodynamic friction of the paramagnetic particles.
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general, and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
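The fluctuation function that MF-DFA builds on can be sketched as follows (only the monofractal q = 2 case, with a plain log-log fit in place of the paper's scaling-identification regression):

```python
import numpy as np

def dfa_fluctuation(x, scales, order=1):
    """Detrended fluctuation function F(s) (the q = 2 case of MF-DFA)."""
    y = np.cumsum(x - np.mean(x))             # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for k in range(n_seg):
            seg = y[k * s:(k + 1) * s]
            i = np.arange(s)
            coef = np.polyfit(i, seg, order)  # local polynomial trend
            sq.append(np.mean((seg - np.polyval(coef, i)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# White noise has generalized Hurst exponent h ~ 0.5, i.e. F(s) ~ s^0.5;
# a crossover would show up as a change of slope in log F vs. log s.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa_fluctuation(x, scales)
h = np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope of log-log fit
```

A single least-squares slope like `h` is exactly the step the paper replaces with a piecewise regression model, so that the number and location of slope changes (crossovers) are inferred statistically rather than by eye.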
Improvements in Accurate GPS Positioning Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Koyama, Yuichiro; Tanaka, Toshiyuki
Although the Global Positioning System (GPS) is used widely in car navigation systems, cell phones, surveying, and other areas, several issues still exist. We focus on the continuous data received in public use of GPS, and propose a new positioning algorithm that uses time series analysis. By fitting an autoregressive model to the time series model of the pseudorange, we propose an appropriate state-space model. We apply the Kalman filter to the state-space model and use the pseudorange estimated by the filter in our positioning calculations. The results of the authors' positioning experiment show that the accuracy of the proposed method is much better than that of the standard method. In addition, as we can obtain valid values estimated by time series analysis using the state-space model, the proposed state-space model can be applied to several other fields.
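A minimal scalar sketch of the proposed pipeline, assuming an AR(1) pseudorange model (the paper fits the AR order to data; the noise variances below are illustrative only):

```python
import numpy as np

def kalman_ar1(z, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for an AR(1) state x_t = phi*x_{t-1} + w_t,
    observed as z_t = x_t + v_t, with w ~ N(0, q) and v ~ N(0, r).

    A stand-in for the paper's idea: fit an autoregressive model to the
    pseudorange series, cast it as a state-space model, and filter the
    pseudorange before it enters the positioning calculation."""
    xs = np.empty(len(z))
    x, p = x0, p0
    for t, zt in enumerate(z):
        x, p = phi * x, phi * phi * p + q     # predict
        k = p / (p + r)                       # Kalman gain
        x = x + k * (zt - x)                  # update with measurement
        p = (1 - k) * p
        xs[t] = x
    return xs

# Noisy observations of a smooth AR(1) signal: the filtered estimate
# should sit much closer to the truth than the raw measurements.
rng = np.random.default_rng(1)
n, phi = 2000, 0.99
truth = np.empty(n)
truth[0] = 0.0
for t in range(1, n):
    truth[t] = phi * truth[t - 1] + rng.normal(0, 0.1)
z = truth + rng.normal(0, 1.0, n)
est = kalman_ar1(z, phi, q=0.01, r=1.0)
```

Feeding `est` rather than `z` into a standard least-squares position solution is the substitution the abstract describes.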
Time Series Analysis and Prediction of AE and Dst Data
NASA Astrophysics Data System (ADS)
Takalo, J.; Lohikiski, R.; Timonen, J.; Lehtokangas, M.; Kaski, K.
1996-12-01
A new method to analyse the structure function has been constructed and used in the analysis of the AE time series for the years 1978-85 and the Dst time series for 1957-84. The structure function (SF) was defined by S(l) = ⟨|x(t_i + lΔt) - x(t_i)|⟩, where Δt is the sampling time, l is an integer, and ⟨|·|⟩ denotes the average of absolute values. If a time series is self-affine, its SF should scale for small values of l as S(l) ∝ l^H, where 0 < H < 1 is called the scaling exponent. It is known that for power-law (coloured) noise, which has P ∝ f^(-α), α ≈ 2H + 1 for 1 < α < 3. In this work the scaling exponent H was analysed by considering the local slopes d log S(l) / d log l between two adjacent points as a function of l. For a self-affine time series the local slopes should stay constant, at least for small values of l. The AE time series was found to be self-affine such that the scaling exponent changes at a time scale of 113 (±9) minutes. On the other hand, in the SF analysis the Dst data were dominated by the 24-hour and 27-day periods, the 27-day period being further modulated by the annual variation. These differences between the two time series arise from the difference in their periodicities in relation to their respective characteristic time scales. In the AE data the dominating periods are longer than that related to the characteristic time scale, i.e. they appear in the flatter part of the power spectrum; this is why affinity is the dominating feature of the AE time series. In contrast, the dominating periods of the Dst data are shorter than the characteristic time scale and appear in the steeper part of the spectrum; consequently, periodicity is the dominating feature of the Dst data. Because of their different dynamic characteristics, prediction of the Dst and AE time series appears to presuppose rather different approaches. In principle it is easier to produce the gross features of the Dst time series correctly as it is periodicity
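The structure function and its local slopes are straightforward to compute; a sketch on a synthetic self-affine series (a random walk, for which H = 0.5):

```python
import numpy as np

def structure_function(x, lags):
    """S(l) = <|x(t_i + l*dt) - x(t_i)|>, averaged over all valid t_i."""
    return np.array([np.mean(np.abs(x[l:] - x[:-l])) for l in lags])

def local_slopes(S, lags):
    """d log S / d log l between adjacent lags; flat <=> self-affine."""
    return np.diff(np.log(S)) / np.diff(np.log(lags))

# A random walk is self-affine with scaling exponent H = 0.5, so
# S(l) ~ l^0.5 and the local slopes should hover near 0.5.
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(100000))
lags = np.array([1, 2, 4, 8, 16, 32, 64])
S = structure_function(x, lags)
H = local_slopes(S, lags)
```

A crossover like the 113-minute break reported for AE would appear as the lag beyond which these local slopes stop being constant.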
A Human Factors Analysis of EVA Time Requirements
NASA Technical Reports Server (NTRS)
Pate, Dennis W.
1997-01-01
Human Factors Engineering (HFE) is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. During the summer of 1995, a human factors motion and time study was initiated with the goals of developing a database of EVA task times and developing a method of utilizing the database to predict how long an EVA should take. Initial development relied on the EVA activities performed during the STS-61 (Hubble) mission. The first step of the study was to become familiar with EVAs, the previous task-time studies, and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task-time modifiers was developed. Data were collected from videotaped footage of two entire STS-61 EVA missions and portions of several others, each with two EVA astronauts. Feedback from the analysis of the data was used to further refine the primitives and modifiers used. The project was continued during the summer of 1996, during which data on human errors were also collected and analyzed, along with additional data from the STS-71 mission. Analysis-of-variance techniques for categorical data were used to determine which factors may affect the primitive times and how much of an effect they have. Probability distributions for the various tasks were also generated. Further analysis of the modifiers and interactions is planned.
Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation
NASA Astrophysics Data System (ADS)
Pentaris, Fragkiskos P.; Fouskitakis, George N.
2014-05-01
The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement, and they tend to be time-effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which monitors the broad region of the south Hellenic arc, an active seismic region that functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity, high-accuracy seismographs providing acceleration measurements, established at the basement (the structure's foundation), presently considered as the ground's acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic, both non-parametric (frequency-based) and parametric, methods for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency Response Function (FRF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogeneous input (ARX) and AutoRegressive Moving-Average with eXogeneous input (ARMAX) models[4, 5
Variance Based Measure for Optimization of Parametric Realignment Algorithms
Mehring, Carsten
2016-01-01
Neuronal responses to sensory stimuli or neuronal responses related to behaviour are often extracted by averaging neuronal activity over a large number of experimental trials. Such trial-averaging is carried out to reduce noise and to diminish the influence of other signals unrelated to the corresponding stimulus or behaviour. However, if the recorded neuronal responses are jittered in time with respect to the corresponding stimulus or behaviour, averaging over trials may distort the estimation of the underlying neuronal response. Temporal jitter between single-trial neural responses can be partially or completely removed using realignment algorithms. Here, we present a measure, named difference of time-averaged variance (dTAV), which can be used to evaluate the performance of a realignment algorithm without knowing the internal triggers of neural responses. Using simulated data, we show that using dTAV to optimize the parameter values for an established parametric realignment algorithm improved its efficacy and, therefore, reduced the jitter of neuronal responses. By removing the jitter more effectively and, therefore, enabling more accurate estimation of neuronal responses, dTAV can improve analysis and interpretation of the neural responses. PMID:27159490
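The exact dTAV construction is the paper's; as a hedged stand-in, the sketch below scores realignment by the drop in time-averaged across-trial variance, which captures the same intuition that good realignment reduces jitter-induced variance (the toy cross-correlation aligner and all parameter values are ours):

```python
import numpy as np

def time_averaged_variance(trials):
    """Across-trial variance at each time point, averaged over time."""
    return np.mean(np.var(trials, axis=0))

def align_by_xcorr(trials, template):
    """Shift each trial to best match a template (toy realignment)."""
    out = np.empty_like(trials)
    lags = np.arange(-20, 21)
    for i, tr in enumerate(trials):
        scores = [np.dot(np.roll(tr, -l), template) for l in lags]
        out[i] = np.roll(tr, -lags[int(np.argmax(scores))])
    return out

# Jittered copies of a Gaussian response bump plus noise: realignment
# should lower the time-averaged across-trial variance, so the drop
# (before - after) serves as a performance score for the aligner.
rng = np.random.default_rng(3)
t = np.arange(200)
template = np.exp(-((t - 100) ** 2) / 50.0)
trials = np.stack([np.roll(template, rng.integers(-15, 16))
                   + rng.normal(0, 0.05, t.size) for _ in range(50)])
before = time_averaged_variance(trials)
after = time_averaged_variance(align_by_xcorr(trials, template))
```

Crucially, this score needs no knowledge of the true single-trial onset times, only the recorded trials themselves, which is the property the abstract emphasizes.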
Satellite time series analysis using Empirical Mode Decomposition
NASA Astrophysics Data System (ADS)
Pannimpullath, R. Renosh; Doolaeghe, Diane; Loisel, Hubert; Vantrepotte, Vincent; Schmitt, Francois G.
2016-04-01
Geophysical fields possess large fluctuations over many spatial and temporal scales. Successive satellite images provide an interesting sampling of this spatio-temporal multiscale variability. Here we propose to study such variability by performing satellite time series analysis, pixel by pixel, using Empirical Mode Decomposition (EMD). EMD is a time series analysis technique able to decompose an original time series into a sum of modes, each one having a different mean frequency. It can be used to smooth signals and to extract trends. It is built in a data-adaptive way and is able to extract information from nonlinear signals. Here we use MERIS Suspended Particulate Matter (SPM) data, on a weekly basis, over 10 years, giving 458 successive time steps. We have selected five different regions of coastal waters for the present study: Vietnam coastal waters, the Brahmaputra region, the St. Lawrence, the English Channel and the McKenzie. These regions have high SPM concentrations due to large-scale river runoff. Trend and Hurst exponents are derived for each pixel in each region. The energy is also extracted using Hilbert Spectral Analysis (HSA) along with the EMD method, and the energy of each mode is normalised by the total energy computed over all modes for each region.
Metagenomics meets time series analysis: unraveling microbial community dynamics.
Faust, Karoline; Lahti, Leo; Gonze, Didier; de Vos, Willem M; Raes, Jeroen
2015-06-01
The recent increase in the number of microbial time series studies offers new insights into the stability and dynamics of microbial communities, from the world's oceans to human microbiota. Dedicated time series analysis tools allow taking full advantage of these data. Such tools can reveal periodic patterns, help to build predictive models or, on the contrary, quantify irregularities that make community behavior unpredictable. Microbial communities can change abruptly in response to small perturbations, linked to changing conditions or the presence of multiple stable states. With sufficient samples or time points, such alternative states can be detected. In addition, temporal variation of microbial interactions can be captured with time-varying networks. Here, we apply these techniques on multiple longitudinal datasets to illustrate their potential for microbiome research.
Mixed Spectrum Analysis on fMRI Time-Series.
Kumar, Arun; Lin, Feng; Rajapakse, Jagath C
2016-06-01
Temporal autocorrelation present in functional magnetic resonance image (fMRI) data poses challenges to its analysis. The existing approaches handling autocorrelation in fMRI time-series often presume a specific model of autocorrelation such as an auto-regressive model. The main limitation here is that the correlation structure of voxels is generally unknown and varies in different brain regions because of different levels of neurogenic noises and pulsatile effects. Enforcing a universal model on all brain regions leads to bias and loss of efficiency in the analysis. In this paper, we propose the mixed spectrum analysis of the voxel time-series to separate the discrete component corresponding to input stimuli and the continuous component carrying temporal autocorrelation. A mixed spectral analysis technique based on an M-spectral estimator is proposed, which effectively removes autocorrelation effects from voxel time-series and identifies significant peaks of the spectrum. As the proposed method does not assume any prior model for the autocorrelation effect in voxel time-series, varying correlation structure among the brain regions does not affect its performance. We have modified the standard M-spectral method for application to a spatial set of time-series by incorporating the contextual information related to the continuous spectrum of neighborhood voxels, thus reducing the computation cost considerably. Likelihood of the activation is predicted by comparing the amplitude of the discrete component at the stimulus frequency of voxels across the brain using a normal distribution, and by modeling spatial correlations among the likelihoods with a conditional random field. We also demonstrate the application of the proposed method in detecting other desired frequencies.
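The mixed-spectrum idea, a discrete line at the stimulus frequency riding on a continuous autocorrelated background, can be illustrated with a plain periodogram (the M-spectral estimator itself is not reproduced here, and the fMRI-like numbers are hypothetical):

```python
import numpy as np

def periodogram(x, fs):
    """Plain periodogram; in a mixed spectrum a sharp line (stimulus)
    sits on top of a smooth continuous part (autocorrelated noise)."""
    X = np.fft.rfft(x - np.mean(x))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return f, np.abs(X) ** 2 / len(x)

# AR(1) "physiological" noise plus a weak periodic stimulus response at
# 0.05 Hz (a hypothetical block-design frequency, one volume per second):
# the line component dominates its own bin despite the coloured background.
rng = np.random.default_rng(4)
fs, n = 1.0, 4096
noise = np.empty(n)
noise[0] = 0.0
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 1.0)
x = noise + 2.0 * np.sin(2 * np.pi * 0.05 * np.arange(n) / fs)
f, P = periodogram(x, fs)
f_peak = f[np.argmax(P)]
```

The paper's contribution is to estimate and subtract the continuous part without assuming an AR form for it, so that the significance of the line at the stimulus frequency can be assessed uniformly across brain regions.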
A new parametric approach for wind profiling with Doppler Radar
NASA Astrophysics Data System (ADS)
Le Foll, GwenaëLle; Larzabal, Pascal; Clergeot, Henri; Petitdidier, Monique
1997-07-01
In this paper, we propose a new approach for wind profile extraction with Doppler radar. To perform this, we first focus on the analysis and modeling of VHF or UHF waves backscattered by clear-air turbulence. A physical description of the backscattered wave is given. This description involves a spectral model that includes a parametric profile of the Doppler spectrum, from which a parametric description of the wind profile can be easily generated. The sounding volume is divided into slabs whose thickness is consistent with that of the expected homogeneous turbulent layer. The echo spectrum of each slab is assumed to be Gaussian. Thus, for the range gate, the backscattered spectrum is a priori non-Gaussian, since it is weighted by a nonconstant reflectivity. This represents a more realistic assumption than the classical ones. The realistic temporal model thereby obtained can be used in simulation, which provides a valuable tool for testing the extraction algorithm. An original recursive fitting, in terms of maximum likelihood, between the experimentally recorded spectrum and the parametric candidate spectrum is described and implemented as a second-order, steepest-descent algorithm. This optimization problem is solved in a weighted fashion over the entire gate simultaneously. The regularized parametric method described in this paper is a way to minimize some of the drawbacks encountered with traditional methods. Simulations reveal good statistical performance compared with traditional methods. The algorithm is then tested on real data. To achieve this, original methods are proposed for noise suppression and clutter removal.
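The slab-spectrum model can be illustrated by fitting a Gaussian Doppler spectrum to a synthetic gate by least squares (a simplified stand-in for the paper's weighted maximum-likelihood fit over the whole gate; all numbers are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, power, v_mean, width):
    """Gaussian Doppler spectrum of one turbulent slab: total power,
    mean radial velocity (the wind estimate) and spectral width."""
    return power * np.exp(-((v - v_mean) ** 2) / (2 * width ** 2))

# Synthetic gate spectrum: one Gaussian echo plus a white noise floor.
rng = np.random.default_rng(5)
v = np.linspace(-20, 20, 256)                   # radial velocity axis (m/s)
spec = gauss(v, 5.0, 3.2, 1.5) + rng.normal(0, 0.1, v.size)

# Initialize from the raw peak, then refine by nonlinear least squares.
p0 = [spec.max(), v[np.argmax(spec)], 2.0]
popt, _ = curve_fit(gauss, v, spec, p0=p0)
v_wind = popt[1]                                # retrieved mean velocity
```

In the paper the gate spectrum is a reflectivity-weighted mixture of several such slab Gaussians, fitted jointly; this single-slab fit shows only the basic parametric-estimation step.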
Parametric Cost Estimates for an International Competitive Edge
Murphy, L.T.
2006-07-01
This paper summarizes the progress to date by CH2M HILL and the UKAEA in development of a parametric modelling capability for estimating the costs of large nuclear decommissioning projects in the United Kingdom (UK) and Europe. The ability to successfully apply parametric cost estimating techniques will be a key factor to commercial success in the UK and European multi-billion dollar waste management, decommissioning and environmental restoration markets. The most useful parametric models will be those that incorporate individual components representing major elements of work: reactor decommissioning, fuel cycle facility decommissioning, waste management facility decommissioning and environmental restoration. Models must be sufficiently robust to estimate indirect costs and overheads, permit pricing analysis and adjustment, and accommodate the intricacies of international monetary exchange, currency fluctuations and contingency. The development of a parametric cost estimating capability is also a key component in building a forward estimating strategy. The forward estimating strategy will enable the preparation of accurate and cost-effective out-year estimates, even when work scope is poorly defined or as yet indeterminate. Preparation of cost estimates for work outside the organization's current sites, for which detailed measurement is not possible and historical cost data do not exist, will also be facilitated. (authors)
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-01
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It will be argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting.
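The theta-logistic model at the centre of this argument can be simulated and fitted parametrically in a few lines (theta is fixed at 1 so that the fit reduces to a linear regression; the Bayesian model checking the paper advocates is not shown):

```python
import numpy as np

def theta_logistic_step(n, r, k, theta, eps):
    """One step of the stochastic theta-logistic map:
    N' = N * exp(r * (1 - (N/K)^theta) + eps)."""
    return n * np.exp(r * (1.0 - (n / k) ** theta) + eps)

# Simulate a noisy series, then recover r and K for theta = 1 via the
# linear regression log(N_{t+1}/N_t) = r - (r/K) * N_t + eps.
rng = np.random.default_rng(6)
r_true, k_true = 0.5, 100.0
n = np.empty(500)
n[0] = 50.0
for t in range(1, 500):
    n[t] = theta_logistic_step(n[t - 1], r_true, k_true, 1.0,
                               rng.normal(0, 0.05))
y = np.log(n[1:] / n[:-1])
b, a = np.polyfit(n[:-1], y, 1)     # slope b = -r/K, intercept a = r
r_hat, k_hat = a, -a / b
```

Simulating new series from the fitted (r_hat, k_hat) and comparing them to the observed data is the kind of cheap posterior-predictive check the paper proposes for diagnosing parametric forecasting failure.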
Numerical bifurcation analysis of immunological models with time delays
NASA Astrophysics Data System (ADS)
Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady
2005-12-01
In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
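A minimal taste of numerically analysing a DDE: explicit-Euler integration of Hutchinson's delayed logistic equation, showing the delay-induced transition from a stable equilibrium to oscillation (dedicated bifurcation packages such as DDE-BIFTOOL track such transitions properly; this is only a sketch):

```python
import numpy as np

def integrate_dde(f, history, tau, t_end, dt=0.001):
    """Explicit Euler for x'(t) = f(x(t), x(t - tau)), with a constant
    prehistory; the delayed value is read back from the stored path."""
    lag = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.empty(n + lag + 1)
    x[:lag + 1] = history                 # x(t) = history for t <= 0
    for i in range(lag, lag + n):
        x[i + 1] = x[i] + dt * f(x[i], x[i - lag])
    return x[lag:]

# Hutchinson's delayed logistic x' = r*x*(1 - x(t - tau)): the fixed
# point x = 1 is stable for r*tau < pi/2 and oscillates beyond it,
# the kind of delay-induced (Hopf) bifurcation such analyses track.
r = 1.0
rhs = lambda x, xd: r * x * (1 - xd)
x_stable = integrate_dde(rhs, 0.5, 1.0, 200.0)   # r*tau = 1 < pi/2
x_cycle = integrate_dde(rhs, 0.5, 2.0, 200.0)    # r*tau = 2 > pi/2
```

Sweeping `tau` and recording the amplitude of the late-time solution gives a crude one-parameter bifurcation diagram; the numerical methods surveyed in the paper compute the same transitions directly from the characteristic roots.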