Vivancos, E; Healy, C; Mueller, F; Whalley, D
2001-05-09
Embedded systems often have real-time constraints. Traditional timing analysis statically determines the maximum execution time of a task or a program in a real-time system. These systems typically depend on the worst-case execution time (WCET) of tasks in order to make static scheduling decisions so that tasks can meet their deadlines. Static determination of worst-case execution times imposes numerous restrictions on real-time programs, including that the maximum number of iterations of each loop must be known statically. These restrictions can significantly limit the class of programs that are suitable for a real-time embedded system. This paper describes work in progress that uses static timing analysis to aid in making dynamic scheduling decisions. For instance, different algorithms with varying levels of accuracy may be selected based on each algorithm's predicted worst-case execution time and the time allotted for the task. We represent the worst-case execution time of a function or a loop as a formula, where the unknown values affecting the execution time are parameterized. This parametric timing analysis produces formulas that can be quickly evaluated at run-time, so dynamic scheduling decisions can be made with little overhead, whether scheduling a task or choosing algorithms within a task. Benefits of this work include expanding the class of applications that can be used in a real-time system, improving the accuracy of dynamic scheduling decisions, and utilizing system resources more effectively.
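The run-time use of parametric WCET formulas described above can be sketched in a few lines; the coefficients below are invented for illustration (a real tool derives them from the program by static timing analysis):

```python
# Toy illustration of parametric timing analysis: the WCET of a loop is a
# formula in the (statically unknown) iteration count n, evaluated cheaply
# at run-time to pick an algorithm that fits the time budget.
# The coefficients are hypothetical, not derived from a real analysis.

def wcet_fast(n):        # WCET formula for a cheap, less accurate algorithm
    return 50 + 12 * n   # cycles: fixed overhead + per-iteration cost

def wcet_accurate(n):    # WCET formula for a costlier, more accurate algorithm
    return 80 + 45 * n

def choose_algorithm(n, budget_cycles):
    """Dynamic scheduling decision: prefer the accurate algorithm if its
    predicted worst case fits the allotted time, else fall back."""
    if wcet_accurate(n) <= budget_cycles:
        return "accurate"
    if wcet_fast(n) <= budget_cycles:
        return "fast"
    return "reject"      # task cannot meet its deadline at all

print(choose_algorithm(100, 10_000))   # accurate: 80 + 45*100 = 4580 cycles
print(choose_algorithm(100, 2_000))    # fast: 50 + 12*100 = 1250 cycles
print(choose_algorithm(1_000, 2_000))  # reject: even the fast variant overruns
```

Evaluating each formula costs a handful of arithmetic operations, which is what makes the scheme viable inside a scheduler.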
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns elicited by the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data when studying their association with disease and health.
Parametric Time-Scale Methods in Signal Analysis
1993-06-30
[Extraction-damaged excerpt: equation fragments and figure captions from a speech-analysis example. Recoverable content: (a) a signal from the speech vowel /oo/, filtered to retain only the first two formant frequencies; (b) estimated frequency tracks of the first and second formants; the method is applied to speech analysis using sinusoidal harmonic components.]
Time-varying linear and nonlinear parametric model for Granger causality analysis.
Li, Yang; Wei, Hua-Liang; Billings, Steve A; Liao, Xiao-Feng
2012-04-01
Statistical measures such as coherence, mutual information, or correlation are usually applied to evaluate the interactions between two or more signals. However, these methods cannot distinguish the directions of flow between two signals. The capability to detect causalities is highly desirable for understanding the cooperative nature of complex systems. The main objective of this work is to present a linear and nonlinear time-varying parametric modeling and identification approach that can be used to detect Granger causality, which may change with time and may not be detected by traditional methods. A numerical example, in which the exact causal influence relationships are known, is presented to illustrate the performance of the method for time-varying Granger causality detection. The approach is applied to EEG signals to track and detect hidden potential causalities. One advantage of the proposed model, compared with traditional Granger causality, is that the results are easier to interpret and yield additional insights into the transient directed dynamical Granger causality interactions.
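The paper's time-varying model is beyond a short sketch, but the underlying Granger idea (does adding the other signal's past reduce prediction error?) can be illustrated with a time-invariant AR toy in plain NumPy; the coupling coefficients are invented for illustration:

```python
import numpy as np

# Simulated pair: x drives y with one sample of delay; y does not drive x.
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def ar_residual_var(target, predictors):
    """Least-squares fit of target[t] on lag-1 predictors; returns the
    residual variance (the prediction-error power of the AR model)."""
    X = np.column_stack([p[:-1] for p in predictors])
    b, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    return np.var(target[1:] - X @ b)

# Granger index: log ratio of restricted to full prediction-error variance.
gc_x_to_y = np.log(ar_residual_var(y, [y]) / ar_residual_var(y, [y, x]))
gc_y_to_x = np.log(ar_residual_var(x, [x]) / ar_residual_var(x, [x, y]))
print(f"x -> y: {gc_x_to_y:.2f}, y -> x: {gc_y_to_x:.2f}")  # large vs. near zero
```

The time-varying extension in the paper replaces the fixed coefficients with coefficients tracked over time, so the index itself becomes a function of time.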
NASA Astrophysics Data System (ADS)
Choi, Jongseong
The performance of a hypersonic flight vehicle will depend on existing materials and fuels; this work presents the performance of the ideal scramjet engine for three different combustion chamber materials and three different candidate fuels. Engine performance is explored by parametric cycle analysis for the ideal scramjet as a function of the material's maximum service temperature and the lower heating value of jet engine fuels. The thermodynamic analysis is based on the Brayton cycle, as similarly employed in describing the performance of the ideal ramjet, turbojet, and fanjet engines. The objective of this work is to explore material operating temperatures and fuel possibilities for the combustion chamber of a scramjet propulsion system and to show how they relate to scramjet performance through seven engine parameters: specific thrust, fuel-to-air ratio, thrust-specific fuel consumption, thermal efficiency, propulsive efficiency, overall efficiency, and thrust flux. Such an analysis has not previously been presented in the scientific literature. This work yields simple algebraic equations for scramjet performance similar to those of the ideal ramjet, ideal turbojet, and ideal turbofan engines.
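As a rough illustration of this kind of parametric cycle analysis, the sketch below uses the standard ideal-ramjet Brayton-cycle relations as a stand-in; the actual scramjet equations in this work may differ, and the input temperatures and heating value are example figures, not the paper's data:

```python
import math

def ideal_ramjet(M0, T0, Tmax, h_PR, gamma=1.4, cp=1004.0, R=287.0):
    """Ideal Brayton-cycle ramjet performance, parameterized by the
    material-limited maximum temperature Tmax (K) and the fuel lower
    heating value h_PR (J/kg). Standard ideal-cycle relations; any
    scramjet-specific corrections from the paper are omitted."""
    a0 = math.sqrt(gamma * R * T0)            # ambient speed of sound, m/s
    tau_r = 1 + (gamma - 1) / 2 * M0**2       # ram temperature ratio
    tau_lam = Tmax / T0                       # enthalpy ratio Tt4/T0
    V9_V0 = math.sqrt(tau_lam / tau_r)        # exit/flight velocity ratio
    F_mdot = a0 * M0 * (V9_V0 - 1)            # specific thrust, N/(kg/s)
    f = cp * T0 * (tau_lam - tau_r) / h_PR    # fuel-to-air ratio
    S = f / F_mdot                            # thrust-specific fuel consumption
    eta_T = 1 - 1 / tau_r                     # thermal efficiency
    eta_P = 2 / (V9_V0 + 1)                   # propulsive efficiency
    return F_mdot, f, S, eta_T, eta_P

# Hydrogen (h_PR ~ 120 MJ/kg) at Mach 6 with a 2500 K material limit:
F, f, S, eta_T, eta_P = ideal_ramjet(6.0, 220.0, 2500.0, 120e6)
print(F, f, S, eta_T, eta_P)
```

Sweeping `Tmax` over candidate material limits and `h_PR` over candidate fuels reproduces the kind of parametric trade study the abstract describes.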
Luo, Sheng
2014-02-20
Impairment caused by Parkinson's disease (PD) is multidimensional (e.g., sensory, functional, and cognitive) and progressive. Its multidimensional nature precludes a single outcome to measure disease progression. Clinical trials of PD use multiple categorical and continuous longitudinal outcomes to assess the treatment effects on overall improvement. A terminal event such as death or dropout can stop the follow-up process. Moreover, the time to the terminal event may be dependent on the multivariate longitudinal measurements. In this article, we consider a joint random-effects model for the correlated outcomes. A multilevel item response theory model is used for the multivariate longitudinal outcomes, and a parametric accelerated failure time model is used for the failure time because of the violation of the proportional hazards assumption. These two models are linked via random effects. The Bayesian inference via MCMC is implemented in the 'BUGS' language. Our proposed method is evaluated by a simulation study and is applied to the DATATOP study, a motivating clinical trial to determine if deprenyl slows the progression of PD. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time-domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
Parametric functional principal component analysis.
Sang, Peijun; Wang, Liangliang; Cao, Jiguo
2017-09-01
Functional principal component analysis (FPCA) is a popular approach in functional data analysis to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). Most existing FPCA approaches use a set of flexible basis functions, such as a B-spline basis, to represent the FPCs, and control the smoothness of the FPCs by adding roughness penalties. However, the flexible representations pose difficulties for users in understanding and interpreting the FPCs. In this article, we consider a variety of applications of FPCA and find that, in many situations, the shapes of the top FPCs are simple enough to be approximated using simple parametric functions. We propose a parametric approach to estimate the top FPCs to enhance their interpretability for users. Our parametric approach can also circumvent the smoothing-parameter selection process of conventional nonparametric FPCA methods. In addition, our simulation study shows that the proposed parametric FPCA is more robust when outlier curves exist. The parametric FPCA method is demonstrated by analyzing several datasets from a variety of applications. © 2017, The International Biometric Society.
Parametric Transformation Analysis
NASA Technical Reports Server (NTRS)
Gary, G. Allan
2003-01-01
Because twisted coronal features are important proxies for predicting solar eruptive events, and, yet not clearly understood, we present new results to resolve the complex, non-potential magnetic field configurations of active regions. This research uses free-form deformation mathematics to generate the associated coronal magnetic field. We use a parametric representation of the magnetic field lines such that the field lines can be manipulated to match the structure of EUV and SXR coronal loops. The objective is to derive sigmoidal magnetic field solutions which allows the beta greater than 1 regions to be included, aligned and non-aligned electric currents to be calculated, and the Lorentz force to be determined. The advantage of our technique is that the solution is independent of the unknown upper and side boundary conditions, allows non-vanishing magnetic forces, and provides a global magnetic field solution, which contains high- and low-beta regimes and is consistent with all the coronal images of the region. We show that the mathematical description is unique and physical.
Lin, Sheng-Hsuan; Young, Jessica; Logan, Roger; Tchetgen Tchetgen, Eric J; VanderWeele, Tyler J
2017-03-01
The assessment of direct and indirect effects with time-varying mediators and confounders is a common but challenging problem, and standard mediation analysis approaches are generally not applicable in this context. The mediational g-formula was recently proposed to address this problem, paired with a semiparametric estimation approach to evaluate longitudinal mediation effects empirically. In this article, we develop a parametric estimation approach to the mediational g-formula, including a feasible algorithm implemented in a freely available SAS macro. In the Framingham Heart Study data, we apply this method to estimate the interventional analogues of natural direct and indirect effects of smoking behaviors sustained over a 10-year period on blood pressure when considering weight change as a time-varying mediator. Compared with not smoking, smoking 20 cigarettes per day for 10 years was estimated to increase blood pressure by 1.2 mm Hg (95% CI: -0.7, 2.7). The direct effect was estimated to increase blood pressure by 1.5 mm Hg (95% CI: -0.3, 2.9), and the indirect effect was -0.3 mm Hg (95% CI: -0.5, -0.1), which is negative because smoking, which is associated with lower weight, is in turn associated with lower blood pressure. These results provide evidence that weight change in fact partially conceals the detrimental effects of cigarette smoking on blood pressure. Our study represents, to our knowledge, the first application of the parametric mediational g-formula in an epidemiologic cohort study (see video abstract at http://links.lww.com/EDE/B159).
2007-11-02
[Extraction-damaged abstract (Nice, France). Recoverable content: the study of the electroencephalographic (EEG) signal contributes to sleep analysis. A surviving reference fragment: [2] O. Meste, A. Amargos, G. Suisse, H. Rix, "Détection automatique de fuseaux de sommeil à l'aide de représentations temps…" (automatic detection of sleep spindles using time… representations).]
Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.
2012-01-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
NASA Astrophysics Data System (ADS)
De la Sen, M.
2008-08-01
This paper discusses linear fractional representations (LFRs) of parameter-dependent nonlinear systems with real-rational nonlinearities and point-delayed dynamics. Sufficient conditions for delay-independent robust global asymptotic stability and for the existence of a robust stabilizing gain-scheduled dynamic controller are investigated via linear matrix inequalities. Such inequalities are obtained from the values of the time-derivatives of appropriate Lyapunov functions at all the vertices of the polytope containing the parametrized uncertainties. The synthesized stabilizing controller interpolates between the stabilizing controllers at the vertices of a certain polytope to which the nonlinear-rational parametrization belongs. Some extensions concerning delay-dependent robust global asymptotic stability are also given. Numerical examples corroborate the usefulness of the proposed formalism and its applicability to practical problems.
Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Buchhave, Lars A.; Collaboration: Kepler Science Team; and others
2012-05-10
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
Robustness analysis for real parametric uncertainty
NASA Technical Reports Server (NTRS)
Sideris, Athanasios
1989-01-01
Some key results in the literature in the area of robustness analysis for linear feedback systems with structured model uncertainty are reviewed, and some new results are given. Model uncertainty is described as a combination of real uncertain parameters and norm-bounded unmodeled dynamics. Here the focus is on the case of parametric uncertainty. An elementary and unified derivation of the celebrated theorem of Kharitonov and the Edge Theorem is presented. Next, an algorithmic approach for robustness analysis is given for the cases of multilinear and polynomic parametric uncertainty (i.e., where the closed-loop characteristic polynomial depends multilinearly and polynomially, respectively, on the parameters). The latter cases are the most important from a practical standpoint. Some novel modifications of this algorithm, which result in a procedure with polynomial-time behavior in the number of uncertain parameters, are outlined. Finally, it is shown how the more general problem of robustness analysis for combined parametric and dynamic (i.e., unmodeled dynamics) uncertainty can be reduced to the case of polynomic parametric uncertainty, and thus be solved by means of the algorithm.
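Kharitonov's theorem mentioned above reduces robust Hurwitz stability of an interval polynomial to checking just four vertex polynomials; a minimal numerical sketch (the interval data are made up for illustration):

```python
import numpy as np

def hurwitz_stable(coeffs_high_first):
    """Stable iff all polynomial roots lie strictly in the open left half-plane."""
    return all(r.real < 0 for r in np.roots(coeffs_high_first))

def kharitonov_stable(lo, hi):
    """Robust Hurwitz stability of the interval polynomial
    p(s) = a0 + a1*s + ... + an*s^n with a_i in [lo[i], hi[i]]
    (coefficients listed lowest degree first; the leading interval must not
    contain zero). By Kharitonov's theorem it suffices to test four vertex
    polynomials built from a period-4 lo/hi coefficient pattern."""
    n = len(lo)
    patterns = [
        (0, 0, 1, 1),  # K1: lo, lo, hi, hi, lo, lo, ...
        (0, 1, 1, 0),  # K2
        (1, 0, 0, 1),  # K3
        (1, 1, 0, 0),  # K4
    ]
    for pat in patterns:
        c = [hi[i] if pat[i % 4] else lo[i] for i in range(n)]
        if not hurwitz_stable(c[::-1]):  # np.roots expects highest degree first
            return False
    return True

# s^3 + [2,3]s^2 + [4,5]s + [1,2]: robustly stable for all coefficient choices.
print(kharitonov_stable([1, 4, 2, 1], [2, 5, 3, 1]))   # True
# Widening a0 to [1,20] breaks robust stability (a Routh condition fails).
print(kharitonov_stable([1, 1, 2, 1], [20, 2, 3, 1]))  # False
```

The Edge Theorem plays the analogous role when the coefficients depend affinely on the parameters, which is where the abstract's algorithmic extensions take over.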
Parametric instabilities in picosecond time scales
Baldis, H.A.; Rozmus, W.; Labaune, C.; Mounaix, Ph.; Pesme, D.; Baton, S.; Tikhonchuk, V.T.
1993-03-01
The coupling of intense laser light with plasmas is a rich field of plasma physics, with many applications. Among these are inertial confinement fusion (ICF), x-ray lasers, particle acceleration, and x-ray sources. Parametric instabilities have been studied for many years because of their importance to ICF; for laser pulses with durations of approximately a nanosecond and laser intensities in the range 10^14 to 10^15 W/cm^2, these instabilities are of crucial concern because of a number of detrimental effects. Although the laser pulse durations of interest for these studies are relatively long, it has become evident in past years that understanding these instabilities requires their characterization and analysis on picosecond time scales. At the laser intensities of interest, the growth time for stimulated Brillouin scattering (SBS) is of the order of picoseconds, and an order of magnitude shorter for stimulated Raman scattering (SRS). In this paper the authors discuss SBS and SRS in the context of their evolution on picosecond time scales. They describe the fundamental concepts associated with their growth and saturation, and recent work on the nonlinear treatment required for modeling these instabilities at high laser intensities.
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; Ostien, Jakob T.; Lai, Zhengshou
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two, and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
Parametric phase diffusion analysis of irregular oscillations
NASA Astrophysics Data System (ADS)
Schwabedal, Justus T. C.
2014-09-01
Parametric phase diffusion analysis (ΦDA), a method to determine variability of irregular oscillations, is presented. ΦDA is formulated as an analysis technique for sequences of Poincaré return times found in numerous applications. The method is unbiased by the arbitrary choice of Poincaré section, i.e. isophase, which causes a spurious component in the Poincaré return times. Other return-time variability measures can be biased drastically by these spurious return times, as shown for the Fano factor of chaotic oscillations in the Rössler system. The empirical use of ΦDA is demonstrated in an application to heart rate data from the Fantasia Database, for which ΦDA parameters successfully classify heart rate variability into groups of age and gender.
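The Fano factor mentioned above can be computed directly from a return-time sequence; a minimal sketch (here taken simply as variance over mean of the return times, one common convention, with synthetic data):

```python
import numpy as np

def fano_factor(return_times):
    """Fano factor of a sequence of Poincare return times: variance over
    mean. As the abstract notes, this measure can be biased drastically by
    spurious return times induced by the arbitrary choice of section."""
    rt = np.asarray(return_times, dtype=float)
    return rt.var() / rt.mean()

# Perfectly regular oscillation: zero variability.
print(fano_factor([1.0, 1.0, 1.0, 1.0]))  # 0.0

# Jittered return times: a small positive Fano factor.
rng = np.random.default_rng(0)
print(fano_factor(1.0 + 0.05 * rng.standard_normal(1000)))
```

ΦDA's contribution is precisely that its parameters, unlike this raw statistic, are invariant to the choice of Poincaré section.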
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2017-07-01
The radio sources within the most recent celestial reference frame (CRF) catalog ICRF2 are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet, such statistics are not applicable unconditionally, and are also ambiguous. Ignoring systematics in the positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm autonomously finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This also allows us to introduce special handling sources into the datum definition, leading on average to 30% more sources in the datum. We find that not only can the CPO (celestial pole offsets) be improved by more than 10% due to the improved geometry, but the station positions, especially in the early years of VLBI, can also benefit greatly.
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-09-01
The radio sources within the most recent celestial reference frame (CRF) catalog ICRF2 are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet, such statistics are not applicable unconditionally, and are also ambiguous. Ignoring systematics in the positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm autonomously finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This also allows us to introduce special handling sources into the datum definition, leading on average to 30% more sources in the datum. We find that not only can the CPO (celestial pole offsets) be improved by more than 10% due to the improved geometry, but the station positions, especially in the early years of VLBI, can also benefit greatly.
Action Quantization, Energy Quantization, and Time Parametrization
NASA Astrophysics Data System (ADS)
Floyd, Edward R.
2017-03-01
A Hamilton-Jacobi representation of quantum mechanics contains, in general, additional information beyond that of the Schrödinger representation. This additional information specifies the microstate of ψ that is incorporated into the quantum reduced action, W. Non-physical solutions of the quantum stationary Hamilton-Jacobi equation for energies that are not Hamiltonian eigenvalues are examined to establish Lipschitz continuity of the quantum reduced action and conjugate momentum. Milne quantization renders the eigenvalue J. The eigenvalues J and E mutually imply each other. Jacobi's theorem generates a microstate-dependent time parametrization, t − τ = ∂_E W, even where the energy E and action variable J are quantized eigenvalues. Substantiating examples are examined in a Hamilton-Jacobi representation, including the linear harmonic oscillator (numerically) and the square well (in closed form). Two byproducts are developed. First, the monotonic behavior of W is shown to ease numerical and analytic computations. Second, a Hamilton-Jacobi representation, quantum trajectories, is shown to reproduce the standard energy quantization formulas of wave mechanics.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CERs), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If such a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
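A minimal sketch of a one-metric CER: fitting a power-law cost estimating relationship, cost = a * mass^b, to historical data of analogous systems by log-log regression. The data below are invented for illustration:

```python
import numpy as np

# Hypothetical historical data: mass (kg) and cost ($M) of analogous systems.
mass = np.array([120.0, 300.0, 450.0, 800.0, 1500.0])
cost = np.array([14.0, 26.0, 35.0, 52.0, 80.0])

# Fit the power-law CER cost = a * mass^b by linear regression in log space:
# log(cost) = log(a) + b * log(mass).
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

def cer(m):
    """Cost estimating relationship: predicted cost ($M) from mass (kg)."""
    return a * m**b

print(f"cost ~= {a:.2f} * mass^{b:.2f}")
print(f"predicted cost at 1000 kg: {cer(1000.0):.1f} $M")
```

An exponent b below 1 expresses the familiar economy of scale: cost grows more slowly than mass.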
A general framework for parametric survival analysis.
Crowther, Michael J; Lambert, Paul C
2014-12-30
Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions of the baseline hazard function, such as monotonicity, which is often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013), to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular, illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.
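As a baseline for the framework described above, a minimal sketch of parametric survival estimation: maximum likelihood for a Weibull hazard with right censoring on simulated data (the paper's restricted-cubic-spline log-hazard extension is not shown):

```python
import numpy as np
from scipy.optimize import minimize

# Simulated survival data: Weibull(shape=1.5, scale=2.0) event times,
# administratively censored at t = 3.
rng = np.random.default_rng(1)
t_true = rng.weibull(1.5, size=500) * 2.0
event = t_true <= 3.0              # True = event observed, False = censored
t_obs = np.minimum(t_true, 3.0)

def neg_log_lik(params):
    """Weibull log-likelihood with right censoring: an observed event
    contributes log f(t) = log h(t) + log S(t); a censored observation
    contributes only log S(t)."""
    k, lam = np.exp(params)  # optimize on the log scale to keep both positive
    log_h = np.log(k / lam) + (k - 1) * np.log(t_obs / lam)  # log hazard
    log_S = -((t_obs / lam) ** k)                            # log survival
    return -(np.sum(log_h[event]) + np.sum(log_S))

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"shape ~= {shape_hat:.2f}, scale ~= {scale_hat:.2f}")  # near 1.5 and 2.0
```

The restrictive monotone-hazard assumption the abstract criticizes is visible here: a Weibull hazard can only rise or fall, which is what the spline-based log-hazard model relaxes.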
Parametric analysis of atmospheric processes
NASA Astrophysics Data System (ADS)
Elshamy, M.
1993-04-01
The different phases of the space shuttle mission operations and systems analyses are influenced by several random perturbations due to the dynamics of atmospheric processes. From the mission planning point of view, there are a few atmospheric conditions of interest, such as thunderstorms, precipitation, cloud ceiling, peak surface wind speed, etc. These atmospheric conditions, called parameters, are actually constraints on the mission operations. An atmospheric parameter is a random variable which attains either a permissible or a non-permissible value. As such, each of these atmospheric parameters is assigned the value of either 0 or 1, for GO or NOGO outcomes, respectively. These atmospheric parameters are inherently dependent random variables. An important part of mission planning is being able to provide, ahead of time, a good assessment of GO/NOGO status for the different atmospheric parameters as well as conditional probabilities involving GO and/or NOGO outcomes. Specifically, it is of interest to effectively address certain questions pertaining to the assigned constraints during the different mission phases of the space vehicle. The questions of interest involve: (1) the probability that the assigned atmospheric constraints will (or will not) occur during a particular time; (2) the probability that the assigned atmospheric constraints will (or will not) occur for N consecutive days, at a particular time of the day; (3) given that the assigned constraints have occurred (or have not occurred) for n consecutive days, at a particular time of the day, what is the probability that the constraints will continue for N additional days?; (4) the probabilities of runs of GO and NOGO outcomes; and (5) estimating certain conditional probabilities involving GO and/or NOGO outcomes. Effectively addressing and giving specific answers to these types of questions is useful in many ways: (1) to determine design criteria for the space vehicle; (2) to establish flight operational rules; and
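Questions (2) and (3) above have closed forms under the simplest model of dependent daily outcomes, a first-order Markov chain; the abstract does not specify its model, so the persistence probabilities below are illustrative assumptions:

```python
def p_continue_nogo(p11, extra_days):
    """Question (3): given NOGO has held at this hour of the day, the chance
    it persists for extra_days more days, under a first-order Markov chain
    with P(NOGO tomorrow | NOGO today) = p11."""
    return p11 ** extra_days

def p_nogo_run(pi1, p11, run_days):
    """Question (2): probability of NOGO on run_days consecutive days, where
    pi1 is the climatological (stationary) probability of NOGO on any day."""
    return pi1 * p11 ** (run_days - 1)

print(round(p_continue_nogo(0.6, 3), 3))   # 0.216
print(round(p_nogo_run(0.3, 0.6, 3), 3))   # 0.108
```

Note how persistence (p11 > pi1) makes runs far more likely than the independence assumption pi1^run_days would suggest, which is the practical point of treating the parameters as dependent random variables.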
Parametric resonance in the early Universe—a fitting analysis
NASA Astrophysics Data System (ADS)
Figueroa, Daniel G.; Torrentí, Francisco
2017-02-01
Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. To remove this obstacle in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
Time reversal of parametrical driving and the stability of the parametrically excited pendulum
NASA Astrophysics Data System (ADS)
Stannarius, Ralf
2009-02-01
It is well known that the periodic driving of a parametrically excited pendulum can stabilize or destabilize its stationary states, depending upon the frequency, wave form, and amplitude of the parameter modulations. We discuss the effect of time reversal of the periodic driving function for the parametric pendulum at small elongations. Such a time reversal usually leads to different solutions of the equations of motion and to different stability properties of the system. Two interesting exceptions are discussed, and two conditions are formulated for which the character of the solutions of the system is not influenced by a time reversal of the driving function, even though the trajectories of the dynamic variables are different.
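The effect of reversing the driving function can be checked numerically on the linearized parametric pendulum theta'' = -(1 + eps*f(t))*theta; the asymmetric modulation waveform below is an arbitrary choice for illustration, not one from the paper:

```python
import math

def simulate(theta0, eps, drive, dt=0.005, steps=4000):
    """RK4 integration of theta'' = -(1 + eps*drive(t)) * theta,
    started from rest at angle theta0; returns theta at the final time."""
    th, v, t = theta0, 0.0, 0.0
    acc = lambda t, th: -(1.0 + eps * drive(t)) * th
    for _ in range(steps):
        k1x, k1v = v, acc(t, th)
        k2x, k2v = v + dt / 2 * k1v, acc(t + dt / 2, th + dt / 2 * k1x)
        k3x, k3v = v + dt / 2 * k2v, acc(t + dt / 2, th + dt / 2 * k2x)
        k4x, k4v = v + dt * k3v, acc(t + dt, th + dt * k3x)
        th += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return th

# An asymmetric periodic modulation and its time reversal.
drive = lambda t: math.sin(2 * t) + 0.5 * math.sin(4 * t)
drive_rev = lambda t: drive(-t)

a = simulate(0.1, 0.3, drive)
b = simulate(0.1, 0.3, drive_rev)
print(abs(a - b) > 1e-6)  # the two drivings generally yield different motion
```

For a purely sinusoidal drive the two trajectories would coincide up to a time shift; the multi-harmonic waveform breaks that symmetry, which is the generic situation the abstract describes.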
Semi-parametric estimation in failure time mixture models.
Taylor, J M
1995-09-01
A mixture model is an attractive approach for analyzing failure time data in which there are thought to be two groups of subjects, those who could eventually develop the endpoint and those who could not develop the endpoint. The proposed model is a semi-parametric generalization of the mixture model of Farewell (1982). A logistic regression model is proposed for the incidence part of the model, and a Kaplan-Meier type approach is used to estimate the latency part of the model. The estimator arises naturally out of the EM algorithm approach for fitting failure time mixture models as described by Larson and Dinse (1985). The procedure is applied to some experimental data from radiation biology and is evaluated in a Monte Carlo simulation study. The simulation study suggests the semi-parametric procedure is almost as efficient as the correct fully parametric procedure for estimating the regression coefficient in the incidence, but less efficient for estimating the latency distribution.
Large-scale parametric survival analysis.
Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S
2013-10-15
Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or a few thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data, where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
mu analysis with real parametric uncertainty
NASA Technical Reports Server (NTRS)
Young, Peter M.; Newlin, Matthew P.; Doyle, John C.
1991-01-01
The authors give a broad overview, from an LFT (linear fractional transformation)/mu perspective, of some of the theoretical and practical issues associated with robustness in the presence of real parametric uncertainty, with a focus on computation. Recent results on the properties of mu in the mixed case are reviewed, including issues of NP-completeness, continuity, computation of bounds, the equivalence of mu and its bounds, and some direct comparisons with Kharitonov-type analysis methods. In addition, some advances in the computational aspects of the problem, including a novel branch and bound algorithm, are briefly presented together with numerical results. The results suggest that while the mixed mu problem may have inherently combinatoric worst-case behavior, practical algorithms with modest computational requirements can be developed for problems of medium size (less than 100 parameters) that are of engineering interest.
Parametric systems analysis for tandem mirror hybrids
Lee, J.D.; Chapin, D.L.; Chi, J.W.H.
1980-09-01
Fusion-fission systems, consisting of fissile-producing fusion hybrids that combine a tandem mirror fusion driver with various blanket types, and net fissile-consuming LWRs, have been modeled and analyzed parametrically. Analysis to date indicates that hybrids can be competitive with mined uranium when the U3O8 cost is about $100/lb, adding less than 25% to the present-day cost of power from LWRs. Of the three blanket types considered, uranium fast fission (UFF), thorium fast fission (ThFF), and thorium fission-suppressed (ThFS), the ThFS blanket has a modest economic advantage under most conditions but has higher support ratios and potential safety advantages under all conditions.
Parametric time delay modeling for floating point units
NASA Astrophysics Data System (ADS)
Fahmy, Hossam A. H.; Liddicoat, Albert A.; Flynn, Michael J.
2002-12-01
A parametric time delay model to compare floating point unit implementations is proposed. This model is used to compare a previously proposed floating point adder using a redundant number representation with other high-performance implementations. The operand width, the fan-in of the logic gates and the radix of the redundant format are used as parameters to the model. The comparison is done over a range of operand widths, fan-in and radices to show the merits of each implementation.
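The parametric modeling style described above can be illustrated with generic, textbook-style delay expressions (these functional forms and constants are assumptions for illustration, not the paper's fitted equations):

```python
import math

def cpa_delay(width, fanin, gate_delay=1.0):
    """Illustrative carry-lookahead-style adder delay: grows with the
    fan-in-limited tree depth, i.e., with log_fanin(width)."""
    return gate_delay * (2 * math.ceil(math.log(width, fanin)) + 2)

def redundant_delay(width, fanin, radix, gate_delay=1.0):
    """Illustrative carry-free redundant adder delay: depends on the digit
    size set by the radix, but not on the operand width."""
    return gate_delay * (math.ceil(math.log(radix, fanin)) + 3)

for w in (24, 53, 113):  # single, double, quad significand widths
    print(w, cpa_delay(w, 4), redundant_delay(w, 4, 4))
```

The qualitative conclusion such a model exposes matches the abstract's framing: sweeping operand width, fan-in and radix shows where the width-independent redundant representation overtakes a conventional adder.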
Model reduction for parametric instability analysis in shells conveying fluid
NASA Astrophysics Data System (ADS)
Kochupillai, Jayaraj; Ganesan, N.; Padmanabhan, Chandramouli
2003-05-01
Flexible pipes conveying fluid are often subjected to parametric excitation due to time-periodic flow fluctuations. Such systems are known to exhibit complex instability phenomena such as divergence and coupled-mode flutter. Investigators have typically used weighted residual techniques to reduce the continuous system model into a discrete model, based on approximation functions with global support, for carrying out stability analysis. While this approach is useful for straight pipes, modelling based on FEM is needed for the study of complicated piping systems, where the approximation functions used are local in support. However, the size of the problem is now significantly larger, and for computationally efficient stability analysis, model reduction is necessary. In this paper, model reduction techniques are developed for the analysis of parametric instability in flexible pipes conveying fluids under a mean pressure. It is shown that only those linear transformations which leave the original eigenvalues of the linear time-invariant system unchanged are admissible. The numerical technique developed by Friedmann and Hammond (Int. J. Numer. Methods Eng. 11 (1977) 1117) is used for the stability analysis. One of the key research issues is to establish criteria for deciding the basis vectors essential for an accurate stability analysis. This paper examines this issue in detail and proposes new guidelines for their selection.
Trend Analysis of Golestan's Rivers Discharges Using Parametric and Non-parametric Methods
NASA Astrophysics Data System (ADS)
Mosaedi, Abolfazl; Kouhestani, Nasrin
2010-05-01
Climate change and its consequences are among the major problems affecting human life, and one consequence is change in river discharges. The aim of this research is to analyze trends in the seasonal and annual river discharges of Golestan province (Iran). In this research, four trend analysis methods, including conjunction point, linear regression, Wald-Wolfowitz, and Mann-Kendall, were applied to river discharges over seasonal and annual periods at significance levels of 95% and 99%. First, daily discharge data of 12 hydrometric stations with a length of 42 years (1965-2007) were selected; after some common statistical tests, such as homogeneity tests (applying the G-B and M-W tests), the four mentioned trend analysis tests were applied. Results show that in all stations, for the summer time series, there are decreasing trends at a significance level of 99% according to the Mann-Kendall (M-K) test. For the autumn time series, all four methods gave similar results. For the other periods, the results of the four tests were broadly similar, although for some stations they differed. Keywords: Trend Analysis, Discharge, Non-parametric methods, Wald-Wolfowitz, The Mann-Kendall test, Golestan Province.
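The Mann-Kendall test used above is compact enough to sketch in full; this version omits the tie correction to the variance, an assumption that holds for continuous discharge data without repeated values:

```python
import math

def mann_kendall(x, z_crit=2.576):
    """Mann-Kendall trend test (no tie correction). z_crit = 2.576 gives a
    two-sided 99% significance level; returns (S, Z, trend)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    if z > z_crit:
        return s, z, "increasing"
    if z < -z_crit:
        return s, z, "decreasing"
    return s, z, "none"

# A steadily declining summer-discharge series flags a decreasing trend.
series = [42 - 0.8 * t for t in range(20)]
print(mann_kendall(series)[2])  # decreasing
```

Being rank-based, the test depends only on the signs of pairwise differences, which is why it is robust to the skewed, non-normal distributions typical of discharge records.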
Lottery spending: a non-parametric analysis.
Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody
2015-01-01
We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales.
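The disputed "top 10% account for 50% of sales" claim reduces to a simple order statistic of the spending sample; a minimal version of that computation, on a made-up sample, looks like:

```python
def top_share(spending, top_frac=0.10):
    """Fraction of total spending accounted for by the top top_frac of
    respondents (here: sort, take the top decile, divide by the total)."""
    s = sorted(spending, reverse=True)
    k = max(1, int(round(top_frac * len(s))))
    total = sum(s)
    return sum(s[:k]) / total if total else 0.0

# Hypothetical monthly spending reports: mostly small, one heavy player.
sample = [0, 0, 1, 1, 2, 2, 3, 5, 5, 80]
print(round(top_share(sample), 2))  # 0.81
```

A non-parametric check of the paper's kind would recompute this statistic on bootstrap resamples of each survey and compare the resulting intervals across surveys, rather than assuming any spending distribution.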
Parametric analysis of open plan offices
NASA Astrophysics Data System (ADS)
Nogueira, Flavia F.; Viveiros, Elvira B.
2002-11-01
The workspace has been undergoing many changes. Open plan offices are being favored instead of ones of traditional design. In such offices, workstations are separated by partial height barriers, which allow a certain degree of visual privacy and some sound insulation. The challenge in these offices is to provide acoustic privacy for the workstations. Computer simulation was used as a tool for this investigation. Two simple models were generated and their results compared to experimental data measured in two real offices. After validating the approach, models with increasing complexity were generated. Lastly, an ideal office with 64 workstations was created and a parametric survey performed. Nine design parameters were taken as variables and the results are discussed in terms of sound pressure level, in octave bands, and intelligibility index.
Quantitative analysis and parametric display of regional myocardial mechanics
NASA Astrophysics Data System (ADS)
Eusemann, Christian D.; Bellemann, Matthias E.; Robb, Richard A.
2000-04-01
Quantitative assessment of regional heart motion has significant potential for more accurate diagnosis of heart disease and/or cardiac irregularities. Local heart motion may be studied from medical imaging sequences. Using functional parametric mapping, regional myocardial motion during a cardiac cycle can be color mapped onto a deformable heart model to obtain better understanding of the structure-to-function relationships in the myocardium, including regional patterns of akinesis or diskinesis associated with ischemia or infarction. In this study, 3D reconstructions were obtained from the Dynamic Spatial Reconstructor at 15 time points throughout one cardiac cycle of pre-infarct and post-infarct hearts. Deformable models were created from the 3D images for each time point of the cardiac cycles. From these polygonal models, regional excursions and velocities of each vertex representing a unit of myocardium were calculated for successive time intervals. The calculated results were visualized through model animations and/or specially formatted static images. The time point of regional maximum velocity and excursion of myocardium through the cardiac cycle was displayed using color mapping. The absolute values of regional maximum velocity and maximum excursion were displayed in a similar manner. Using animations, the local myocardial velocity changes were visualized as color changes on the cardiac surface during the cardiac cycle. Moreover, the magnitude and direction of motion for individual segments of myocardium could be displayed. Comparison of these dynamic parametric displays suggests that the ability to encode quantitative functional information on dynamic cardiac anatomy enhances the diagnostic value of 4D images of the heart. Myocardial mechanics quantified this way adds a new dimension to the analysis of cardiac functional disease, including regional patterns of akinesis and diskinesis associated with ischemia and infarction. Similarly, disturbances in
A parametric estimation procedure for relapse time distributions.
Ahlström, L; Olsson, M; Nerman, O
1999-06-01
In a relapse clinical trial, patients who have recovered from some recurrent disease (e.g., ulcer or cancer) are examined at a number of predetermined times. A relapse can be detected either at one of these planned inspections or at a spontaneous visit initiated by the patient because of symptoms. In the first case the observation of the time to relapse, X, is interval-censored by two predetermined time-points. In the second case the upper endpoint of the interval is an observation of the time to symptoms, Y. To model the progression of the disease we use a partially observable Markov process. This approach results in a bivariate phase-type distribution for the joint distribution of (X, Y). It is a flexible model which contains several natural distributions for X, and allows the conditional distributions of the marginals to depend smoothly on each other. To estimate the distributions involved we develop an EM algorithm. The estimation procedure is evaluated and compared with a non-parametric method in a couple of examples based on simulated data.
Soil Analysis using the semi-parametric NAA technique
Zamboni, C. B.; Silveira, M. A. G.; Medina, N. H.
2007-10-26
The semi-parametric Neutron Activation Analysis technique, using Au as a flux monitor, was applied to measure the element concentrations of Br, Ca, Cl, K, Mn and Na for soil characterization. The results were compared with those obtained using the Instrumental Neutron Activation Analysis technique and were found to be compatible. The viability, advantages, and limitations of using these two analytical methodologies are discussed.
Non-parametric estimation of gap time survival functions for ordered multivariate failure time data.
Schaubel, Douglas E; Cai, Jianwen
2004-06-30
Times between sequentially ordered events (gap times) are often of interest in biomedical studies. For example, in a cancer study, the gap times from incidence-to-remission and remission-to-recurrence may be examined. Such data are usually subject to right censoring, and within-subject failure times are generally not independent. Statistical challenges in the analysis of the second and subsequent gap times include induced dependent censoring and non-identifiability of the marginal distributions. We propose a non-parametric method for constructing one-sample estimators of conditional gap-time specific survival functions. The estimators are uniformly consistent and, upon standardization, converge weakly to a zero-mean Gaussian process, with a covariance function which can be consistently estimated. Simulation studies reveal that the asymptotic approximations are appropriate for finite samples. Methods for confidence bands are provided. The proposed methods are illustrated on a renal failure data set, where the probabilities of transplant wait-listing and kidney transplantation are of interest.
Parametric analysis of a magnetized cylindrical plasma
Ahedo, Eduardo
2009-11-15
The relevant macroscopic model, the spatial structure, and the parametric regimes of a low-pressure plasma confined by a cylinder and an axial magnetic field are discussed for the small-Debye-length limit, making use of asymptotic techniques. The plasma response is fully characterized by three dimensionless parameters, related to the electron gyroradius and the electron and ion collision mean-free-paths. There are the unmagnetized regime, the main magnetized regime, and, for a low electron-collisionality plasma, an intermediate-magnetization regime. In the magnetized regimes, electron azimuthal inertia is shown to be a dominant phenomenon in part of the quasineutral plasma region and to set up before ion radial inertia. In the main magnetized regime, the plasma structure consists of a bulk diffusive region, a thin layer governed by electron inertia, a thinner sublayer controlled by ion inertia, and the non-neutral Debye sheath. The solution of the main inertial layer yields that the electron azimuthal energy near the wall is larger than the electron thermal energy, making electron resistivity effects non-negligible. The electron Boltzmann relation is satisfied only in the very vicinity of the Debye sheath edge. Ion collisionality effects are irrelevant in the magnetized regime. Simple scaling laws for plasma production and particle and energy fluxes to the wall are derived.
Bayesian parametric estimation of stop-signal reaction time distributions.
Matzke, Dora; Dolan, Conor V; Logan, Gordon D; Brown, Scott D; Wagenmakers, Eric-Jan
2013-11-01
The cognitive concept of response inhibition can be measured with the stop-signal paradigm. In this paradigm, participants perform a 2-choice response time (RT) task where, on some of the trials, the primary task is interrupted by a stop signal that prompts participants to withhold their response. The dependent variable of interest is the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Based on the horse race model (Logan & Cowan, 1984), several methods have been developed to estimate SSRTs. None of these approaches allow for the accurate estimation of the entire distribution of SSRTs. Here we introduce a Bayesian parametric approach that addresses this limitation. Our method is based on the assumptions of the horse race model and rests on the concept of censored distributions. We treat response inhibition as a censoring mechanism, where the distribution of RTs on the primary task (go RTs) is censored by the distribution of SSRTs. The method assumes that go RTs and SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to obtain posterior distributions for the model parameters. The method can be applied to individual as well as hierarchical data structures. We present the results of a number of parameter recovery and robustness studies and apply our approach to published data from a stop-signal experiment.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
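The two-loop structure described above (parameter draw per replicate, annual draw per time step) can be sketched with a stochastic log-growth model; the growth rate, its standard error, and the quasi-extinction threshold below are illustrative assumptions, not piping plover estimates:

```python
import math
import random

def pva_extinction(parametric, n_reps=2000, n_years=25, n0=100.0,
                   mu_r=-0.05, se_r=0.08, sd_year=0.10, seed=1):
    """Two-loop projection: the growth rate is drawn once per replicate
    (parametric uncertainty), and an annual deviate is drawn at every
    time step (temporal variance). Returns the quasi-extinction fraction."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(n_reps):
        r = rng.gauss(mu_r, se_r) if parametric else mu_r  # replication loop
        n = n0
        for _ in range(n_years):                           # time-step loop
            n *= math.exp(rng.gauss(r, sd_year))
            if n < 2.0:                                    # quasi-extinction
                extinct += 1
                break
    return extinct / n_reps

p_with = pva_extinction(parametric=True)
p_without = pva_extinction(parametric=False)
print(p_without < p_with)  # ignoring parametric uncertainty understates risk
```

The comparison mirrors the paper's finding: replicates that happen to draw a poor growth rate carry it for all 25 years, so estimated extinction risk rises sharply once the parameter draw is included.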
Application of Parametric Models to a Survival Analysis of Hemodialysis Patients
Montaseri, Maryam; Charati, Jamshid Yazdani; Espahbodi, Fateme
2016-01-01
Background: Hemodialysis is the most common renal replacement therapy in patients with end-stage renal disease (ESRD). Objectives: The present study compared the performance of various parametric models in a survival analysis of hemodialysis patients. Methods: This study consisted of 270 hemodialysis patients who were referred to Imam Khomeini and Fatima Zahra hospitals between November 2007 and November 2012. The Akaike information criterion (AIC) and residuals review were used to compare the performance of the parametric models. The computations were done using STATA software, with significance accepted at a level of 0.05. Results: The results of a multivariate analysis of the variables in the parametric models showed that the mean serum albumin and the clinic attended were the most important predictors of the survival of the hemodialysis patients (P < 0.05). Among the parametric models tested, the results indicated that the performance of the Weibull model was the highest. Conclusions: Parametric models may provide complementary data for clinicians and researchers about how risks vary over time. The Weibull model seemed to show the best fit among the parametric models of the survival of hemodialysis patients. PMID:27896235
Fu, Chuanyun; Zhang, Yaping; Bie, Yiming; Hu, Liwei
2016-10-01
Countdown timers display the time left on the current signal, which makes drivers more ready to react to the phase change. However, previous studies have rarely explored the effect of countdown timers on drivers' brake perception-reaction time (BPRT) to the yellow light. The goal of this study was therefore to characterize and model drivers' BPRT to the yellow signal at signalized intersections with and without countdown timers. BPRT data for "first-to-stop" vehicles after yellow onset within the transitional zone were collected through on-site observation at six signalized intersections in Harbin, China. Statistical analysis showed that the observed 15th, 50th, and 85th percentile BPRTs without a countdown timer were 0.52, 0.84, and 1.26 s, respectively. The observed 15th, 50th, and 85th percentile BPRTs with a countdown timer were 0.32, 1.20, and 2.52 s, respectively. A Log-logistic distribution appeared to best fit the BPRT without a countdown timer, while a Weibull distribution seemed to best fit the BPRT with a countdown timer. Accordingly, a Log-logistic accelerated failure time (AFT) duration model was developed for drivers' BPRT without a countdown timer, whereas a Weibull AFT duration model was established for drivers' BPRT with a countdown timer. Three significant factors affecting the BPRT identified in both AFT models were yellow-onset distance from the stop line, yellow-onset approach speed, and deceleration rate. Whether or not a countdown timer was present, BPRT increased as yellow-onset distance to the stop line or deceleration rate increased, but decreased as yellow-onset speed increased. The impairment of drivers' BPRT due to the countdown timer appeared to increase with yellow-onset distance to the stop line or deceleration rate, but decrease with yellow-onset speed. An increase in drivers' BPRT because of the countdown timer may induce risky driving behaviors (i.e., stopping abruptly, or even violating the traffic signal), revealing a weakness of
Estimation of the blood Doppler frequency shift by a time-varying parametric approach.
Girault, J M; Kouamé, D; Ouahabi, A; Patat, F
2000-03-01
Doppler ultrasound is widely used in medical applications to extract the blood Doppler flow velocity in the arteries via spectral analysis. The spectral analysis of non-stationary signals and particularly Doppler signals requires adequate tools that should present both good time and frequency resolutions. It is well-known that the most commonly used time-windowed Fourier transform, which provides a time-frequency representation, is limited by the intrinsic trade-off between time and frequency resolutions. Parametric methods have then been introduced as an alternative to overcome this resolution problem. However, the performance of those methods deteriorates when high non-stationarities are present in the Doppler signal. For the purpose of accurately estimating the Doppler frequency shift, even when the temporal flow velocity is rapid (high non-stationarity), we propose to combine the use of the time-varying autoregressive (AR) method and the (dominant) pole frequency. This proposed method performs well in the context where non-stationarities are very high. A comparative evaluation has been made between classical (FFT based) and AR (both block and recursive) algorithms. Among recursive algorithms we test an adaptive recursive method as well as a time-varying recursive method. Finally, the superiority of the time-varying parametric approach in terms of frequency tracking and delay in the frequency estimate is illustrated for both simulated and in vivo Doppler signals.
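The dominant-pole idea described above can be illustrated with a block-wise AR(2) fit (a Yule-Walker stand-in for the paper's time-varying recursive estimator) tracking a synthetic frequency sweep; the signal and block sizes are assumptions for illustration:

```python
import numpy as np

def ar2_dominant_freq(x, fs):
    """Fit AR(2) by Yule-Walker and return the frequency (Hz) of the
    dominant complex pole of the fitted model."""
    x = x - x.mean()
    r = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]
    a1, a2 = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], [r[1], r[2]])
    poles = np.roots([1.0, -a1, -a2])          # roots of z^2 - a1*z - a2
    p = poles[np.argmax(np.abs(poles))]
    return abs(np.angle(p)) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
chirp = np.cos(2 * np.pi * (50 * t + 40 * t ** 2))  # 50 -> 130 Hz sweep
# Track the Doppler-like shift on short sliding blocks.
freqs = [ar2_dominant_freq(chirp[i:i + 100], fs) for i in (0, 450, 900)]
print([round(f) for f in freqs])
```

Because the pole angle is read off a short local model rather than a long windowed spectrum, the estimate follows a rapid frequency sweep with far less smearing, which is the trade-off the abstract highlights.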
A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series
NASA Astrophysics Data System (ADS)
Rovira, F.; Palau, J. L.; Millán, M.
2009-09-01
Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (e.g., global radiation, air temperature above the surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. This methodology combines the continuous wavelet transform with the development of a parametric model that includes the time evolution of the principal and most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. Once we have assured the level of significance of the different waves composing the parametric model, we can obtain the statistically significant principal harmonics (in time) of the original
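The core parametric step above, reproducing a noisy series as a finite sum of sinusoids with fitted amplitude and phase, reduces to linear least squares once the component frequencies are fixed; in the paper those come from the wavelet analysis, whereas here they are simply assumed (annual and semi-annual cycles on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365.0)
# Synthetic "measurement": mean + annual + semi-annual waves + noise.
signal = (10 + 4 * np.sin(2 * np.pi * t / 365 + 0.3)
          + 1.5 * np.sin(2 * np.pi * t / 182.5 + 1.1))
noisy = signal + rng.normal(0, 1.0, t.size)

freqs = [1 / 365, 1 / 182.5]                 # cycles/day, assumed known
cols = [np.ones_like(t)]
for f in freqs:                              # a*sin + b*cos per harmonic
    cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)

amp_annual = np.hypot(coef[1], coef[2])
print(round(amp_annual, 1))  # close to the true amplitude of 4
```

Fitting a*sin + b*cos per frequency keeps the problem linear; amplitude and phase are then recovered as hypot(a, b) and atan2(b, a), and a t-test on each amplitude decides whether the wave stays in the model.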
Eldawud, Reem; Wagner, Alixandra; Dong, Chenbo; Rojansakul, Yon; Dinu, Cerasela Zoica
2016-01-01
Single-walled carbon nanotube (SWCNT) implementation in a variety of biomedical applications, from bioimaging to controlled drug delivery and cellular-directed alignment for muscle myofiber fabrication, has raised awareness of their potential toxicity. Nanotubes' structural aspects, which resemble asbestos, as well as their ability to induce cyto- and genotoxicity upon interaction with biological systems by generating reactive oxygen species or inducing membrane damage, just to name a few, have led to focused efforts aimed at assessing the associated risks prior to their implementation. In this study, we employed a non-invasive and real-time electric cell impedance sensing (ECIS) platform to monitor the behavior of lung epithelial cells upon exposure to a library of SWCNTs with user-defined physicochemical properties. Using the natural sensitivity of the cells, we evaluated SWCNT-induced cellular changes in relation to cell attachment, cell–cell interactions and cell viability, respectively. Our methods have the potential to lead to the development of standardized assays for risk assessment of other nanomaterials, as well as risk differentiation based on the nanomaterials' surface chemistry, purity and agglomeration state. PMID:25913448
Parametric-time coherent states for the generalized MIC-Kepler system
Uenal, Nuri
2006-12-15
In this study, we construct the parametric-time coherent states for the negative energy states of the generalized MIC-Kepler system, in which a charged particle is in a monopole vector potential, a Coulomb potential, and a Bohm-Aharonov potential. We transform the system into four isotropic harmonic oscillators and construct the parametric-time coherent states for these oscillators. Finally, we compactify these states into the physical time coherent states for the generalized MIC-Kepler system.
Early Life Cycle Cost Trade Study By Parametric Analysis
NASA Astrophysics Data System (ADS)
Dehm, Roy; Patrakis, Stan
1982-06-01
Unit production cost and life cycle cost trade-study considerations are basic to the affordability of a new product. A major portion of the life cycle cost of a product, including production cost, is found to result from decisions made early in the planning phases of a program. Computerized parametric cost modeling generates cost estimates using the information that is available before the development of engineering detail. The RCA PRICE program, available to all potential users, is used to illustrate the input requirements and steps necessary for parametric estimating of costs for development, production and support in the life cycle of a product. A laser rangefinder is used as a product example to show the utility of this analysis.
From wavelets to adaptive approximations: time-frequency parametrization of EEG
Durka, Piotr J
2003-01-01
This paper presents a summary of time-frequency analysis of the electrical activity of the brain (EEG). It covers in detail two major steps: the introduction of wavelets and adaptive approximations. The presented studies include time-frequency solutions to several standard research and clinical problems encountered in the analysis of evoked potentials, sleep EEG, epileptic activities, ERD/ERS and pharmaco-EEG. Based upon these results we conclude that the matching pursuit algorithm provides a unified parametrization of EEG, applicable in a variety of experimental and clinical setups. This conclusion is followed by a brief discussion of the current state of the mathematical and algorithmic aspects of adaptive time-frequency approximations of signals. PMID:12605721
Sun, Qibing; Liu, Hongjun; Huang, Nan; Long, Hanbo; Wen, Jin; Zhu, Shaolan; Zhao, Wei
2010-02-01
Numerical simulation and analysis of the influence of the time modulation of the pump laser caused by mode beating on the optical parametric process are presented, with OPA and OPG as examples. It is shown that the output power of the generated beams from the optical parametric process is modulated in the time domain and exhibits large power fluctuations when a Q-switched laser oscillating on several random longitudinal modes is used as the pump laser. Irregular spike sequences of the generated beams are observed. We also find that the output power of the light from the optical parametric process becomes more stable and exhibits less fluctuation when the number of longitudinal modes (n) increases.
160-Gb/s optical time division multiplexing and multicasting in parametric amplifiers.
Brès, Camille-Sophie; Wiberg, Andreas O J; Coles, James; Radic, Stojan
2008-10-13
We report the generation of an optical time division multiplexed single data channel at 160 Gb/s using a one-pump fiber-optic parametric amplifier, and its subsequent multicasting. A two-pump fiber-optic parametric amplifier was used to perform all-optical multicasting of the 160 Gb/s channel to four data streams. The new processing scheme combined an increase in signal extinction ratio with low-impairment multicasting using continuous-wave parametric pumps. Selective conjugation of the 160 Gb/s channel was demonstrated for the first time.
Saarela, Olli; Liu, Zhihui Amy
2016-10-15
Marginal structural Cox models are used for quantifying marginal treatment effects on outcome event hazard function. Such models are estimated using inverse probability of treatment and censoring (IPTC) weighting, which properly accounts for the impact of time-dependent confounders, avoiding conditioning on factors on the causal pathway. To estimate the IPTC weights, the treatment assignment mechanism is conventionally modeled in discrete time. While this is natural in situations where treatment information is recorded at scheduled follow-up visits, in other contexts, the events specifying the treatment history can be modeled in continuous time using the tools of event history analysis. This is particularly the case for treatment procedures, such as surgeries. In this paper, we propose a novel approach for flexible parametric estimation of continuous-time IPTC weights and illustrate it in assessing the relationship between metastasectomy and mortality in metastatic renal cell carcinoma patients. Copyright © 2016 John Wiley & Sons, Ltd.
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed which studied the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal-derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal-fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier while Reference Plant 3 used a 'state of the art' atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the Reference Plants. Parametric variations include the type of coal (Montana Rosebud or Illinois No. 6), clean-up systems (hot or cold gas clean-up), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In one task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. A second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
NASA Astrophysics Data System (ADS)
Scaramuzzino, F.
2009-09-01
This paper considers a qualitative analysis of the solution of a pure exchange general economic equilibrium problem according to two independent parameters. Some recent results obtained by the author in the static and dynamic cases are collected. These results have been applied in a particular parametric case: attention is focused on a numerical application for which the existence of the solution of the time-dependent parametric variational inequality that describes the equilibrium conditions has been proved by means of the direct method. Using MATLAB computation after a linear interpolation, the equilibrium curves have been visualized.
NASA Astrophysics Data System (ADS)
Spiridonakos, M. D.; Fassois, S. D.
2009-08-01
The problem of parametric output-only identification of a time-varying structure based on vector random vibration signal measurements is considered. A functional series vector time-dependent autoregressive moving average (FS-VTARMA) method is introduced and employed for the identification of a "bridge-like" laboratory structure consisting of a beam and a moving mass. The identification is based on three simultaneously measured vibration response signals obtained during a single experiment. The method is judged against baseline modelling based on multiple "frozen-configuration" stationary experiments, and is shown to be effective and capable of accurately tracking the dynamics. Additional comparisons with a recursive pseudo-linear regression VTARMA (PLR-VTARMA) method and a short time canonical variate analysis (ST-CVA) subspace method are made and demonstrate the method's superior achievable accuracy and model parsimony.
Fully parametric imaging with reversible tracer (18)F-FLT within a reasonable time.
Kudomi, Nobuyuki; Maeda, Yukito; Hatakeyama, Tetsuhiro; Yamamoto, Yuka; Nishiyama, Yoshihiro
2017-03-01
PET enables quantitative imaging of the rate constants K1, k2, k3, and k4 with a reversible two-tissue compartment model (2TCM). A new method is proposed for computing all of these rates within a reasonable time, less than 1 min. A set of differential equations for the reversible 2TCM was converted into a single formula consisting of differential and convolution terms. The validity was tested on clinical data with (18)F-FLT PET for patients with glioma (n = 39). Parametric images were generated with the formula that was developed. Parametric values were extracted from regions of interest (ROIs) for glioma from the images generated, and they were compared with those obtained with the non-linear fitting method. We performed simulation studies for testing accuracy by generating simulated images, assuming clinically expected ranges of the parametric values. The computation time was about 20 s, and the quality of the images generated was acceptable. The values obtained for K1 for grade IV tumor were 0.24 ± 0.23 and 0.26 ± 0.25 ml min(-1) g(-1) for the image-based and ROI-based methods, respectively. The values were 0.21 ± 0.12 and 0.21 ± 0.12 min(-1) for k2, 0.13 ± 0.07 and 0.13 ± 0.07 min(-1) for k3, and 0.052 ± 0.020 and 0.054 ± 0.021 min(-1) for k4. The differences between the methods were not significant. Regression analysis showed correlations of r = 0.94, 0.86, 0.71, and 0.52 for these parameters. Simulation demonstrated that the accuracy was within acceptable ranges; namely, the correlations between estimated and assumed values were r = 0.99, 0.97, 0.99, and 0.91 for K1, k2, k3, and k4, respectively. These results suggest that parametric images can be obtained within a reasonable time and with acceptable accuracy and quality.
Parametric analysis of the reliability of igniter systems - PARIS
NASA Astrophysics Data System (ADS)
Leeuw, M. W.; Bal, E. A.; Prinse, W. C.
1992-06-01
A fully automated improved thermal transient test set is used to measure the thermoelectrical response of a number of different fuze heads. The fitted wire model is used to describe the heat dynamics in the fuze head and to calculate a number of intrinsic thermal properties. These properties are used as input for a parametric analysis of the reliability of igniter systems (PARIS). Comparison of PARIS with firing levels obtained with the classical Robbins-Monro method has been used to validate this new way to estimate the sensitivity of fuze heads.
Jung, S H; Su, J Q
1995-02-15
We propose a non-parametric method to calculate a confidence interval for the difference or ratio of two median failure times for paired observations with censoring. The new method is simple to calculate, does not involve non-parametric density estimates, and is valid asymptotically even when the two underlying distribution functions differ in shape. The method also allows missing observations. We report numerical studies to examine the performance of the new method for practical sample sizes.
Deriving the Coronal Magnetic Field Using Parametric Transformation Analysis
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Rose, M. Franklin (Technical Monitor)
2001-01-01
When plasma-beta greater than 1 then the gas pressure dominates over the magnetic pressure. This ratio as a function along the coronal magnetic field lines varies from beta greater than 1 in the photosphere at the base of the field lines, to beta much less than 1 in the mid-corona, to beta greater than 1 in the upper corona. Almost all magnetic field extrapolations do not or cannot take into account the full range of beta. They essentially assume beta much less than 1, since the full boundary conditions do not exist in the beta greater than 1 regions. We use a basic parametric representation of the magnetic field lines such that the field lines can be manipulated to match linear features in the EUV and SXR coronal images in a least squares sense. This research employs free-form deformation mathematics to generate the associated coronal magnetic field. In our research program, the complex magnetic field topology uses Parametric Transformation Analysis (PTA) which is a new and innovative method to describe the coronal fields that we are developing. In this technique the field lines can be viewed as being embedded in a plastic medium, the frozen-in-field-line concept. As the medium is deformed the field lines are similarly deformed. However the advantage of the PTA method is that the field line movement represents a transformation of one magnetic field solution into another magnetic field solution. When fully implemented, this method will allow the resulting magnetic field solution to fully match the magnetic field lines with EUV/SXR coronal loops by minimizing the differences in direction and dispersion of a collection of PTA magnetic field lines and observed field lines. The derived magnetic field will then allow beta greater than 1 regions to be included, the electric currents to be calculated, and the Lorentz force to be determined. The advantage of this technique is that the solution is: (1) independent of the upper and side boundary conditions, (2) allows non
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of model inputs. Unbiased and progressively unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators; thus the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance.
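The quantity being estimated — the model output variance after an input's variance is reduced, relative to the baseline variance — can be illustrated by brute-force re-sampling. The toy model below is an illustrative assumption; the paper's contribution is precisely to avoid this re-simulation by deriving unbiased estimators from a single sample set:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x1, x2):
    """Simple nonlinear test model (illustrative, not from the paper)."""
    return x1 + 0.5 * x2**2

def variance_ratio(scale, n=200_000):
    """Var(Y) after scaling Var(X1) by `scale`, divided by baseline Var(Y).
    Brute-force re-sampling version of the variance ratio function."""
    y_base = model(rng.standard_normal(n), rng.standard_normal(n))
    y_new = model(np.sqrt(scale) * rng.standard_normal(n),
                  rng.standard_normal(n))
    return np.var(y_new) / np.var(y_base)
```

For this model Var(Y) = Var(X1) + 0.5 with standard normal inputs, so halving Var(X1) drives the ratio toward 1.0/1.5; a single-sample estimator reproduces such curves over all `scale` values without re-running the model.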
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L-shaped beam for which a solution was already available. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.
Validating Timed Models of Deployment Components with Parametric Concurrency
NASA Astrophysics Data System (ADS)
Broch Johnsen, Einar; Owe, Olaf; Schlatte, Rudolf; Tapia Tarifa, Silvia Lizeth
Many software systems today are designed without assuming a fixed underlying architecture, and may be adapted for sequential, multicore, or distributed deployment. Examples of such systems are found in, e.g., software product lines, service-oriented computing, information systems, embedded systems, operating systems, and telephony. Models of such systems need to capture and range over relevant deployment scenarios, so it is interesting to lift aspects of low-level deployment concerns to the abstraction level of the modeling language. This paper proposes an abstract model of deployment components for concurrent objects, extending the Creol modeling language. The deployment components are parametric in the amount of concurrency they provide; i.e., they vary in processing resources. We give a formal semantics of deployment components and characterize equivalence between deployment components which differ in concurrent resources in terms of test suites. Our semantics is executable on Maude, which allows simulations and test suites to be applied to a deployment component with different concurrent resources.
Parametric and nonparametric linkage analysis: A unified multipoint approach
Kruglyak, L.; Daly, M.J.; Reeve-Daly, M.P.; Lander, E.S.
1996-06-01
In complex disease studies, it is crucial to perform multipoint linkage analysis with many markers and to use robust nonparametric methods that take account of all pedigree information. Currently available methods fall short in both regards. In this paper, we describe how to extract complete multipoint inheritance information from general pedigrees of moderate size. This information is captured in the multipoint inheritance distribution, which provides a framework for a unified approach to both parametric and nonparametric methods of linkage analysis. Specifically, the approach includes the following: (1) Rapid exact computation of multipoint LOD scores involving dozens of highly polymorphic markers, even in the presence of loops and missing data. (2) Nonparametric linkage (NPL) analysis, a powerful new approach to pedigree analysis. We show that NPL is robust to uncertainty about mode of inheritance, is much more powerful than commonly used nonparametric methods, and loses little power relative to parametric linkage analysis. NPL thus appears to be the method of choice for pedigree studies of complex traits. (3) Information-content mapping, which measures the fraction of the total inheritance information extracted by the available marker data and points out the regions in which typing additional markers is most useful. (4) Maximum-likelihood reconstruction of many-marker haplotypes, even in pedigrees with missing data. We have implemented NPL analysis, LOD-score computation, information-content mapping, and haplotype reconstruction in a new computer package, GENEHUNTER. The package allows efficient multipoint analysis of pedigree data to be performed rapidly in a single user-friendly environment. 34 refs., 9 figs., 2 tabs.
Robinson, Mark A; Vanrenterghem, Jos; Pataky, Todd C
2015-02-01
Multi-muscle EMG time-series are highly correlated and time dependent, yet traditional statistical analysis of scalars from an EMG time-series fails to account for such dependencies. This paper promotes the use of SPM vector-field analysis for the generalised analysis of EMG time-series. We reanalysed a publicly available dataset of Young versus Adult EMG gait data to contrast scalar and SPM vector-field analysis. Independent scalar analyses of EMG data between 35% and 45% stance phase showed no statistical differences between the Young and Adult groups. SPM vector-field analysis did however identify statistical differences within this time period. As scalar analysis failed to consider the multi-muscle and time dependence of the EMG time-series, it exhibited Type II error. SPM vector-field analysis, on the other hand, accounts for both dependencies whilst tightly controlling Type I and Type II error, making it highly applicable to EMG data analysis. Additionally, SPM vector-field analysis is generalizable to linear and non-linear parametric and non-parametric statistical models, allowing its use under constraints that are common to electromyography and kinesiology.
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Parametrization effects in the analysis of AMI Sunyaev-Zel'dovich observations
NASA Astrophysics Data System (ADS)
AMI Consortium; Olamaie, Malak; Rodríguez-Gonzálvez, Carmen; Davies, Matthew L.; Feroz, Farhan; Franzen, Thomas M. O.; Grainge, Keith J. B.; Hobson, Michael P.; Hurley-Walker, Natasha; Lasenby, Anthony N.; Pooley, Guy G.; Saunders, Richard D. E.; Scaife, Anna M. M.; Schammel, Michel; Scott, Paul F.; Shimwell, Timothy W.; Titterington, David J.; Waldram, Elizabeth M.; Zwart, Jonathan T. L.
2012-04-01
parametrization III results in unbiased estimates of the cluster properties (MT(r200) = (4.68 ± 1.56) × 1014 M⊙ and Tg(r200) = (4.3 ± 0.9) keV). We generate a second simulated cluster using a generalized Navarro-Frenk-White pressure profile and analyse it with an entropy-based model to take into account the temperature gradient in our analysis and improve the cluster gas density distribution. This model also constrains the cluster physical parameters, and the results show a radial decline in the gas temperature as expected. The mean cluster total mass estimates are also within 1σ of the simulated cluster's true values: MT(r200) = (5.9 ± 3.4) × 1014 M⊙ and Tg(r200) = (7.4 ± 2.6) keV using parametrization II, and MT(r200) = (8.0 ± 5.6) × 1014 M⊙ and Tg(r200) = (5.98 ± 2.43) keV using parametrization III. However, we find that, at least for interferometric SZ analysis in practice at the present time, there are no differences in the Arcminute Microkelvin Imager (AMI) visibilities between the two models. This may of course change as the instruments improve.
Parametric sensitivity analysis for temperature control in outdoor photobioreactors.
Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando
2013-09-01
In this study a critical analysis of input parameters on a model to describe the broth temperature in flat plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using the design of experiments approach, variation of selected parameters was introduced and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those from the reactor wall and the shading factor, both related to the direct and reflected solar irradiation. Another parameter that plays an important role in the temperature is the distance between plates. This study provides information to improve the design and establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems.
Desiccant Enhanced Evaporative Air Conditioning: Parametric Analysis and Design; Preprint
Woods, J.; Kozubal, E.
2012-10-01
This paper presents a parametric analysis using a numerical model of a new concept in desiccant and evaporative air conditioning. The concept consists of two stages: a liquid desiccant dehumidifier and a dew-point evaporative cooler. Each stage consists of stacked air channel pairs separated by a plastic sheet. In the first stage, a liquid desiccant film removes moisture from the process (supply-side) air through a membrane. An evaporatively-cooled exhaust airstream on the other side of the plastic sheet cools the desiccant. The second-stage indirect evaporative cooler sensibly cools the dried process air. We analyze the tradeoff between device size and energy efficiency. This tradeoff depends strongly on process air channel thicknesses, the ratio of first-stage to second-stage area, and the second-stage exhaust air flow rate. A sensitivity analysis reiterates the importance of the process air boundary layers and suggests a need for increasing airside heat and mass transfer enhancements.
The parametric G-formula for time-to-event data: towards intuition with a worked example
Keil, Alexander P.; Edwards, Jessie K.; Richardson, David R.; Naimi, Ashley I.; Cole, Stephen R.
2015-01-01
Background The parametric g-formula can be used to estimate the effect of a policy, intervention, or treatment. Unlike standard regression approaches, the parametric g-formula can be used to adjust for time-varying confounders that are affected by prior exposures. To date, there are few published examples in which the method has been applied. Methods We provide a simple introduction to the parametric g-formula and illustrate its application in analysis of a small cohort study of bone marrow transplant patients in which the effect of treatment on mortality is subject to time-varying confounding. Results Standard regression adjustment yields a biased estimate of the effect of treatment on mortality relative to the estimate obtained by the g-formula. Conclusions The g-formula allows estimation of a relevant parameter for public health officials: the change in the hazard of mortality under a hypothetical intervention, such as reduction of exposure to a harmful agent or introduction of a beneficial new treatment. We present a simple approach to implement the parametric g-formula that is sufficiently general to allow easy adaptation to many settings of public health relevance. PMID:25140837
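With a single binary time-varying confounder, the g-formula reduces to a plug-in standardization sum, which conveys the intuition the authors aim for. The data-generating mechanism below is a hypothetical illustration, not the paper's transplant cohort, and a binary outcome stands in for the time-to-event setting:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Simulated cohort with structure A0 -> L1 -> A1 -> Y, where L1 is a
# time-varying confounder affected by the earlier treatment A0.
a0 = rng.binomial(1, 0.5, n)
l1 = rng.binomial(1, 0.3 + 0.4 * a0)                    # P(L1=1 | A0)
a1 = rng.binomial(1, 0.2 + 0.6 * l1)                    # later treatment responds to L1
y = rng.binomial(1, 0.1 + 0.2 * a0 + 0.2 * a1 + 0.3 * l1)

def g_formula(a0_star, a1_star):
    """Plug-in g-formula for E[Y] under setting A0=a0*, A1=a1*:
       sum over l of P(Y=1 | a0*, l, a1*) * P(L1=l | a0*)."""
    total = 0.0
    for l in (0, 1):
        p_l = np.mean(l1[a0 == a0_star] == l)
        mask = (a0 == a0_star) & (l1 == l) & (a1 == a1_star)
        total += np.mean(y[mask]) * p_l
    return total

# Effect of "always treat" versus "never treat"
effect = g_formula(1, 1) - g_formula(0, 0)
```

Note that the weighting term uses P(L1 | A0) rather than conditioning on L1 outright, which is exactly how the g-formula avoids the bias that standard regression adjustment incurs when L1 is affected by prior exposure; the "parametric" version replaces the stratified means with fitted regression models.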
A linear parametric approach for analysis of mouse respiratory impedance.
Hanifi, Arezoo; Goplen, Nicholas; Matin, Mohammad; Salters, Roger E; Alam, Rafeul
2012-06-01
Assessment of lung mechanics is crucial in lung function studies. Commonly, lung mechanics is characterized through measurement of the input impedance of the lung, where the experimental data are ideal for the application of system identification techniques. This study proposes a new approach for investigating the severity of lung conditions and also for evaluating treatment progression. The proposed method is established based on linear parametric identification of lung input impedance in mice and is applied to normal and asthmatic models (including acute, tolerant and chronic asthma) as well as a pharmacological intervention model. Experimental findings confirm the effectiveness of the analysis technique applied here. We discuss the potential application of this method to analyses of human lung mechanics.
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines if the prediction of one series could be improved by incorporating information of the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first one. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, still being suitable to evaluate causality. Usefulness of this measure of causality is shown in two applications. In the first application, a physiological one, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
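The variance-reduction definition of Granger causality described above can be sketched in its linear form; the paper's contribution is to replace the regressions below with a radial basis function model. The coupled toy system and variable names are illustrative assumptions:

```python
import numpy as np

def _resid_var(X, target):
    """Residual variance of an ordinary least-squares fit (with intercept)."""
    X1 = np.column_stack([np.ones(len(target)), X])
    beta, *_ = np.linalg.lstsq(X1, target, rcond=None)
    return np.var(target - X1 @ beta)

def granger_gain(x, y, p=2):
    """Fraction by which adding p lags of y reduces the one-step prediction
    error variance of x (linear Granger causality)."""
    n = len(x)
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    target = x[p:]
    v_own = _resid_var(lags_x, target)
    v_full = _resid_var(np.column_stack([lags_x, lags_y]), target)
    return 1.0 - v_full / v_own

# Toy system in which y drives x but not the other way round
rng = np.random.default_rng(2)
n = 5000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

gain_xy = granger_gain(x, y)   # large: y's past predicts x
gain_yx = granger_gain(y, x)   # near zero: x's past does not predict y
```

Replacing the two least-squares fits with a nonlinear regressor (radial basis functions in the paper) yields the nonlinear version, which can detect causal influences that a purely linear predictor misses.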
Syndrome Surveillance Using Parametric Space-Time Clustering
KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.
2002-11-01
As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect patient privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a "health index" of the people living in this area. As a surrogate for bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
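The SPRT machinery referred to above can be illustrated in its textbook form on daily syndrome counts. This is a hypothetical sketch assuming Poisson-distributed counts with made-up baseline and outbreak rates, not the paper's aggregated space-time cluster statistic.

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on daily Poisson counts.

    H0: rate lam0 (baseline); H1: rate lam1 > lam0 (outbreak).
    Returns ('H1', day) the first day the upper threshold is crossed,
    ('H0', day) if the lower threshold is crossed, or ('continue', n)
    if the data so far are inconclusive.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for day, c in enumerate(counts, start=1):
        # Log-likelihood ratio increment for one Poisson observation.
        llr += c * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return ('H1', day)
        if llr <= lower:
            return ('H0', day)
    return ('continue', len(counts))
```

With a baseline of 5 cases/day and a hypothesized outbreak rate of 12, a run of elevated counts triggers H1 on the second day, matching the paper's point that the sequential test reacts within days.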
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, given their potential to provide more information on human pathophysiology and the increase in computational power. However, they have not yet been successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrate that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot and evaluate differences in shape between healthy and diabetic subjects.
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
Having defined and developed a structural power flow approach for the analysis of structure-borne transmission of structural vibrations, we use the technique to analyze the influence of structural parameters on the transmitted energy. As a basis for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results are compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the type of results obtained by the two methods. Additionally, to demonstrate the advantages of the power flow method and to show that power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each.
Analysis of survival in breast cancer patients by using different parametric models
NASA Astrophysics Data System (ADS)
Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti
2017-09-01
In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling censored data properly is important to prevent bias in the analysis. Therefore, this study analyzes right-censored data with three different parametric models: the exponential, Weibull and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right censoring. The covariates included in this study are the patients' survival time t, age X1 and treatment X2. To determine the best parametric model for analyzing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and log-likelihood value using the statistical software R. All three distributions were consistent with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it has the smallest AIC and BIC values and the largest log-likelihood.
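For the simplest of the three candidate models, the exponential, the fitting and model-scoring step has a closed form under right censoring, which makes the AIC/BIC comparison easy to sketch. The times and event indicators below are invented for illustration; the paper's actual analysis was done in R and also fitted Weibull and log-logistic models.

```python
import math

def exponential_survival_fit(times, events):
    """Maximum-likelihood fit of an exponential survival model under
    right censoring, plus AIC and BIC for model comparison.

    times  : observed event or censoring times t_i
    events : 1 if the event was observed, 0 if right-censored
    (assumes at least one observed event)
    """
    d = sum(events)               # number of observed events
    total_time = sum(times)       # total time at risk
    lam = d / total_time          # MLE of the constant hazard rate
    # Censored log-likelihood: an event contributes log f(t) =
    # log(lam) - lam*t, a censored time contributes log S(t) = -lam*t.
    loglik = d * math.log(lam) - lam * total_time
    k, n = 1, len(times)
    aic = 2 * k - 2 * loglik
    bic = k * math.log(n) - 2 * loglik
    return lam, aic, bic
```

The model with the smallest AIC/BIC (largest log-likelihood, penalized for parameters) would be preferred, exactly as in the paper's three-way comparison.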
Lan, Ling; Datta, Somnath
2010-04-01
As a type of multivariate survival data, multistate models have a wide range of applications, notably in cancer and infectious disease progression studies. In this article, we revisit the problem of estimation of state occupation, entry and exit times in a multistate model, where various estimators have been proposed in the past under a variety of parametric and non-parametric assumptions. We focus on two non-parametric approaches: one using a product limit formula, as recently proposed in Datta and Sundaram (1), and a novel approach using a fractional risk set calculation followed by a subtraction formula to calculate the state occupation probability of a transient state. A numerical comparison between the two methods is presented using detailed simulation studies. We show that the new estimators have lower statistical errors of estimation of state occupation probabilities for the distant states. We illustrate the two methods using a pubertal development data set obtained from NHANES III (2).
Survival Analysis of Patients with Breast Cancer using Weibull Parametric Model.
Baghestani, Ahmad Reza; Moghaddam, Sahar Saeedi; Majd, Hamid Alavi; Akbari, Mohammad Esmaeil; Nafissi, Nahid; Gohari, Kimiya
2015-01-01
The Cox model is known as one of the most frequently used methods for analyzing survival data. However, in some situations parametric methods may provide better estimates. In this study, a Weibull parametric model was employed to assess possible prognostic factors that may affect the survival of patients with breast cancer. We studied 438 patients with breast cancer who visited and were treated at the Cancer Research Center in Shahid Beheshti University of Medical Sciences during 1992 to 2012; the patients were followed up until October 2014. Patients or family members were contacted via telephone calls to confirm whether they were still alive. Clinical, pathological, and biological variables as potential prognostic factors were entered in univariate and multivariate analyses. The log-rank test and the Weibull parametric model with a forward approach, respectively, were used for univariate and multivariate analyses. All analyses were performed using STATA version 11. A P-value lower than 0.05 was defined as significant. On univariate analysis, age at diagnosis, level of education, type of surgery, lymph node status, tumor size, stage, histologic grade, estrogen receptor, progesterone receptor, and lymphovascular invasion had a statistically significant effect on survival time. On multivariate analysis, lymph node status, stage, histologic grade, and lymphovascular invasion were statistically significant. The one-year overall survival rate was 98%. Based on these data and using the Weibull parametric model with a forward approach, we found that patients with lymphovascular invasion were at 2.13 times greater risk of death due to breast cancer.
Interactive flutter analysis and parametric study for conceptual wing design
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1995-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed on MathCad (trademark) platform for Macintosh, with integrated documentation, graphics, database and symbolic mathematics. The analysis method was based on nondimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The plots were compiled in a Vaught Corporation report from a vast database of past experiments and wind tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended Wing Body concept, proposed by McDonnell Douglas Corporation. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.
Chen, Liao; Duan, Yuhua; Zhou, Haidong; Zhou, Xi; Zhang, Chi; Zhang, Xinliang
2017-04-17
A real-time broadband radio frequency (RF) spectrum analyzer is proposed and experimentally demonstrated to rapidly measure the RF spectrum of a broadband optical signal. Cross-phase modulation in a highly nonlinear fiber converts the RF spectrum carried by the pump into the optical spectrum of the probe signal; the optical spectrum is then analyzed in real time with the parametric spectro-temporal analyzer (PASTA) technique. The system performance is investigated in detail, including bandwidth, resolution, frame rate, and dynamic range. The analyzer achieves an RF bandwidth of over 800 GHz as well as a 91-MHz frame rate without sacrificing resolution; this frame rate is several orders of magnitude higher than those of previously reported all-optical RF spectrum analyzers. As a proof-of-concept demonstration, the analyzer successfully characterizes ultra-short pulse trains with a repetition rate of 160 GHz, far beyond the capability of conventional electrical spectrum analyzers. This work presents a new way to implement rapid, broadband RF spectrum measurement and should be of great interest for ultrafast scenarios where real-time RF spectrum analysis is needed.
Parametrically adjustable intubation mannequin with real-time visual feedback.
Delson, Nathan; Sloan, Conan; McGee, Thomas; Kedarisetty, Suraj; Yim, Wen-Wai; Hastings, Randolph H
2012-06-01
Training for direct laryngoscopy relies heavily on practice with patients. The necessity for human practice might be supplanted to some extent by an intubation mannequin with accurate airway anatomy, a realistic "feel" during laryngoscopy, the capacity to model many patient configurations, and a means to provide feedback to trainees and instructors. The goals of this project were (1) to build and evaluate an airway simulator with realistic dimensions and haptic sensation that could undergo a range of adjustments in several features that affect laryngoscopy difficulty and (2) to develop a system for displaying information on laryngoscopy force and motion in real time. The prototype was an existing 2-dimensional (2D) airway model that closely approximated cephalometric measurements of head, neck, and airway anatomy from the dental and surgical literature. The 2D model was extended in a third dimension by adding layers along the coronal axis. An off-the-shelf airway model provided the tongue, pharynx, larynx, and trachea. Adjustability was built into the face, jaw, mouth, teeth, and spine components. A feedback system was constructed with a force- and motion-sensing laryngoscope and motion sensors incorporated in the mannequin head, jaw, and larynx. Anatomic accuracy was assessed by measuring model dimensions. Realism was evaluated by measuring laryngoscopy force and motion compared with laryngoscopy in patients. The extruded 2.5-dimensional model maintained a close conformity to the anatomic measurements present in the original 2D model. The model could be adjusted through multiple settings for face length, jaw length and tension, mouth opening, and dental condition. The laryngoscopy trajectory had a similar shape to laryngoscopy trajectories in patients, but force was greater, on the order of 50 N, compared with roughly 30 N in patients. The movement of the laryngoscope through the mannequin airway could be displayed in real time during the procedure, establishing a
NASA Astrophysics Data System (ADS)
Wei, Sha; Han, Qinkai; Peng, Zhike; Chu, Fulei
2016-05-01
Some system parameters in mechanical systems are inherently uncertain due to uncertainties in geometric and material properties, lubrication condition and wear. For a more reliable dynamic analysis of a parametrically excited system, the effect of uncertain parameters should be taken into account. This paper presents a new non-probabilistic analysis method for solving the dynamic responses of parametrically excited systems under uncertainties and multi-frequency excitations. By combining the multi-dimensional harmonic balance method (MHBM) and the Chebyshev inclusion function (CIF), an interval multi-dimensional harmonic balance method (IMHBM) is obtained. To illustrate the accuracy of the proposed method, a time-varying wind turbine geared system with different kinds of uncertainties is demonstrated. Comparison with the results of the scanning method shows that the presented method is valid and effective for parametrically excited systems with uncertainties and multi-frequency excitations. The effects of some uncertain system parameters, including uncertain mesh and bearing stiffnesses, on the frequency responses of the system are also discussed in detail. It is shown that the dynamic responses of the system are insensitive to the uncertain mesh and bearing stiffnesses of the planetary gear stage, while the uncertain bearing stiffnesses of the intermediate and high-speed stages lead to relatively large uncertainties in the dynamic responses around resonant regions. This will provide valuable guidance for the optimal design and condition monitoring of wind turbine gearboxes.
Parametric receiver operating characteristic curve analysis using mathematica.
Heckerling, Paul S
2002-07-01
Several computer programs have been written to perform receiver operating characteristic (ROC) curve analysis, and are available in the public domain. Here, the author provides the theory and description for 'rocMath', a Mathematica program that performs parametric ROC curve analysis. The 'rocMath' program has some advantages over other ROC curve programs, including the ability to provide, through optional arguments: (a) user-specified pointwise confidence limits, as well as default 95% limits, on ROC curve area and on true-positive rates; (b) ROC curve plots with data points, a fitted curve, and user-specified pointwise confidence bands; and (c) ROC curve areas, tables, and plots based on a logistic distribution as well as on a standard normal distribution. In addition, the code of 'rocMath' can be modified to address additional ROC curve applications. The program uses Mathematica's ability to operate on purely symbolic as well as numeric data to achieve substantial coding efficiency. Limitations of the 'rocMath' program are also discussed.
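The core computation of a parametric ROC analysis of the kind 'rocMath' performs, under the standard binormal model, can be sketched with Python's standard library: fit normal distributions to the scores of the two groups, after which the smooth ROC curve and its area follow in closed form. This is a sketch of the generic binormal model, not the Mathematica program itself.

```python
import math
from statistics import NormalDist, mean, stdev

def binormal_roc(neg, pos):
    """Binormal ROC analysis: fit normal distributions to the scores of
    the negative and positive groups; the smooth ROC curve and its area
    then have closed forms in the binormal parameters a and b.
    """
    z = NormalDist()                   # standard normal
    m0, s0 = mean(neg), stdev(neg)     # negative (non-diseased) scores
    m1, s1 = mean(pos), stdev(pos)     # positive (diseased) scores
    a = (m1 - m0) / s1                 # binormal intercept
    b = s0 / s1                        # binormal slope
    auc = z.cdf(a / math.sqrt(1 + b * b))

    def roc(fpr):
        """True-positive rate of the fitted smooth curve at a given FPR."""
        return z.cdf(a + b * z.inv_cdf(fpr))

    return auc, roc
```

Pointwise confidence limits on the curve, as 'rocMath' provides, would be layered on top of these same closed forms.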
Fanjoux, Gil; Lantz, Eric; Michaud, Jérémy; Sylvestre, Thibaut
2012-11-19
In a way analogous to a light pulse that can be optically delayed via slow-light propagation in Kerr-type nonlinear media, we theoretically demonstrate that beam steering and spatial walk-off compensation can be achieved in noncollinear optical parametric amplification. We identify this effect as a result of the quadratic phase shift induced by parametric amplification, which leads to cancellation of the spatial walk-off and to collinear propagation of all beams even though they have different wavevectors. Experimental evidence of soliton-array steering in a Kerr slab waveguide is reported.
Parametric Analysis of a Hypersonic Inlet using Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Oliden, Daniel
For CFD validation, hypersonic flow fields are simulated and compared with experimental data specifically designed to recreate conditions encountered by hypersonic vehicles. Simulated flow fields on a cone-ogive with flare at Mach 7.2 are compared with experimental data from the NASA Ames Research Center 3.5-foot hypersonic wind tunnel. A parametric study of turbulence models is presented and concludes that the k-kl-omega transition and SST transition turbulence models have the best correlation. Downstream of the flare's shockwave, good correlation is found for all boundary layer profiles, with slight discrepancies in the static temperature near the surface. Simulated flow fields on a blunt cone with flare above Mach 10 are compared with experimental data from the CUBRC LENS hypervelocity shock tunnel. The lack of vibrational non-equilibrium calculations causes discrepancies in heat flux near the leading edge. Temperature profiles, where non-equilibrium effects are dominant, are compared with the dissociation of molecules to show the effects of dissociation on static temperature. Following the validation studies is a parametric analysis of a hypersonic inlet from Mach 6 to 20. Compressor performance is investigated for numerous cowl leading-edge locations up to speeds of Mach 10. The variable cowl study showed positive trends in compressor performance parameters for a range of Mach numbers, arising from maximizing the intake of compressed flow. An interesting phenomenon due to the change in shock wave formation at different Mach numbers developed inside the cowl and had a negative influence on the total pressure recovery. The hypersonic inlet is also investigated at different altitudes to study the effects of Reynolds number, and consequently turbulent viscous effects, on compressor performance. Turbulent boundary layer separation was noted as the cause of a change in compressor performance parameters due to a change in Reynolds number. This effect would not be
Parametric systems analysis of the Modular Stellarator Reactor (MSR)
Miller, R.L.; Krakowski, R.A.; Bathke, C.G.
1982-05-01
The close coupling in the stellarator/torsatron/heliotron (S/T/H) between coil design (peak field, current density, forces), magnetics topology (transform, shear, well depth), and plasma performance (equilibrium, stability, transport, beta) complicates the reactor assessment more so than for most magnetic confinement systems. In order to provide an additional degree of resolution of this problem for the Modular Stellarator Reactor (MSR), a parametric systems model has been developed and applied. This model reduces key issues associated with plasma performance, first-wall/blanket/shield (FW/B/S), and coil design to a simple relationship between beta, system geometry, and a number of indicators of overall plant performance. The results of this analysis can then be used to guide more detailed, multidimensional plasma, magnetics, and coil design efforts towards technically and economically viable operating regimes. In general, it is shown that beta values > 0.08 may be needed if the MSR approach is to be substantially competitive with other approaches to magnetic fusion in terms of system power density, mass utilization, and cost for total power output around 4.0 GWt; lower powers will require even higher betas.
Parametric analysis of a passive cyclic control device for helicopters
NASA Technical Reports Server (NTRS)
Kumagai, H.
1984-01-01
A parametric study of a passive device that provides a cyclic longitudinal control moment for a helicopter rotor was performed. The device utilizes a rotor blade tip that is structurally decoupled from the blade inboard section; this rotor configuration is generally called the Free-Tip Rotor. A two-dimensional numerical model was used to review the Constant Lift Tip Rotor, a predecessor of the current configuration, and the same model was then applied to the Passive Cyclic Control Device. The Constant Lift Tip was shown to suppress the vibratory lift loading on the tip around the azimuth and to eliminate a significant negative lift peak on the advancing tip. The Passive Cyclic Control Device showed a once-per-revolution lift oscillation with a large amplitude while minimizing the higher harmonic terms of the lift oscillation. This once-per-revolution oscillation produces the cyclic moment needed to trim the rotor longitudinally. A rotor performance analysis was performed with a three-dimensional numerical model. It indicated that the vortices shed from the junction between the tip and the inboard section have a strong influence on the tip and may severely limit tip performance. It was also shown that the Free-Tip allows the inboard section to have a larger twist, which results in better performance.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.
A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series
Keshmiri, Soheil; Sumioka, Hidenobu; Yamazaki, Ryuji; Ishiguro, Hiroshi
2017-01-01
We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring mental workload through the hemodynamic responses in the brain induced by these tasks, thereby realizing the potential they offer for workload detection in real-world scenarios (e.g., the difficulty of a conversation). Our approach takes advantage of the intrinsic linearity in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through its mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender-difference effect on the performance of the classifiers (with male data exhibiting higher non-linearity), along with left-lateralized activation in both genders with higher specificity in females.
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
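The Whittle approximation at the heart of this approach can be sketched for a single series: the exact Gaussian likelihood is replaced by a sum over Fourier frequencies involving the periodogram and a candidate spectral density. The sketch below plugs in a parametric AR(1) density purely for illustration; the paper instead places Bernstein-Dirichlet priors on the component densities.

```python
import numpy as np

def whittle_loglik(ts, spec):
    """Whittle approximation to the Gaussian log-likelihood of a
    stationary series: -sum(log f(w_j) + I(w_j) / f(w_j)) over the
    positive Fourier frequencies, where I is the periodogram and f is
    a candidate spectral density normalized so that E[I(w)] ~ f(w).
    """
    x = np.asarray(ts, dtype=float)
    n = len(x)
    fft = np.fft.rfft(x - x.mean())
    j = np.arange(1, (n - 1) // 2 + 1)      # positive frequencies only
    I = np.abs(fft[j]) ** 2 / n             # periodogram ordinates
    f = spec(2 * np.pi * j / n)
    return -np.sum(np.log(f) + I / f)

def ar1_spec(phi, sigma2=1.0):
    """Spectral density of an AR(1) process, in the convention above."""
    return lambda w: sigma2 / np.abs(1 - phi * np.exp(-1j * w)) ** 2
```

Maximizing this criterion over the density's parameters recovers them consistently, which is what makes the approximation usable inside an MCMC sampler.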
Schwab, K; Eiselt, M; Putsche, P; Helbig, M; Witte, H
2006-12-01
Heart rate variability (HRV) can be taken as an indicator of the coordination of the cardio-respiratory rhythms. Bispectral analysis using a direct (fast Fourier transform based), time-invariant approach has shown the occurrence of quadratic phase coupling (QPC) between a low-frequency (LF: 0.1 Hz) and a high-frequency (HF: 0.4-0.6 Hz) component of the HRV during quiet sleep in healthy neonates. The low-frequency component corresponds to the Mayer-Traube-Hering waves in blood pressure, and the high-frequency component to the respiratory sinus arrhythmia (RSA). Time-variant, parametric estimation of the bispectrum provides the possibility of quantifying QPC over its time course. Therefore, the aim of this work was a parametric, time-variant bispectral analysis of neonatal HRV in the same neonates used in the direct, time-invariant approach. For the first time, rhythms in the time course of QPC between the HF and LF components could be demonstrated in the neonatal HRV.
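The quantity being tracked, quadratic phase coupling, can be sketched in its direct, time-invariant (FFT-based) form as the bicoherence at a frequency-bin pair, which approaches 1 when the phases at f1, f2 and f1+f2 are coupled. This is a sketch of the direct estimator the authors compare against, not their time-variant parametric one; segment length and bin indices are arbitrary choices.

```python
import numpy as np

def bicoherence(x, seg_len, f1, f2):
    """Direct (FFT-based) bicoherence estimate at bin pair (f1, f2),
    averaged over non-overlapping segments. Values near 1 indicate
    quadratic phase coupling among the components at f1, f2 and f1+f2.
    """
    n_seg = len(x) // seg_len
    segs = np.reshape(x[:n_seg * seg_len], (n_seg, seg_len))
    X = np.fft.rfft(segs, axis=1)
    # Triple product averaged over segments: phase-coupled components
    # add coherently, uncoupled ones average toward zero.
    num = np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))
    den = np.sqrt(np.mean(np.abs(X[:, f1] * X[:, f2]) ** 2)
                  * np.mean(np.abs(X[:, f1 + f2]) ** 2))
    return np.abs(num) / den
```

On synthetic data with three sinusoids whose phases sum consistently across segments, the bicoherence is near 1; with an independent third phase it collapses toward zero.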
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across the area under changing public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that most influence the results, a parametric sensitivity analysis was performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, of the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) livestock reproduction rate, iv) maximum crop yield, v) irrigation water supply and vi) precipitation. These 6 parameters have sensitivity indexes ranging between 0.22 and 1.28. These results reveal high uncertainties in these parameters that can dramatically skew the results of the model, and indicate the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
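The "One-Factor-At-A-Time" screening step can be sketched generically: perturb each input parameter around its baseline while holding the others fixed, and record a normalized, elasticity-style sensitivity index. The model function below is a made-up stand-in; the paper's agro-economic model and its Screening Designs protocol are far richer.

```python
def oat_sensitivity(model, baseline, delta=0.10):
    """One-Factor-At-A-Time sensitivity screening.

    Perturbs each parameter by +/-delta (relative) around the baseline
    while holding the others fixed, and reports the central-difference
    elasticity S_i = (dY / Y) / (dX_i / X_i) for each parameter.
    """
    y0 = model(baseline)
    indices = {}
    for name, x0 in baseline.items():
        up = dict(baseline, **{name: x0 * (1 + delta)})
        down = dict(baseline, **{name: x0 * (1 - delta)})
        # Relative output change per unit relative input change.
        indices[name] = (model(up) - model(down)) / (2 * delta * y0)
    return indices
```

Parameters with large indices, like the 6 influential ones found in the paper, are the ones whose estimates deserve the most attention.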
Energy harvesting using parametric resonant system due to time-varying damping
NASA Astrophysics Data System (ADS)
Scapolan, Matteo; Tehrani, Maryam Ghandchi; Bonisoli, Elvio
2016-10-01
In this paper, the problem of energy harvesting is considered using an electromechanical oscillator. The energy harvester is modelled as a spring-mass-damper, in which the energy dissipated in the damper can be stored rather than wasted. Previous research provided the optimum damping parameter to harvest the maximum amount of energy, taking into account the stroke limit of the device. However, the maximum harvested energy is limited to a single frequency to which the device is tuned. Active and semi-active strategies have been suggested, which increase the performance of the harvester. Recently, nonlinear damping in the form of cubic damping has been proposed to extend the dynamic range of the harvester. In this paper, a periodic time-varying damper is introduced, which results in a parametrically excited system. When the frequency of the periodic time-varying damper is twice the excitation frequency, the system's internal energy increases in proportion to the energy already stored in the system. Thus, for certain parametric damping values, the system can become unstable. This phenomenon can be exploited for energy harvesting. The transition curves, which separate stable from unstable dynamics, are derived both analytically, using the harmonic balance method, and numerically, using time simulations. The harvester is designed so that its response is close to the transition curves of the Floquet diagram, leading to a stable but resonant system. The performance of the parametric harvester is compared with that of the non-parametric one. It is demonstrated that both the performance and the frequency bandwidth over which energy can be harvested can be increased using time-varying damping.
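A minimal numerical check of this instability mechanism can be run on a toy oscillator whose damping is modulated at twice its natural frequency. Parameter values are illustrative; the `c1 > 2*c0` threshold quoted in the comment comes from first-order averaging, not from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def final_amplitude(c1, c0=0.02, t_end=200.0):
    """Free response of  x'' + (c0 + c1*cos(2t)) x' + x = 0,  an oscillator
    whose damping is modulated at twice its natural frequency.
    Returns the peak |x| over the last few cycles."""
    def rhs(t, y):
        x, v = y
        return [v, -(c0 + c1 * np.cos(2.0 * t)) * v - x]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0],
                    max_step=0.1, rtol=1e-8, atol=1e-10)
    return np.max(np.abs(sol.y[0][sol.t > t_end - 10.0]))

# With constant damping the motion decays; first-order averaging predicts
# net energy growth (parametric instability) once the modulation depth
# exceeds c1 = 2*c0, so the second call returns a much larger amplitude.
print(final_amplitude(0.0), final_amplitude(0.08))
```

Sweeping `c1` and the modulation frequency in this way traces out numerical transition curves of the kind the paper derives with harmonic balance.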
System Availability: Time Dependence and Statistical Inference by (Semi) Non-Parametric Methods
1988-08-01
Technical report, August 1988. ... availability in finite time (not steady-state or long-run), and to non-parametric estimates. ... productivity of commercial nuclear power plants; in that arena it is quantified by probabilistic risk assessment (PRA). Related finite-state ...
Hollow cathode modeling: II. Physical analysis and parametric study
NASA Astrophysics Data System (ADS)
Sary, Gaétan; Garrigues, Laurent; Boeuf, Jean-Pierre
2017-05-01
A numerical emissive hollow cathode model which couples plasma and thermal aspects of the NASA NSTAR cathode has been presented in a companion paper, and simulation results obtained using the plasma model were compared to experimental data. We now compare simulation results with measurements using the full coupled model. Inside the cathode, the simulated plasma density profile agrees with the experimental data up to the ±50% experimental uncertainty, while the simulated emitter temperature differs from measurements by at most 5 K. We then proceed to an analysis of the cathode discharge, both inside the cathode where electron emission is dominant and outside in the near plume where electron transport instabilities are important. As observed previously in the literature, the total emitted electron current is much larger (34 A) than the set discharge current collected at the anode (13 A), while ionization plays a negligible role. Extracted electrons are emitted from a region much shorter than the full emitter (0.9 cm versus 2.5 cm). The influence of an applied axial magnetic field in the plume is also assessed, and we observe that it leads to a 10-fold increase of the plasma density 1 cm downstream of the orifice entrance, while the simulated discharge potential at the anode increases from 10 V up to 35.5 V. Lastly, we perform a parametric study on both the operating point (discharge current, mass flow rate) and the design (inner radius) of the cathode. The simulated useful operating envelope is shown to be limited at low discharge current mostly by probable ion sputtering of the emitter, and at high discharge current by emitter evaporation, plasma oscillations and sputtering of the keeper electrode. The behavior of the cathode is also analyzed with respect to its internal radius, and simulation results show that the useful emitter length scales linearly with the cathode radius.
NASA Astrophysics Data System (ADS)
Kraft, Manuel; Hein, Sven M.; Lehnert, Judith; Schöll, Eckehard; Hughes, Stephen; Knorr, Andreas
2016-08-01
Quantum coherent feedback control is a measurement-free control method fully preserving quantum coherence. In this paper we show how time-delayed quantum coherent feedback can be used to control the degree of squeezing in the output field of a cavity containing a degenerate parametric oscillator. We focus on the specific situation of Pyragas-type feedback control where time-delayed signals are fed back directly into the quantum system. Our results show how time-delayed feedback can enhance or decrease the degree of squeezing as a function of time delay and feedback strength.
SAT-Based (Parametric) Reachability for a Class of Distributed Time Petri Nets
NASA Astrophysics Data System (ADS)
Penczek, Wojciech; Pòłrola, Agata; Zbrzezny, Andrzej
Formal methods - among them the model checking techniques - play an important role in the design and production of both systems and software. In this paper we deal with an adaptation of the bounded model checking methods for timed systems, developed for timed automata, to the case of time Petri nets. We consider distributed time Petri nets and parametric reachability checking, but the approach can be easily adapted to verification of other kinds of properties for which the bounded model checking methods exist. A theoretical description is supported by some experimental results, generated using an extension of the model checker verICS.
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The parameterized variables include operating characteristics for four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. The annual fuel consumption for selected parametric combinations is presented, and a breakeven economic analysis is given for one parametric combination. The results show that fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous parametric inferential statistical measures are calculated from…
NASA Astrophysics Data System (ADS)
Liu, Shuang; Wang, Jin-Jin; Liu, Jin-Jie; Li, Ya-Qian
2015-10-01
In the present work, we investigate the nonlinear parametrically excited vibration and active control of a gear pair system involving backlash, time-varying meshing stiffness and static transmission error. Firstly, a gear pair model is established in a strongly nonlinear form, and its nonlinear vibration characteristics are systematically investigated through different approaches. Several complicated phenomena such as period doubling bifurcation, anti period doubling bifurcation and chaos can be observed under the internal parametric excitation. Then, an active compensation controller is designed to suppress the vibration, including the chaos. Finally, the effectiveness of the proposed controller is verified numerically. Project supported by the National Natural Science Foundation of China (Grant No. 61104040), the Natural Science Foundation of Hebei Province, China (Grant No. E2012203090), and the University Innovation Team of Hebei Province Leading Talent Cultivation Project, China (Grant No. LJRC013).
Parametric analysis of the statistical model of the stick-slip process
NASA Astrophysics Data System (ADS)
Lima, Roberta; Sampaio, Rubens
2017-06-01
In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-friction force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is composed of a random sequence alternating between stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables: for example, the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to carry out a parametric analysis of the statistical model of the stick-slip process.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
A process is described which enables the generation of 35 time-dependent viscous solutions for a YAV-8B Harrier in ground effect in one week. Overset grids are used to model the complex geometry of the Harrier aircraft and the interaction of its jets with the ground plane and low-speed ambient flow. The time required to complete this parametric study is drastically reduced through the use of process automation, modern computational platforms, and parallel computing. Moreover, a dual-time-stepping algorithm is described which improves solution robustness. Unsteady flow visualization and a frequency-domain analysis are also used to identify and correlate key flow structures with the time variation of lift.
Evaluation of parametric models by the prediction error in colorectal cancer survival analysis
Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Orooji, Arezoo; Pourhoseingholi, Mohamad Amin; Zali, Mohammad Reza
2015-01-01
Aim: The aim of this study is to determine the factors influencing predicted survival time for patients with colorectal cancer (CRC) using parametric models, and to select the best model by the prediction-error technique. Background: Survival models are statistical techniques to estimate or predict the overall time up to specific events. Prediction is important in medical science, and the accuracy of prediction is determined by a measurement, generally based on loss functions, called the prediction error. Patients and methods: A total of 600 colorectal cancer patients who were admitted to the Cancer Registry Center of the Gastroenterology and Liver Disease Research Center, Taleghani Hospital, Tehran, were followed for at least 5 years and had complete information for this study. Body mass index (BMI), sex, family history of CRC, tumor site, stage of disease and histology of tumor were included in the analysis. Survival times were compared by the log-rank test, and multivariate analysis was carried out using parametric models including log-normal, Weibull and log-logistic regression. For selecting the best model, the prediction error by apparent loss was used. Results: The log-rank test showed better survival for females, for BMI above 25, for patients with early-stage disease at diagnosis, and for patients with a colon tumor site. The prediction error by apparent loss was estimated and indicated that the Weibull model was the best one for multivariate analysis. According to the Weibull model, BMI and stage were independent prognostic factors. Conclusion: In this study, according to the prediction error, Weibull regression showed a better fit. The prediction error can serve as a criterion to select the best model, with the ability to identify prognostic factors in survival analysis. PMID:26328040
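The idea of fitting competing parametric survival models and ranking them by an error criterion can be sketched on synthetic, uncensored data. A real CRC analysis would handle censoring and covariates; AIC is used here as a simple stand-in for the apparent-loss prediction error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic, uncensored "survival times" drawn from a Weibull distribution.
t = stats.weibull_min.rvs(c=1.5, scale=60.0, size=600, random_state=rng)

# Fit two candidate parametric models with the location fixed at zero,
# so both fits have the same number of free parameters.
candidates = {
    "weibull":   (stats.weibull_min, stats.weibull_min.fit(t, floc=0)),
    "lognormal": (stats.lognorm,     stats.lognorm.fit(t, floc=0)),
}

# Rank the fits by AIC = 2k - 2*loglik (lower is better).
aic = {}
for name, (dist, params) in candidates.items():
    loglik = np.sum(dist.logpdf(t, *params))
    aic[name] = 2 * len(params) - 2 * loglik
best = min(aic, key=aic.get)
print(best, aic)
```

With the data actually generated from a Weibull law, the criterion should select the Weibull fit, mirroring how the paper's prediction error singled out the Weibull regression.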
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
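The non-parametric bootstrap over whole trajectories can be sketched as follows. Note that the band below is reported pointwise for simplicity; the paper's 1D methods additionally correct for multiple comparisons along the time axis (via RFT or a max-statistic bootstrap), which widens the band:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci_1d(trajectories, n_boot=1000, alpha=0.05, rng=rng):
    """Non-parametric bootstrap of whole 1D trajectories: resample
    subjects with replacement and collect the mean trajectory, then
    report a pointwise percentile band. Input shape: (subjects, time)."""
    n = trajectories.shape[0]
    means = np.empty((n_boot, trajectories.shape[1]))
    for b in range(n_boot):
        sample = trajectories[rng.integers(0, n, size=n)]
        means[b] = sample.mean(axis=0)
    lo = np.quantile(means, alpha / 2, axis=0)
    hi = np.quantile(means, 1 - alpha / 2, axis=0)
    return lo, hi

# Toy "force trajectories": 20 subjects, 101 time points
t = np.linspace(0, 1, 101)
data = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((20, 101))
lo, hi = bootstrap_ci_1d(data)
print(lo.shape, bool((lo <= hi).all()))
```

The key point of the paper survives even in this sketch: the unit being resampled is the entire trajectory, not a scalar extracted from it.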
Time-domain semi-parametric estimation based on a metabolite basis set.
Ratiney, H; Sdika, M; Coenradie, Y; Cavassila, S; van Ormondt, D; Graveron-Demilly, D
2005-02-01
A novel and fast time-domain quantitation algorithm--quantitation based on semi-parametric quantum estimation (QUEST)--invoking optimal prior knowledge is proposed and tested. This nonlinear least-squares algorithm fits a time-domain model function, made up from a basis set of quantum-mechanically simulated whole-metabolite signals, to low-SNR in vivo data. A basis set of in vitro measured signals can be used too. The simulated basis set was created with the software package NMR-SCOPE which can invoke various experimental protocols. Quantitation of 1H short echo-time signals is often hampered by a background signal originating mainly from macromolecules and lipids. Here, we propose and compare three novel semi-parametric approaches to handle such signals in terms of bias-variance trade-off. The performances of our methods are evaluated through extensive Monte-Carlo studies. Uncertainty caused by the background is accounted for in the Cramér-Rao lower bounds calculation. Valuable insight about quantitation precision is obtained from the correlation matrices. Quantitation with QUEST of 1H in vitro data, 1H in vivo short echo-time and 31P human brain signals at 1.5 T, as well as 1H spectroscopic imaging data of human brain at 1.5 T, is demonstrated.
Non-parametric estimation of a time-dependent predictive accuracy curve.
Saha-Chaudhuri, P; Heagerty, P J
2013-01-01
A major biomedical goal associated with evaluating a candidate biomarker or developing a predictive model score for event-time outcomes is to accurately distinguish incident cases from the controls surviving beyond time t throughout the entire study period. Extensions of standard binary classification measures like time-dependent sensitivity, specificity, and receiver operating characteristic (ROC) curves have been developed in this context (Heagerty, P. J., and others, 2000. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics 56, 337-344). We propose a direct, non-parametric method to estimate the time-dependent area under the curve (AUC), which we refer to as the weighted mean rank (WMR) estimator. The proposed estimator performs well relative to the semi-parametric AUC curve estimator of Heagerty and Zheng (2005. Survival model predictive accuracy and ROC curves. Biometrics 61, 92-105). We establish the asymptotic properties of the proposed estimator and show that the accuracy of markers can be compared very simply using the difference in the WMR statistics. Estimators of pointwise standard errors are provided.
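The incident/dynamic time-dependent AUC can be illustrated with a crude empirical estimator. This windowed Mann-Whitney construction is a stand-in for the smoothed WMR estimator of the paper, and it ignores censoring:

```python
import numpy as np

def incident_dynamic_auc(marker, event_time, t, window=0.5):
    """Incident/dynamic AUC at time t: cases are subjects with an event
    inside [t, t+window); controls are subjects still event-free beyond
    t+window. AUC is the Mann-Whitney probability that a case's marker
    exceeds a control's (ties count 1/2)."""
    cases = marker[(event_time >= t) & (event_time < t + window)]
    controls = marker[event_time >= t + window]
    if len(cases) == 0 or len(controls) == 0:
        return np.nan
    diff = cases[:, None] - controls[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(cases) * len(controls))

rng = np.random.default_rng(2)
n = 2000
marker = rng.standard_normal(n)
# Higher marker -> stochastically earlier event, so AUC should exceed 0.5
event_time = rng.exponential(scale=np.exp(-marker))
print(incident_dynamic_auc(marker, event_time, t=0.5))
```

Evaluating this at a grid of times t traces out the predictive accuracy curve whose smooth, rank-based estimation is the subject of the paper.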
Kvist, Kajsa; Gerster, Mette; Andersen, Per Kragh; Kessing, Lars Vedel
2007-12-30
For recurrent events there is evidence that misspecification of the frailty distribution can cause severe bias in estimated regression coefficients (Am. J. Epidemiol 1998; 149:404-411; Statist. Med. 2006; 25:1672-1684). In this paper we adapt a procedure for checking the gamma frailty, originally suggested for parallel data in (Biometrika 1999; 86:381-393), to recurrent events. To apply the model-checking procedure, a consistent non-parametric estimator for the marginal gap-time distributions is needed. This is in general not possible due to induced dependent censoring in the recurrent events setting; however, in (Biometrika 1999; 86:59-70) a non-parametric estimator for the joint gap-time distributions, based on the principle of inverse probability of censoring weights, is suggested. Here, we attempt to apply this estimator in the model-checking procedure; the performance of the method is investigated with simulations and applied to Danish registry data. The method is further investigated using the usual Kaplan-Meier estimator and a marginalized estimator for the marginal gap-time distributions. We conclude that the procedure only works when the recurrent event is common and when the intra-individual association between gap times is weak.
Efficient parametric analysis of the chemical master equation through model order reduction.
Waldherr, Steffen; Haasdonk, Bernard
2012-07-02
Stochastic biochemical reaction networks are commonly modelled by the chemical master equation, and can be simulated as first order linear differential equations through a finite state projection. Due to the very high state space dimension of these equations, numerical simulations are computationally expensive. This is a particular problem for analysis tasks requiring repeated simulations for different parameter values. Such tasks are computationally expensive to the point of infeasibility with the chemical master equation. In this article, we apply parametric model order reduction techniques in order to construct accurate low-dimensional parametric models of the chemical master equation. These surrogate models can be used in various parametric analysis tasks such as identifiability analysis, parameter estimation, or sensitivity analysis. As biological examples, we consider two models for gene regulation networks, a bistable switch and a network displaying stochastic oscillations. The results show that the parametric model reduction yields efficient models of stochastic biochemical reaction networks, and that these models can be useful for systems biology applications involving parametric analysis problems such as parameter exploration, optimization, estimation or sensitivity analysis.
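The finite state projection mentioned above can be made concrete for the simplest network, a birth-death (constitutive gene expression) model, where the CME becomes a linear ODE p'(t) = A p(t) on a truncated state space. Rates and truncation below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def cme_generator(k_prod, k_deg, n_max):
    """Generator matrix A of the chemical master equation p'(t) = A p(t)
    for a birth-death process (0 -> X at rate k_prod, X -> 0 at rate
    k_deg * n), truncated to states {0, ..., n_max} (finite state
    projection). Columns sum to zero, so probability is conserved."""
    A = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            A[n + 1, n] += k_prod        # birth: leave state n upward
            A[n, n] -= k_prod
        if n > 0:
            A[n - 1, n] += k_deg * n     # degradation: leave state n downward
            A[n, n] -= k_deg * n
    return A

A = cme_generator(k_prod=10.0, k_deg=1.0, n_max=60)
p0 = np.zeros(61)
p0[0] = 1.0                              # start with zero molecules
p = expm(A * 5.0) @ p0                   # distribution at t = 5
print(p.sum(), p.argmax())               # mass ~ 1, mode near k_prod/k_deg
```

For realistic networks A is huge, which is exactly why the paper projects this parametric linear system onto a low-dimensional reduced basis before sweeping parameters.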
NASA Astrophysics Data System (ADS)
Vanvinckenroye, H.; Denoël, V.
2017-10-01
A linear oscillator simultaneously subjected to stochastic forcing and parametric excitation is considered. The time required for this system to evolve from a low initial energy level to a higher energy state for the first time is a random variable. Its expectation satisfies the Pontryagin equation of the problem, which is solved with the asymptotic expansion method developed by Khasminskii. This allows closed-form expressions for the expected first passage time to be derived. A comprehensive parameter analysis of these solutions is performed. Besides identifying the important dimensionless groups governing the problem, it also highlights three important regimes, called incubation, multiplicative and additive because of their specific features. These three regimes are discussed in terms of the parameters of the problem.
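The expected first passage time analyzed here can also be estimated by brute-force Monte Carlo on a discretized oscillator. The sketch below assumes a unit-frequency oscillator with illustrative noise intensities and an energy threshold, not the paper's dimensionless groups:

```python
import numpy as np

rng = np.random.default_rng(5)

def first_passage_time(e_target, xi=0.05, sigma_f=0.2, sigma_p=0.1,
                       dt=0.01, t_max=2000.0, rng=rng):
    """Euler-Maruyama simulation of a unit-frequency oscillator under both
    additive forcing (intensity sigma_f) and multiplicative/parametric
    stiffness noise (intensity sigma_p):
        x'' + 2*xi*x' + (1 + sigma_p*w1(t)) x = sigma_f*w2(t).
    Returns the first time the mechanical energy reaches e_target."""
    x, v, t = 0.0, 0.0, 0.0
    sq = np.sqrt(dt)
    while t < t_max:
        if 0.5 * (v * v + x * x) >= e_target:
            return t
        a = -2 * xi * v - x * (1 + sigma_p * rng.standard_normal() / sq) \
            + sigma_f * rng.standard_normal() / sq
        v += a * dt
        x += v * dt
        t += dt
    return np.inf   # threshold not reached within t_max

samples = [first_passage_time(0.4) for _ in range(50)]
print(np.mean(samples))
```

Averaging such samples over many realizations approximates the expectation that the Pontryagin equation delivers in closed form.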
Parametric Studies of Square Solar Sails Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Sleight, David W.; Muheim, Danniella M.
2004-01-01
Parametric studies are performed on two generic square solar sail designs to identify parameters of interest. The studies are performed on systems-level models of full-scale solar sails, and include geometric nonlinearity and inertia relief, and use a Newton-Raphson scheme to apply sail pre-tensioning and solar pressure. Computational strategies and difficulties encountered during the analyses are also addressed. The purpose of this paper is not to compare the benefits of one sail design over the other. Instead, the results of the parametric studies may be used to identify general response trends, and areas of potential nonlinear structural interactions for future studies. The effects of sail size, sail membrane pre-stress, sail membrane thickness, and boom stiffness on the sail membrane and boom deformations, boom loads, and vibration frequencies are studied. Over the range of parameters studied, the maximum sail deflection and boom deformations are a nonlinear function of the sail properties. In general, the vibration frequencies and modes are closely spaced. For some vibration mode shapes, local deformation patterns that dominate the response are identified. These localized patterns are attributed to the presence of negative stresses in the sail membrane that are artifacts of the assumption of ignoring the effects of wrinkling in the modeling process, and are not believed to be physically meaningful. Over the range of parameters studied, several regions of potential nonlinear modal interaction are identified.
NASA Astrophysics Data System (ADS)
Cunningham, Robert K.; Waxman, Allen M.
1991-06-01
This is the first Annual Technical Summary of the MIT Lincoln Laboratory effort into the parametric study of diffusion-enhancement networks for spatiotemporal grouping in real-time artificial vision. Spatiotemporal grouping phenomena are examined in the context of static and time-varying imagery. Dynamics that exhibit static feature grouping on multiple scales as a function of time and long-range apparent motion between time-varying inputs are developed for a biologically plausible diffusion-enhancement bilayer. The architecture consists of a diffusion and a contrast-enhancement layer coupled by feedforward and feedback connections: input is provided by a separate feature extracting layer. The model is cast as an analog circuit that is realizable in VLSI, the parameters of which are selected to satisfy a psychophysical database on apparent motion. Specific topics include: neural networks, astrocyte glial networks, diffusion enhancement, long-range apparent motion, spatiotemporal grouping dynamics, and interference suppression.
Parametric Design and Mechanical Analysis of Beams based on SINOVATION
NASA Astrophysics Data System (ADS)
Xu, Z. G.; Shen, W. D.; Yang, D. Y.; Liu, W. M.
2017-07-01
In engineering practice, engineers need to carry out complicated calculations when the loads on a beam are complex. These analysis and calculation processes take a lot of time, and the results can be unreliable. Therefore, VS2005 and ADK were used, with the C++ programming language, to develop beam-design software based on the 3D CAD software SINOVATION. The software can perform the mechanical analysis and parameterized design of various types of beams and output design reports in HTML format. The efficiency and reliability of beam design are thereby improved.
Inverse synthetic aperture radar processing using parametric time-frequency estimators Phase I
Candy, J.V., LLNL
1997-12-31
This report summarizes the work performed for the Office of the Chief of Naval Research (ONR) during the period of 1 September 1997 through 31 December 1997. The primary objective of this research was to develop an alternative time-frequency approach, recursive in time, to be applied to the Inverse Synthetic Aperture Radar (ISAR) imaging problem discussed subsequently. Our short-term (Phase I) goals were to: 1. Develop an ISAR stepped-frequency waveform (SFWF) radar simulator based on a point-scatterer vehicular target model incorporating both translational and rotational motion; 2. Develop a parametric, recursive-in-time approach to the ISAR target imaging problem; 3. Apply the standard time-frequency short-time Fourier transform (STFT) estimator, initially to a synthesized data set; and 4. Initiate the development of the recursive algorithm. We achieved all of these goals during Phase I of the project and plan to complete the overall development, application and comparison of the parametric approach to other time-frequency estimators (STFT, etc.) on our synthesized vehicular data sets during the next phase of funding. It should also be noted that we developed a batch minimum-variance translational motion compensation (TMC) algorithm to estimate the radial components of target motion (see Section IV). This algorithm is easily extended to a recursive solution and will probably become part of the overall recursive processing approach to solving the ISAR imaging problem. Our goals for the continued effort are to: 1. Develop and extend a complex, recursive-in-time, time-frequency parameter estimator based on the recursive prediction error method (RPEM) using the underlying Gauss-Newton algorithms. 2. Apply the complex RPEM algorithm to synthesized ISAR data using the above simulator. 3. Compare the performance of the proposed algorithm to standard time-frequency estimators applied to the same data sets.
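Goal 3, the baseline STFT estimator, is essentially a one-liner with SciPy. The two-component test signal below is an illustrative stand-in for SFWF radar data, not the report's simulator output:

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Two-component test signal: a fixed 100 Hz tone plus a chirp whose
# instantaneous frequency sweeps from 50 Hz to 250 Hz over 2 s.
sig = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * (50 * t + 50 * t**2))

f, tt, Z = stft(sig, fs=fs, nperseg=256)
# Dominant frequency in each time slice of the spectrogram
ridge = f[np.abs(Z).argmax(axis=0)]
print(ridge[:5])
```

The report's parametric, recursive-in-time RPEM estimator is meant to sharpen exactly the kind of time-frequency picture this batch STFT produces.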
AC motor diagnostics system based on complex parametric analysis
NASA Astrophysics Data System (ADS)
Korolev, N. A.; Solovev, S. V.
2017-02-01
The article deals with the principle of evaluating the technical condition of the AC motor, a key unit in mechanical engineering, based on a comprehensive analysis of its parameters. A diagnostics and residual-life assessment system for electromechanical equipment, based on the AC motor and the algorithms of its operation, is presented. An important challenge in diagnostics remains timely fault detection and the organization of maintenance and repair; meeting it depends on the accuracy and reliability of the diagnostic system.
Time resolved imaging using non-collinear parametric down-conversion
NASA Astrophysics Data System (ADS)
Park, Jung-Rae
In this thesis I present a method for measuring the time-resolved spatial profile of a single laser pulse and its application to semiconductor devices. In the OMEGA laser system, the spatial profile of a laser beam can change as a function of time due to spontaneous effects, such as the B-integral, or imposed effects, such as smoothing by spectral dispersion. The method presented here uses a non-collinear parametric down-conversion process to sample a single laser pulse multiple times. In the non-collinear parametric down-conversion process, an infrared laser beam at 1064 nm is mixed with an intense ultraviolet beam at 351 nm to generate a green signal beam at 524 nm. Calculations have been carried out to determine the threshold power of the infrared probe beam for generating a detectable signal beam. The generated green beam is captured by a cooled optical multichannel analyzer camera and the image of the signal beam is analyzed. This temporal-spatial measurement can also be applied to dynamic image detection schemes for semiconductor devices.
Parametric estimation of pulse arrival time: a robust approach to pulse wave velocity.
Solà, Josep; Vetter, Rolf; Renevey, Philippe; Chételat, Olivier; Sartori, Claudio; Rimoldi, Stefano F
2009-07-01
Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as the pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements based on characteristic-point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that correlates highly with the Complior® characteristic point D1 (CC = 0.99), increases noise robustness, and reduces the number of heartbeats required to obtain reliable PAT measurements by a factor of five.
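Although the paper's exact TANH model is not reproduced here, the general idea of a parametric PAT estimator, fitting a smooth sigmoidal model to the noisy anacrotic upstroke and reading the arrival time off a fitted parameter, can be sketched as follows (all model names and values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def anacrotic_tanh(t, baseline, amplitude, arrival, width):
    """Hypothetical parametric model of the rising (anacrotic) edge of a
    pressure pulse; `arrival` plays the role of the pulse arrival time."""
    return baseline + amplitude * 0.5 * (1 + np.tanh((t - arrival) / width))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 500)                        # seconds
true_pat = 0.42
clean = anacrotic_tanh(t, 0.0, 1.0, true_pat, 0.03)
noisy = clean + 0.3 * rng.standard_normal(t.size)  # low-SNR ambulatory-like

popt, _ = curve_fit(anacrotic_tanh, t, noisy, p0=[0.0, 1.0, 0.5, 0.05])
print(popt[2])  # estimated arrival time
```

Because the fit pools information from the whole upstroke rather than a single characteristic point, the arrival estimate stays usable at noise levels that would defeat point-picking, which is the paper's central argument.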
Bandeen-Roche, Karen; Ning, Jing
2008-03-01
Most research on the study of associations among paired failure times has either assumed time invariance or been based on complex measures or estimators. Little has accommodated competing risks. This paper targets the conditional cause-specific hazard ratio, henceforth called the cause-specific cross ratio, a recent modification of the conditional hazard ratio designed to accommodate competing risks data. Estimation is accomplished by an intuitive, non-parametric method that localizes Kendall's tau. Time variance is accommodated through a partitioning of space into 'bins' between which the strength of association may differ. Inferential procedures are developed, small-sample performance is evaluated and the methods are applied to the investigation of familial association in dementia onset.
Parametric mapping and quantitative analysis of the human calvarium.
Voie, Arne; Dirnbacher, Maximilian; Fisher, David; Hölscher, Thilo
2014-12-01
In this paper we report how thickness and density vary over the calvarium region of a collection of human skulls. Most previous reports involved a limited number of skulls, with a limited number of measurement sites per skull, so data in the literature are sparse. We collected computer tomography (CT) scans of 51 ex vivo human calvaria, and analyzed these in silico using over 2000 measurement sites per skull. Thickness and density were calculated at these sites, for the three skull layers separately and combined, and were mapped parametrically onto the skull surfaces to examine the spatial variations per skull. These were found to be highly variable, and unique descriptors of the individual skulls. Of the three skull layers, the thickness of the inner cortical layer was found to be the most variable, while the least variable was the outer cortical density.
Parametric analysis of EMP induced overvoltages on power lines
Millard, D.P.; Meliopoulos, A.P.; Cokkinides, G.J.
1988-07-01
This paper presents parametric results of EMP induced overvoltages on overhead transmission lines. The results have been obtained with an analytical model of an overhead transmission line which accounts for (1) frequency dependent characteristics of lines, (2) EMP coupling to the line conductors, (3) EMP coupling to tower structures, and (4) the grounding structures of the transmission towers. The results indicate that shield or neutral conductors and transmission tower grounding drastically reduce the EMP induced overvoltages on transmission towers. For distribution overhead circuits, the EMP induced overvoltages may be above the basic insulation level. For transmission circuits, it is possible to select design parameters such as to assure that EMP induced overvoltages are below the insulation level of the line.
Parametric analysis of a packed bed thermal energy storage system
NASA Astrophysics Data System (ADS)
Ortega-Fernández, Iñigo; Loroño, Iñaki; Faik, Abdessamad; Uriz, Irantzu; Rodríguez-Aseguinolaza, Javier; D'Aguanno, Bruno
2017-06-01
Although the packed bed thermal energy storage concept has been introduced as a promising technology in the concentrated solar power field in recent years, its full deployment in commercial plants still presents clear room for improvement. To overcome the under-development of this storage technology, this work demonstrates the capabilities of packed bed heat storage units after a successful design and operational parametric optimization procedure. The obtained results show that a correct design of this type of facility, together with a suitable operation method, allows the storage capacity to be increased significantly, reaching an overall efficiency higher than 80%. The design guideline obtained as a result of this work could open new objectives and applications for packed bed storage technology, as it represents a cost-effective and high-performing storage alternative.
Parametric Rietveld refinement
Stinton, Graham W.; Evans, John S. O.
2007-01-01
In this paper the method of parametric Rietveld refinement is described, in which an ensemble of diffraction data collected as a function of time, temperature, pressure or any other variable are fitted to a single evolving structural model. Parametric refinement offers a number of potential benefits over independent or sequential analysis. It can lead to higher precision of refined parameters, offers the possibility of applying physically realistic models during data analysis, allows the refinement of ‘non-crystallographic’ quantities such as temperature or rate constants directly from diffraction data, and can help avoid false minima. PMID:19461841
Dynamic Analysis of Resonance: Bifurcation Characteristics of Non-linear Parametric Systems
NASA Astrophysics Data System (ADS)
Hortel, M.; Škuderová, A.; Kratochvíl, C.; Houfek, M.
The dynamic analysis is an important part of basic research of complex planetary transmission systems with split power flow. The bifurcation characteristics of the resonance courses especially for high-speed weakly and strongly non-linear parametric and in the damping time-heteronymous systems are highly sensitive to their parameters, i.e. to the quality and quantity of their bifurcation features and ambiguities. In the case of mass discretization, their analytical—numerical solution leads to complex integro-differential equations with solving kernels in the form of Green's resolventes and complex simulation models in MATLAB/Simulink. The case of one branch of the planetary transmission system with six degrees of freedom is analysed in terms of internal dynamics in this paper, i.e. the causes of the quantity and quality of resonance bifurcation curves and formation of ambiguity characteristics of relative motion in gear meshes.
NASA Astrophysics Data System (ADS)
Prasad, Narasimha Srikantaiah
parametric interaction is analytically formulated and experimentally demonstrated using Nd:MgO:LiNbO_3. The results obtained form a sound basis for subsequent analysis of parametric interaction by a pump radiation that is generated internally in the same crystal. Using laser theory and principles of optical parametric interaction, the theory of self-pumped optical parametric interaction is formulated. This encompasses the requirements of an interaction medium, laser pump generation, Q-switching, cavity analysis, and conditions for parametric interaction. Driven by an internally generated laser pump, the specific processes of optical parametric amplification, optical parametric oscillation, and frequency up-conversion are explored. In this study, novel tuning techniques are considered and spectral performance characteristics of these devices are presented. The design architectures of self-pumped OPO, OPA, and frequency up-converter devices using Nd:MgO:LiNbO_3 crystals are described. It is envisaged that self-pumped parametric devices can outperform present-day intra-cavity devices, which are bulky and expensive.
Real-time tuning of a double quantum dot using a Josephson parametric amplifier
NASA Astrophysics Data System (ADS)
Stehlik, J.; Liu, Y.-Y.; Quintana, C. M.; Eichler, C.; Hartke, T. R.; Petta, J. R.
Josephson parametric amplifiers (JPAs) have enabled advances in readout of quantum systems. Here we demonstrate JPA-assisted readout of a cavity-coupled double quantum dot (DQD). Utilizing a JPA we improve the signal-to-noise ratio (SNR) by a factor of 2000 compared to the situation with the parametric amplifier turned off. At an interdot charge transition we achieve a SNR of 76 (19 dB) with an integration time τ = 400 ns, which is limited by the linewidth of our cavity. By measuring the SNR as a function of τ we extract an equivalent charge sensitivity of 8 × 10^-5 e/√Hz. We develop a dual-gate-voltage rastering scheme that allows us to acquire a DQD charge stability diagram in just 20 ms. Such rapid data acquisition rates enable device tuning in live "video mode," where the results of parameter changes are immediately displayed. Live tuning allows the DQD confinement potential to be rapidly tuned, a capability that will become increasingly important as semiconductor spin qubits are scaled to a larger number of dots. Research is supported by the Packard Foundation, ARO Grant No. W911NF-15-1-0149, DARPA QuEST Grant No. HR0011-09-1-0007, and the NSF (Grants No. DMR-1409556 and DMR-1420541).
ERIC Educational Resources Information Center
Ferrer, Alvaro J. Arce; Wang, Lin
This study compared the classification performance among parametric discriminant analysis, nonparametric discriminant analysis, and logistic regression in a two-group classification application. Field data from an organizational survey were analyzed and bootstrapped for additional exploration. The data were observed to depart from multivariate…
Semi-parametric estimation of age-time specific infection incidence from serial prevalence data.
Nagelkerke, N; Heisterkamp, S; Borgdorff, M; Broekmans, J; Van Houwelingen, H
1999-02-15
Many infections cause lasting detectable immune responses, whose prevalence can be estimated from cross-sectional surveys. However, such surveys do not provide direct information on the incidence of infection. We address the issue of estimating age and time specific incidence from a series of prevalence surveys under the assumption that incidence changes exponentially with time, but make no assumption about the age specific incidence. We show that these assumptions lead to a proportional hazards model and estimate its parameters using semi-parametric maximum likelihood methods. The method is applied to tuberculin surveys in The Netherlands to explore age dependence of the risk of tuberculous infection in the presence of a strong secular decline in this risk.
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
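The n-factor combinatorial parameter variation mentioned above (here with n = 2, i.e. pairwise coverage) can be sketched with a simple greedy generator. The parameter names and values are made up for illustration, and this is not the tool's actual algorithm:

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedily pick test cases until every value pair of every two
    parameters appears in at least one case (a simple covering array)."""
    names = list(params)
    # All (param, value) pairs of pairs that must be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    cases = []
    for case in product(*(params[n] for n in names)):
        assign = dict(zip(names, case))
        hit = {
            ((a, assign[a]), (b, assign[b]))
            for a, b in combinations(names, 2)
        } & uncovered
        if hit:  # keep only cases that cover something new
            cases.append(assign)
            uncovered -= hit
        if not uncovered:
            break
    return cases

# Hypothetical simulation parameters.
params = {"thrust": [0.5, 1.0], "mass": [100, 200], "mode": ["A", "B"]}
suite = pairwise_cases(params)
```

Every value pair is covered with fewer cases than the full Cartesian product, which is the point of the n-factor approach: the saving grows rapidly with the number of parameters.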
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
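A minimal sketch of the idea, assuming a hypothetical power-law cost estimating relationship (CER) fitted to made-up (parameter, cost) data; real parametric cost models use calibrated CERs over many such non-cost parameters:

```python
import math

def fit_power_law_cer(points):
    """Fit cost = a * x**b by ordinary least squares in log-log space."""
    logs = [(math.log(x), math.log(c)) for x, c in points]
    n = len(logs)
    sx = sum(lx for lx, _ in logs)
    sy = sum(lc for _, lc in logs)
    sxx = sum(lx * lx for lx, _ in logs)
    sxy = sum(lx * lc for lx, lc in logs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

# Hypothetical historical data points: (mass in kg, cost in $M).
history = [(100, 10.0), (200, 17.0), (400, 29.0), (800, 50.0)]
a, b = fit_power_law_cer(history)
estimate = a * 300 ** b  # predicted cost of a new 300 kg unit
```

An exponent b below 1, as in this toy data, encodes an economy of scale: doubling the driving parameter less than doubles the estimated cost.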
Lunar lander configuration study and parametric performance analysis
NASA Astrophysics Data System (ADS)
Donahue, Benjamin B.; Fowler, C. R.
1993-06-01
Future Lunar exploration plans will call for delivery of significant amounts of cargo to provide for crew habitation, surface transportation, and scientific exploration activities. Minimization of costly surface based infrastructure is in large part directly related to the design of the cargo delivery/landing craft. This study focused on evaluating Lunar lander concepts from a logistics-oriented perspective; it outlines the approach used in the development of a preferred configuration, sets forth the benefits derived from its utilization, and describes the missions and systems considered. Results indicate that only direct-to-surface downloading of payloads provides for unassisted cargo removal operations imperative to efficient and low risk site buildup, including the emplacement of Space Station derivative surface habitat modules, immediate cargo jettison for both descent abort and emergency surface ascent essential to piloted missions carrying cargo, and short habitat egress/ingress paths necessary to productive surface work tours for crew members carrying hand held experiments, tools and other bulky articles. By accommodating cargo in a position underneath the vehicle's structural frame, the landing craft described herein eliminate altogether the necessity for dedicated surface based off-loading vehicles, the operations and maintenance associated with their operation, and the precipitous ladder climbs to and from the surface that are inherent to traditional designs. Parametric evaluations illustrate performance and mass variation with respect to mission requirements.
Lunar lander configuration study and parametric performance analysis
NASA Technical Reports Server (NTRS)
Donahue, Benjamin B.; Fowler, C. R.
1993-01-01
Future Lunar exploration plans will call for delivery of significant amounts of cargo to provide for crew habitation, surface transportation, and scientific exploration activities. Minimization of costly surface based infrastructure is in large part directly related to the design of the cargo delivery/landing craft. This study focused on evaluating Lunar lander concepts from a logistics-oriented perspective; it outlines the approach used in the development of a preferred configuration, sets forth the benefits derived from its utilization, and describes the missions and systems considered. Results indicate that only direct-to-surface downloading of payloads provides for unassisted cargo removal operations imperative to efficient and low risk site buildup, including the emplacement of Space Station derivative surface habitat modules, immediate cargo jettison for both descent abort and emergency surface ascent essential to piloted missions carrying cargo, and short habitat egress/ingress paths necessary to productive surface work tours for crew members carrying hand held experiments, tools and other bulky articles. By accommodating cargo in a position underneath the vehicle's structural frame, the landing craft described herein eliminate altogether the necessity for dedicated surface based off-loading vehicles, the operations and maintenance associated with their operation, and the precipitous ladder climbs to and from the surface that are inherent to traditional designs. Parametric evaluations illustrate performance and mass variation with respect to mission requirements.
NASA Technical Reports Server (NTRS)
Marston, C. H.; Alyea, F. N.; Bender, D. J.; Davis, L. K.; Dellinger, T. C.; Hnat, J. G.; Komito, E. H.; Peterson, C. A.; Rogers, D. A.; Roman, A. J.
1980-01-01
The performance and cost of moderate technology coal-fired open cycle MHD/steam power plant designs which can be expected to require a shorter development time and have a lower development cost than previously considered mature OCMHD/steam plants were determined. Three base cases were considered, including one with an indirectly-fired high temperature air heater (HTAH) subsystem delivering air at 2700 F, fired by a state-of-the-art atmospheric pressure gasifier, and one in which the HTAH subsystem was deleted and oxygen enrichment was used to obtain the requisite MHD combustion temperature. Coal pile to bus bar efficiencies in base case 1 ranged from 41.4% to 42.9%, and cost of electricity (COE) was the highest of the three base cases. For base case 2 the efficiency range was 42.0% to 45.6%, and COE was the lowest. For base case 3 the efficiency range was 42.9% to 44.4%, and COE was intermediate. The best parametric cases in base cases 2 and 3 are recommended for conceptual design. Eventual choice between these approaches is dependent on further evaluation of the tradeoffs among HTAH development risk, O2 plant integration, and further refinements of comparative costs.
NASA Astrophysics Data System (ADS)
Marston, C. H.; Alyea, F. N.; Bender, D. J.; Davis, L. K.; Dellinger, T. C.; Hnat, J. G.; Komito, E. H.; Peterson, C. A.; Rogers, D. A.; Roman, A. J.
1980-02-01
The performance and cost of moderate technology coal-fired open cycle MHD/steam power plant designs which can be expected to require a shorter development time and have a lower development cost than previously considered mature OCMHD/steam plants were determined. Three base cases were considered, including one with an indirectly-fired high temperature air heater (HTAH) subsystem delivering air at 2700 F, fired by a state-of-the-art atmospheric pressure gasifier, and one in which the HTAH subsystem was deleted and oxygen enrichment was used to obtain the requisite MHD combustion temperature. Coal pile to bus bar efficiencies in base case 1 ranged from 41.4% to 42.9%, and cost of electricity (COE) was the highest of the three base cases. For base case 2 the efficiency range was 42.0% to 45.6%, and COE was the lowest. For base case 3 the efficiency range was 42.9% to 44.4%, and COE was intermediate. The best parametric cases in base cases 2 and 3 are recommended for conceptual design. Eventual choice between these approaches is dependent on further evaluation of the tradeoffs among HTAH development risk, O2 plant integration, and further refinements of comparative costs.
A PI tuning rule for integrating plus dead time processes with parametric uncertainty.
Mercader, Pedro; Baños, Alfonso
2017-03-01
A novel method to tune a Proportional-Integral (PI) compensator for an integrating plus dead time (IPDT) process in the presence of interval parametric uncertainty is presented. The design is based on optimizing load disturbance rejection with constraints on the magnitudes of the sensitivity and complementary sensitivity functions, which must be satisfied for every element of a set of plants. Instead of solving this problem with a brute-force approach (gridding the uncertainty set), we prove that it can be solved by considering only two plants. This allows us to obtain a tuning rule after applying some approximations. To conclude, some examples are given to elucidate the usefulness of the proposed tuning rule.
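The abstract does not state the rule itself. As a point of comparison only, here is the well-known SIMC tuning rule (Skogestad) for a nominal IPDT process G(s) = k·exp(-θs)/s, which is a different and simpler rule than the robust, uncertainty-aware one derived in the paper:

```python
def simc_pi_ipdt(k, theta, tau_c=None):
    """SIMC PI settings for an IPDT process G(s) = k*exp(-theta*s)/s.
    tau_c is the desired closed-loop time constant; Skogestad's
    'tight control' recommendation is tau_c = theta."""
    if tau_c is None:
        tau_c = theta
    kp = 1.0 / (k * (tau_c + theta))  # proportional gain
    ti = 4.0 * (tau_c + theta)        # integral (reset) time
    return kp, ti

# Example: velocity gain k = 0.5 per second, dead time theta = 2 s.
kp, ti = simc_pi_ipdt(k=0.5, theta=2.0)  # -> kp = 0.5, ti = 16.0
```

Larger tau_c trades disturbance-rejection performance for robustness; the paper's rule instead builds robustness in directly through the interval uncertainty description.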
Network of time-multiplexed optical parametric oscillators as a coherent Ising machine
NASA Astrophysics Data System (ADS)
Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa
2014-12-01
Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems (Ising machines) capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented by the above-threshold binary phases of the OPOs, and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small NP-hard problem on a 4-OPO Ising machine, and in 1,000 runs no computational error was detected.
NASA Astrophysics Data System (ADS)
Wei, Dang; Qing, Liao; Peng-Cheng, Mao; Hong-Bing, Fu; Yu-Xiang, Weng
2016-05-01
Femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) is a versatile technique with advantages of high sensitivity, broad detection bandwidth, and intrinsic spectrum correction function. These advantages should benefit the study of coherent emission, such as measurement of lasing dynamics. In this letter, the FNOPAS was used to trace the lasing process in Rhodamine 6G (R6G) solution and organic semiconductor nano-wires. High-quality transient emission spectra and lasing dynamic traces were acquired, which demonstrates the applicability of FNOPAS in the study of lasing dynamics. Our work extends the application scope of the FNOPAS technique. Project supported by the National Natural Science Foundation of China (Grant Nos. 20925313 and 21503066), the Innovation Program of Chinese Academy of Sciences (Grant No. KJCX2-YW-W25), the Postdoctoral Project of Hebei University, China, and the Project of Science and Technology Bureau of Baoding City, China (Grant No. 15ZG029).
Theoretical Analysis of a Cascaded Continuous-Wave Optical Parametric Oscillator
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Xiao; Xu, Xiaojun; Wang, Hongyan; Jiang, Zongfu
2013-04-01
Threshold and conversion efficiency of a cascaded continuous-wave (CW) optical parametric oscillator (OPO) which can obtain CW terahertz (THz) light are analyzed by the plane wave approach. The model predicts experimental results of the first-order cascaded threshold. The theoretically predicted threshold for the backward idler parametric process agrees with the experimental data. Validation with a high-order cascaded parametric process awaits completion of experiments. At a pump wavelength of 1,030 nm and temperature of 120 °C, the threshold intensity of the forward idler parametric process was 2.2-2.4 times that of the backward process when the period length of the MgO:periodically poled lithium niobate crystal was 24-30 μm. The energy efficiency of CW THz light at a cascade order smaller than 6 is 10^-5 to 10^-4. Moreover, efficiency of N cascaded processes can be increased by a factor of N compared with that of a single parametric process, which is limited by the Manley-Rowe relationship. To our knowledge, this is the first theoretical treatment of threshold and energy efficiency of a cascaded CW OPO.
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
Augustine, C.
2013-10-01
Parametric analysis of the factors controlling the costs of sedimentary geothermal systems was carried out using a modified version of the Geothermal Electricity Technology Evaluation Model (GETEM). The sedimentary system modeled assumed production from and injection into a single sedimentary formation.
From the time series to the complex networks: The parametric natural visibility graph
NASA Astrophysics Data System (ADS)
Bezsudnov, I. V.; Snarskii, A. A.
2014-11-01
We present a modification of the natural visibility graph (NVG) algorithm used for mapping time series to complex networks (graphs): the parametric natural visibility graph (PNVG) algorithm. The PNVG consists of the NVG links that satisfy an additional constraint determined by a newly introduced continuous parameter, the view angle. Altering the view angle modifies the PNVG and its properties, such as the average node degree, the average link length, and the cluster quantity of the resulting graph. We calculated and analyzed different PNVG properties as a function of the view angle for different types of time series, including random (uncorrelated, correlated and fractal) series and cardiac rhythm time series for healthy and ill patients. Investigation of different PNVG properties shows that the view angle provides a new way to characterize structure in the time series that is invisible to the conventional version of the algorithm. It is also shown that the PNVG approach allows us to distinguish, identify and describe various time series in detail.
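A minimal sketch of the natural visibility graph, plus a simplified reading of the PNVG's extra constraint (here: keep only links whose elevation angle from the time axis meets a threshold; the paper's exact view-angle definition may differ):

```python
import math

def natural_visibility_edges(series):
    """Edges (i, j) of the natural visibility graph: i sees j if every
    intermediate sample lies strictly below the line joining them."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[i] + (series[j] - series[i])
                   * (k - i) / (j - i) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

def parametric_nvg(series, min_angle):
    """Keep only NVG links whose slope angle (radians, measured from
    the time axis) is at least min_angle, a simplified 'view angle'
    filter standing in for the PNVG's continuous parameter."""
    kept = []
    for i, j in natural_visibility_edges(series):
        angle = math.atan2(series[j] - series[i], j - i)
        if angle >= min_angle:
            kept.append((i, j))
    return kept
```

Sweeping min_angle prunes the graph continuously, so properties such as average node degree can be studied as functions of the parameter, in the spirit of the abstract.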
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.
Multimode analysis of the light emitted from a pulsed optical parametric oscillator
Nielsen, Anne E. B.; Moelmer, Klaus
2007-09-15
We present a multimode treatment of the optical parametric oscillator, which is valid for both pulsed and continuous-wave pump fields. The two-time correlation functions of the output field are derived, and we apply the theory to analyze a scheme for heralded production of nonclassical field states that may be subsequently stored in an atomic quantum memory.
A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2003-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing, when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.
Feedback interventions and driving speed: A parametric and comparative analysis
Houten, Ron Van; Nau, Paul A.
1983-01-01
Five experiments were conducted to assess the effects of several variables on the efficacy of feedback in reducing driving speed. Experiment 1 systematically varied the criterion used to define speeding, and results showed that the use of a lenient criterion (20 km/hr over the speed limit), which allowed for the posting of high percentages of drivers not speeding, was more effective in reducing speeding than the use of a stringent criterion (10 km/hr over the speed limit). In Experiment 2 an analysis revealed that posting feedback reduced speeding on a limited access highway and the effects persisted to some degree up to 6 km. Experiments 3 and 4 compared the effectiveness of an unmanned parked police vehicle (Experiment 3) and a police air patrol speeding program (Experiment 4) with the feedback sign and determined whether the presence of either of these enforcement variables could potentiate the efficacy of the sign. The results of both experiments demonstrated that although the two enforcement programs initially produced larger effects than the feedback sign, the magnitude of their effect attenuated over time. Experiment 5 compared the effectiveness of a traditional enforcement program with a warning program which included handing out a flier providing feedback on the number and types of accidents occurring on the road during the past year. This experiment demonstrated that the warning program produced a marked reduction in speeding and the traditional enforcement program did not. Furthermore, the warning program and a feedback sign together produced an even greater reduction in speeding than either alone. PMID:16795666
Stephenson, J; Chadwick, B L; Playle, R A; Treasure, E T
2010-01-01
Caries in primary teeth is an ongoing issue in children's dental health. Its quantification is affected by clustering of data within children and the concurrent risk of exfoliation of primary teeth. This analysis of caries data of 103,776 primary molar tooth surfaces from a cohort study of 2,654 British children aged 4-5 years at baseline applied multilevel competing risks survival analysis methodology to identify factors significantly associated with caries occurrence in primary tooth surfaces in the presence of the concurrent risk of exfoliation, and assessed the effect of exfoliation on caries development. Multivariate multilevel parametric survival models were applied at surface level to the analysis of the sound-carious and sound-exfoliation transitions to which primary tooth surfaces are subject. Socio-economic class, fluoridation status and surface type were found to be the strongest predictors of primary caries, with the highest rates of occurrence and lowest median survival times associated with occlusal surfaces of children from poor socio-economic class living in non-fluoridated areas. The concurrent risk of exfoliation was shown to reduce the distinction in survival experience between different types of surfaces, and between surfaces of teeth from children of different socio-economic class or fluoridation status. Clustering of data had little effect on inferences of parameter significance.
A parametric study of nonlinear seismic response analysis of transmission line structures.
Tian, Li; Wang, Yanming; Yi, Zhenhua; Qian, Hui
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled with cable elements that account for the nonlinearity of the cable, based on a real project. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time history analysis method. The results show that multicomponent seismic excitations should be considered, but the correlations among multicomponent ground motions can be neglected. The wave passage effect has a significant influence on the responses of the cables. The degree of coherency loss has little influence on the response of the cables, but the responses are affected significantly by the coherency loss effect itself. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should be considered in the seismic design of transmission line structures.
A Parametric Study of Nonlinear Seismic Response Analysis of Transmission Line Structures
Wang, Yanming; Yi, Zhenhua
2014-01-01
A parametric study of the nonlinear seismic response of transmission line structures subjected to earthquake loading is presented in this paper. The transmission lines are modeled with cable elements that account for the nonlinearity of the cable, based on a real project. Nonuniform ground motions are generated using a stochastic approach based on random vibration analysis. The effects of multicomponent ground motions, correlations among multicomponent ground motions, wave travel, coherency loss, and local site conditions on the responses of the cables are investigated using the nonlinear time history analysis method. The results show that multicomponent seismic excitations should be considered, but the correlations among multicomponent ground motions can be neglected. The wave passage effect has a significant influence on the responses of the cables. The degree of coherency loss has little influence on the response of the cables, but the responses are affected significantly by the coherency loss effect itself. The responses of the cables change little as the degree of difference in site conditions changes. The effects of multicomponent ground motions, wave passage, coherency loss, and local site conditions should be considered in the seismic design of transmission line structures. PMID:25133215
Stability in parametric resonance of axially moving viscoelastic beams with time-dependent speed
NASA Astrophysics Data System (ADS)
Chen, Li-Qun; Yang, Xiao-Dong
2005-06-01
Stability in transverse parametric vibration of axially accelerating viscoelastic beams is investigated. The governing equation is derived from Newton's second law, the Kelvin constitutive relation, and the geometrical relation. When the axial speed is a constant mean speed with small harmonic variations, the governing equation can be regarded as a continuous gyroscopic system under small periodically parametric excitations and a damping term. The method of multiple scales is applied directly to the governing equation without discretization. The stability conditions are obtained for combination and principal parametric resonance. Numerical examples are presented for beams with simple supports and fixed supports, respectively, to demonstrate the effect of viscoelasticity on the stability boundaries in both cases.
Musio, Monica; Sauleau, Erik A; Buemi, Antoine
2010-06-01
We analyse lymphoid leukemia incidence data collected between 1988 and 2002 from the cancer registry of Haut-Rhin, a region in north-east France. For each patient, sex, area of residence, date of birth and date of diagnosis are available. Incidence summaries in the registry are grouped by 3-year periods. A disproportionately large frequency of zeros in the data leads to a lack of fit for Poisson models of relative risk. The aim of our analysis was to model the spatio-temporal variations of the disease taking into account some non-standard requirements, such as count data with many zeros and space-time interactions. For this purpose, we consider a flexible zero-inflated Poisson model for semi-parametric regression which incorporates space-time interactions (modelled by means of varying coefficient model) using an extension of the methodology proposed in Fahrmeir & Osuna (2006, Structured additive regression for overdispersed and zero-inflated count data. Stoc. Models Bus. Ind., 22, 351-369). Inference is carried out from a Bayesian perspective using Markov chain Monte Carlo methods by means of the BayesX software. Our analysis of the geographical distribution of the disease and its evolution in time may be considered as a starting point for further studies.
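The zero-inflated Poisson likelihood at the core of such a model can be sketched directly. The following is a minimal maximum-likelihood fit on synthetic counts (all parameter values are illustrative, and there are no space-time or varying-coefficient terms), not the BayesX model used in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Synthetic counts: with probability pi the observation is a structural zero,
# otherwise it is Poisson(lam). Parameters are illustrative, not registry data.
pi_true, lam_true, n = 0.4, 3.0, 5000
structural = rng.random(n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, n))

def zip_nll(params, y):
    """Negative log-likelihood of the zero-inflated Poisson model."""
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))    # keep pi in (0, 1)
    lam = np.exp(log_lam)                   # keep lam positive
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))    # P(Y = 0)
    ll_pos = np.log(1.0 - pi) + poisson.logpmf(y, lam)  # P(Y = k), k > 0
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(pi_hat, lam_hat)  # should recover roughly 0.4 and 3.0
```

A plain Poisson fit to the same data would underestimate the zero frequency, which is exactly the lack of fit that motivates the zero-inflated formulation.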
Crash risk analysis for Shanghai urban expressways: A Bayesian semi-parametric modeling approach.
Yu, Rongjie; Wang, Xuesong; Yang, Kui; Abdel-Aty, Mohamed
2016-10-01
Urban expressway systems have developed rapidly in recent years in China and have become a key part of city roadway networks, carrying large traffic volumes at high travel speeds. Along with the increase in traffic volume, traffic safety has become a major issue for Chinese urban expressways due to frequent crashes and the non-recurrent congestion they cause. To unveil crash occurrence mechanisms and to support the development of Active Traffic Management (ATM) control strategies that improve traffic safety, this study developed disaggregate crash risk analysis models from loop detector traffic data and historical crash data. Bayesian random effects logistic regression models were utilized, as they can account for unobserved heterogeneity among crashes. However, previous crash risk analysis studies formulated the random effects distributions parametrically, assuming they follow normal distributions. Given the limited information known about the random effects distributions, such a subjective parametric choice may be incorrect. To construct more flexible and robust random effects that capture the unobserved heterogeneity, this study introduced Bayesian semi-parametric inference to crash risk analysis. Models with both inference techniques were developed for total crashes; the semi-parametric models provided substantially better goodness-of-fit, while the two models shared consistent coefficient estimates. Bayesian semi-parametric random effects logistic regression models were then developed for weekday peak hour crashes, weekday non-peak hour crashes, and weekend non-peak hour crashes to investigate different crash occurrence scenarios. Significant factors that affect crash risk were revealed and crash mechanisms were identified.
Toledo, Eran; Jacobs, Lawrence D; Lodato, Joseph A; DeCara, Jeanne M; Coon, Patrick; Mor-Avi, Victor; Lang, Roberto M
2006-06-01
Parametric imaging of myocardial perfusion provides useful visual information for the diagnosis of coronary artery disease (CAD). We developed a technique for automated detection of perfusion defects based on quantitative analysis of parametric perfusion images and validated it against coronary angiography. Contrast-enhanced, apical 2-, 3- and 4-chamber images were obtained at rest and with dipyridamole in 34 patients with suspected CAD. Images were analyzed to generate parametric perfusion images of the standard contrast-replenishment model parameters A, beta and A.beta. Each parametric image was divided into six segments, and mean parameter value (MPV) was calculated for each segment. Segmental MPV ratio between stress and rest was defined as a flow reserve index (FRI). Receiver operating characteristics (ROC) analysis was used in a Study group (N=17) to optimize FRI threshold and the minimal number of abnormal segments per vascular territory (LAD and non-LAD), required for automated detection of stress-induced perfusion defects. The optimized detection algorithm was then tested prospectively in the remaining 17 patients (Test group). LAD and non-LAD stenosis >70% was found in 19 and 17 patients, respectively. In the Study group, FRI threshold was: LAD=0.95 and non-LAD=0.68, minimal number of abnormal segments was four and two, correspondingly. Sensitivity, specificity and accuracy in the Test group were: 75%, 67% and 71% in the LAD, and 75%, 75% and 75% in the non-LAD territories. Automated quantitative analysis of contrast echocardiographic parametric perfusion images is feasible and may aid in the objective detection of CAD.
Haque, Md Mazharul; Washington, Simon
2014-01-01
The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q advanced driving simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free, and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21-26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Two different model specifications were also tested to account for the structured heterogeneity arising from the repeated measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best-fitting model and identified four significant variables influencing the reaction times: phone condition, driver's age, license type (provisional license holder or not), and self-reported frequency of usage of handheld phones while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open license holders. A reduction in the ability to detect traffic events in the periphery whilst distracted
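A Weibull AFT fit of this kind can be sketched with a direct maximum-likelihood implementation. The data below are simulated and the coefficients illustrative (and there is no gamma frailty term, unlike the study's preferred specification):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated reaction times (s) under a Weibull AFT model: the "distracted"
# covariate multiplies the time scale by exp(beta1); values are illustrative.
n, k_true, b0_true, b1_true = 4000, 2.5, np.log(1.0), np.log(1.4)
x = rng.integers(0, 2, n)                  # 0 = baseline, 1 = phone conversation
scale = np.exp(b0_true + b1_true * x)
t = scale * rng.weibull(k_true, n)

def weibull_aft_nll(params, t, x):
    """Negative log-likelihood; covariates act multiplicatively on the time scale."""
    b0, b1, log_k = params
    k = np.exp(log_k)                      # Weibull shape, kept positive
    lam = np.exp(b0 + b1 * x)              # Weibull scale per observation
    z = t / lam
    return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k)

res = minimize(weibull_aft_nll, x0=[0.0, 0.0, 0.0], args=(t, x),
               method="Nelder-Mead")
b0, b1, k = res.x[0], res.x[1], np.exp(res.x[2])
print(np.exp(b1))  # acceleration factor, ~1.4 for this simulation
```

An acceleration factor of 1.4 corresponds to the roughly 40%-longer distracted reaction times reported in the abstract.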
Holman, D J; Jones, R E
1998-02-01
We present a form of parametric survival analysis that incorporates exact, interval-censored, and right-censored times to deciduous tooth emergence. The method is an extension of common cross-sectional procedures such as logit and probit analysis, so that data arising from mixed longitudinal and cross-sectional studies can be properly combined. We extended the method to incorporate and estimate a proportion of agenic teeth. While we concentrate on deciduous tooth emergence, the method is relevant to studies of permanent tooth emergence and other developmental events. Deciduous tooth emergence data were analyzed from four longitudinal studies. The samples are 1,271 rural Guatemalan children examined every three months up to age two and every six months thereafter as part of the INCAP study; 397 rural Bangladeshi children examined monthly to age one and quarterly thereafter as part of the Meheran Growth and Development Study; 468 rural Indonesian children examined monthly as part of the Ngaglik study; and 114 urban Japanese children examined monthly in studies from 1910 and 1920. Although all four studies were longitudinal, many observations from the Guatemala and Bangladesh studies were effectively cross-sectionally observed. Three different parametric forms were used to model the eruption process: a normal distribution, a lognormal distribution, and a lognormal distribution with age shifted to shortly after conception. All three distributions produced reliable estimates of central tendencies, but the shifted lognormal distribution produced the best overall estimates of shape (variance) parameters. Estimates of emergence were compared to other studies that used similar methods. Japanese children showed relatively fast emergence times for all teeth. Bangladeshi and Javanese children showed emergence times that were slower than are found in most previous studies. Estimates of agenesis were not significantly different from zero for most teeth. One or two central
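The interval-censored likelihood underlying such an analysis can be sketched as follows, here for the lognormal case only, on simulated exam-schedule data (ages and parameters are invented, not the study samples, and the agenesis proportion is omitted):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated tooth-emergence ages (months), lognormal, observed only as the
# interval between quarterly exams, as in mixed longitudinal designs.
mu_true, sigma_true, n = np.log(9.0), 0.25, 3000
age = np.exp(rng.normal(mu_true, sigma_true, n))
left = np.floor(age / 3.0) * 3.0        # last exam before emergence
right = left + 3.0                      # first exam showing the tooth

def interval_nll(params, left, right):
    """Each interval contributes log[F(right) - F(left)] under lognormal F."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    cdf = lambda t: norm.cdf((np.log(np.maximum(t, 1e-12)) - mu) / sigma)
    p = cdf(right) - cdf(left)
    return -np.sum(np.log(np.maximum(p, 1e-300)))

res = minimize(interval_nll, x0=[np.log(6.0), np.log(0.5)],
               args=(left, right), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(np.exp(mu_hat), sigma_hat)  # median ~9 months, sigma ~0.25
```

Exact and right-censored observations would add log-pdf and log-survival terms, respectively, to the same objective, which is how the method combines longitudinal and cross-sectional data.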
Parametric analysis of synthetic aperture radar data for the study of forest stand characteristics
NASA Technical Reports Server (NTRS)
Wu, Shih-Tseng
1988-01-01
A parametric analysis of a Gulf Coast forest stand was performed using multipolarization, multipath airborne SAR data, and forest plot properties. Allometric equations were used to compute the biomass and basal area for the test plots. A multiple regression analysis with stepwise selection of independent variables was performed. It is found that forest stand characteristics such as biomass, basal area, and average tree height are correlated with SAR data.
Time-multiplexed amplification in a hybrid-less and coil-less Josephson parametric converter
NASA Astrophysics Data System (ADS)
Abdo, Baleegh; Chavez-Garcia, Jose M.; Brink, Markus; Keefe, George; Chow, Jerry M.
2017-02-01
Josephson parametric converters (JPCs) are superconducting devices capable of performing nondegenerate, three-wave mixing in the microwave domain without losses. One drawback limiting their use in scalable quantum architectures is the large footprint of the auxiliary circuit needed for their operation, in particular, the use of off-chip, bulky, broadband hybrids and magnetic coils. Here, we realize a JPC that eliminates the need for these bulky components. The pump drive and flux bias are applied in the Hybrid-Less, Coil-Less (HLCL) device through an on-chip, lossless, three-port power divider and an on-chip flux line, respectively. We show that the HLCL design considerably simplifies the circuit and reduces the footprint of the device while maintaining a comparable performance to state-of-the-art JPCs. Furthermore, we exploit the tunable bandwidth property of the JPC and the added capability of applying alternating currents to the flux line in order to switch the resonance frequencies of the device, hence demonstrating time-multiplexed amplification of microwave tones that are separated by more than the dynamical bandwidth of the amplifier. Such a measurement technique can potentially serve to perform a time-multiplexed, high-fidelity readout of superconducting qubits.
Robust stability analysis of linear systems with parametric uncertainty
NASA Astrophysics Data System (ADS)
Zhai, Ding; Zhang, Qing-Ling; Liu, Guo-Yi
2012-09-01
This article is concerned with the problem of robust stability analysis of linear systems with uncertain parameters. By constructing an equivalent system with positive uncertain parameters and using the properties of these parameters, a new stability analysis condition is derived. Due to making use of the properties of uncertain parameters, the new proposed method has potential to give less conservative results than the existing approaches. A numerical example is given to illustrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Zhang, Lin; Zhang, Weiping
2016-10-01
A variety of dynamics in nature and society can be approximately treated as a driven and damped parametric oscillator. An intensive investigation of this time-dependent model from an algebraic point of view provides a consistent method to resolve the classical dynamics and the quantum evolution, in order to understand time-dependent phenomena that occur not only at the macroscopic classical scale, as synchronized behaviors, but also at the microscopic quantum scale, as coherent state evolution. By using a Floquet U-transformation on a general time-dependent quadratic Hamiltonian, we exactly solve the dynamic behaviors of a driven and damped parametric oscillator, obtaining optimal solutions by means of the invariant parameters Ks combined with the Lewis-Riesenfeld invariant method. This approach can discriminate the external dynamics from the internal evolution of a wave packet by producing independent parametric equations that dramatically facilitate parametric control of the quantum state evolution in a dissipative system. To show the advantages of this method, several time-dependent models proposed in the quantum control field are analyzed in detail.
Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory J.
2014-01-01
PURPOSE: The identification of tumor pathologic characteristics is an important part of breast cancer diagnosis, prognosis, and treatment planning but currently requires biopsy as its standard. Here, we investigated a noninvasive quantitative ultrasound method for the characterization of breast tumors in terms of their histologic grade, which can be used with clinical diagnostic ultrasound data. METHODS: Tumors of 57 locally advanced breast cancer patients were analyzed as part of this study. Seven quantitative ultrasound parameters were determined from each tumor region from the radiofrequency data, including mid-band fit, spectral slope, 0-MHz intercept, scatterer spacing, attenuation coefficient estimate, average scatterer diameter, and average acoustic concentration. Parametric maps were generated corresponding to the region of interest, from which four textural features, including contrast, energy, homogeneity, and correlation, were determined as further tumor characterization parameters. Data were examined on the basis of tumor subtypes based on histologic grade (grade I versus grade II to III). RESULTS: Linear discriminant analysis of the means of the parametric maps resulted in classification accuracy of 79%. On the other hand, the linear combination of the texture features of the parametric maps resulted in classification accuracy of 82%. Finally, when both the means and textures of the parametric maps were combined, the best classification accuracy was obtained (86%). CONCLUSIONS: Textural characteristics of quantitative ultrasound spectral parametric maps provided discriminant information about different types of breast tumors. The use of texture features significantly improved the results of ultrasonic tumor characterization compared to conventional mean values. Thus, this study suggests that texture-based quantitative ultrasound analysis of in vivo breast tumors can provide complementary diagnostic information about tumor histologic characteristics
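Threshold optimization of this kind reduces to a small ROC sweep. The sketch below maximizes Youden's J on simulated flow-reserve scores (distributions and cutoffs are invented, not the study's FRI data):

```python
import numpy as np

def roc_best_threshold(scores, labels):
    """Sweep candidate thresholds (score <= thr flags 'abnormal') and return
    the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    pos, neg = labels == 1, labels == 0
    best_thr, best_j = scores[0], -1.0
    for thr in np.unique(scores):
        flagged = scores <= thr            # low flow-reserve index => abnormal
        sens = np.mean(flagged[pos])
        spec = np.mean(~flagged[neg])
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_thr = j, thr
    return best_thr, best_j

# Illustrative FRI-like scores: diseased territories have lower flow reserve.
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(0.7, 0.1, 200),    # label 1: stenosed
                         rng.normal(1.0, 0.1, 200)])   # label 0: normal
labels = np.concatenate([np.ones(200), np.zeros(200)]).astype(int)
thr, j = roc_best_threshold(scores, labels)
print(thr, j)  # threshold near the midpoint of the two means
```

The study additionally tunes a second operating parameter (the minimal number of abnormal segments per territory), which would extend this one-dimensional sweep to a small grid search.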
Lanza, L G; Stagi, L
2012-01-01
The analysis of counting and catching errors of both catching and non-catching types of rain intensity gauges was recently possible over a wide variety of measuring principles and instrument design solutions, based on the work performed during the recent Field Intercomparison of Rainfall Intensity Gauges promoted by World Meteorological Organization (WMO). The analysis reported here concerns the assessment of accuracy and precision of various types of instruments based on extensive calibration tests performed in the laboratory during the first phase of this WMO Intercomparison. The non-parametric analysis of relative errors allowed us to conclude that the accuracy of the investigated RI gauges is generally high, after assuming that it should be at least contained within the limits set forth by WMO in this respect. The measuring principle exploited by the instrument is generally not very decisive in obtaining such good results in the laboratory. Rather, the attention paid by the manufacturer to suitably accounting and correcting for systematic errors and time-constant related effects was demonstrated to be influential. The analysis of precision showed that the observed frequency distribution of relative errors around their mean value is not indicative of an underlying Gaussian population, being much more peaked in most cases than can be expected from samples extracted from a Gaussian distribution. The analysis of variance (one-way ANOVA), assuming the instrument model as the only potentially affecting factor, does not confirm the hypothesis of a single common underlying distribution for all instruments. Pair-wise multiple comparison analysis revealed cases in which significant differences could be observed.
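The one-way ANOVA step can be sketched directly. The gauge error samples below are simulated normals with different means (illustrative only, not the WMO data), so the single-factor test rejects a common underlying distribution:

```python
import numpy as np
from scipy.stats import f_oneway

# One-way ANOVA with instrument model as the single factor, applied to
# simulated relative-error samples (%) for three hypothetical gauge models.
rng = np.random.default_rng(5)
gauge_a = rng.normal(0.0, 1.0, 60)
gauge_b = rng.normal(0.8, 1.0, 60)
gauge_c = rng.normal(-0.5, 1.0, 60)
f_stat, p_value = f_oneway(gauge_a, gauge_b, gauge_c)
print(p_value < 0.05)  # True: no single common error distribution
```

Since the abstract also notes that the observed error distributions are more peaked than Gaussian, a rank-based alternative such as the Kruskal-Wallis test (scipy.stats.kruskal) would be the natural non-parametric companion to this check.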
NASA Astrophysics Data System (ADS)
Giron Palomares, Jose Benjamin; Hsieh, Sheng-Jen
2014-05-01
A methodology based on active infrared thermography to study and characterize hidden solder joint shapes on a multi-cover PCB assembly was investigated. A numerical model was developed to simulate the active thermography methodology and was shown to determine the grand average cooling rates with maximum errors of 8.85% (one cover) and 13.36% (two covers). A parametric analysis was performed by varying the number of covers, the heat flux provided, and the amount of heating time. Grand average cooling rate distances among contiguous solder joint shapes, as well as solder joint discriminability, were found to be directly proportional to heat flux and inversely proportional to the number of covers and heating time. Finally, a mathematical model was developed to determine the total amount of energy needed to discriminate among hidden solder joints with "good" discriminability for one and two covers, and "regular" discriminability for up to five covers. The mathematical model predicted the total energy required to achieve "good" discriminability for one cover within 10% error with respect to the experimental active thermography model.
Multilevel Latent Class Analysis: Parametric and Nonparametric Models
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2014-01-01
Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…
Xu, J L; Prorok, P C
1995-12-30
The goal of screening programmes for cancer is early detection and treatment, with a consequent reduction in mortality from the disease. Screening programmes need to assess the true benefit of screening, that is, the length of extension of survival beyond the advancement of diagnosis (the lead-time). This paper presents a non-parametric method to estimate the survival function of the post-lead-time survival (or extra survival time) of screen-detected cancer cases based on the observed total life time, namely, the sum of the lead-time and the extra survival time. We apply the method to the well-known data set of the HIP (Health Insurance Plan of Greater New York) breast cancer screening study. We make comparisons with the survival of other groups of cancer cases not detected by screening, such as interval cases, cases among individuals who refused screening, and randomized control cases. Compared with Walter and Stitt's model, in which parametric assumptions are made for the extra survival time, our non-parametric method provides a better fit to the HIP data in the sense that our estimator for the total survival time has a smaller sum of squared residuals.
Parametric dynamic analysis of a superconducting bearing system
NASA Astrophysics Data System (ADS)
Cansiz, A.; Hasar, U. C.; Gundogdu, Ö.; Ates Çam, B.
2009-03-01
The dynamics of a disk-shaped permanent-magnet rotor levitated over a high-temperature superconductor is studied. The interaction between the rotor magnet and the superconductor is modelled by assuming the magnet to be a magnetic dipole and the superconductor as a diamagnetic material. In the magneto-mechanical analysis of the superconductor part, the frozen image concept is combined with the diamagnetic image and the damping in the system was neglected. The interaction potential of the system is the combination of magnetic and gravitational potential. From the dynamical analysis, the equations of motion of the permanent magnet are stated as a function of lateral, vertical and tilt directions. The vibration behaviour of the permanent magnet is analyzed with a numerical calculation obtained by the non-dimensionalized differential equations for small initial impulses.
Bifurcation analysis of parametrically excited bipolar disorder model
NASA Astrophysics Data System (ADS)
Nana, Laurent
2009-02-01
Bipolar II disorder is characterized by alternating hypomanic and major depressive episode. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administrated to the patient are modeled via a forcing function that is capable of stabilizing the mood variations and of varying their amplitude. We analyze analytically, using perturbation method, the amplitude and stability of limit cycles and check this analysis with numerical simulations.
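A minimal numerical version of such a model can be integrated directly. In the sketch below the medication forcing is replaced by a simple cubic damping term (an assumption for illustration, not the paper's forcing function), which saturates the negatively damped oscillator onto a limit cycle of van der Pol type:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Negatively damped oscillator x'' - 2*gamma*x' + w0^2*x = -b*x^2*x'.
# Without the nonlinear term the mood amplitude grows without bound; with it,
# weakly nonlinear theory predicts a limit cycle of amplitude sqrt(8*gamma/b).
gamma, w0, b = 0.05, 1.0, 0.1

def rhs(t, y):
    x, v = y
    return [v, 2.0 * gamma * v - w0**2 * x - b * x**2 * v]

sol = solve_ivp(rhs, (0.0, 400.0), [0.1, 0.0], max_step=0.05)
late = sol.y[0][sol.t > 300.0]          # discard the growth transient
amp = 0.5 * (late.max() - late.min())
print(amp)  # ~ sqrt(8 * 0.05 / 0.1) = 2.0
```

The perturbation analysis in the paper makes the analogous amplitude and stability statements analytically; this integration is only a numerical cross-check of that kind of prediction.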
Multi-parametric imaging of cell heterogeneity in apoptosis analysis.
Vorobjev, Ivan A; Barteneva, Natasha S
2017-01-01
Apoptosis is a multistep process of programmed cell death where different morphological and molecular events occur simultaneously and/or consequently. Recent progress in programmed cell death analysis uncovered large heterogeneity in response of individual cells to the apoptotic stimuli. Analysis of the complex and dynamic process of apoptosis requires a capacity to quantitate multiparametric data obtained from multicolor labeling and/or fluorescent reporters of live cells in conjunction with morphological analysis. Modern methods of multiparametric apoptosis study include but are not limited to fluorescent microscopy, flow cytometry and imaging flow cytometry. In the current review we discuss the image-based evaluation of apoptosis on the single-cell and population level by imaging flow cytometry in parallel with other techniques. The advantage of imaging flow cytometry is its ability to interrogate multiparametric morphometric and fluorescence quantitative data in statistically robust manner. Here we describe the current status and future perspectives of this emerging field, as well as some challenges and limitations. We also highlight a number of assays and multicolor labeling probes, utilizing both microscopy and different variants of imaging cytometry, including commonly based assays and novel developments in the field. Copyright © 2016. Published by Elsevier Inc.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area mass and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
Parametric analysis of a shape memory alloy actuated arm
NASA Astrophysics Data System (ADS)
Wright, Cody; Bilgen, Onur
2016-04-01
Using a pair of antagonistic Shape Memory Alloy (SMA) wires, it may be possible to produce a mechanism that replicates human musculoskeletal movement. The movement of interest is the articulation of the elbow joint actuated by the biceps brachii muscle. In an effort to understand the bio-mechanics of the arm, a single-degree-of-freedom crank-slider mechanism is used to model the movement of the arm induced by the biceps brachii muscle. First, a purely kinematic analysis is performed on a rigid-body crank-slider. Force analysis is also performed, modeling the muscle as a simple linear spring. Torque, rocking angle, and energy are calculated for a range of crank-slider geometries. The SMA wire characteristics are experimentally determined for the detwinned martensite and full austenite phases. Using the experimental data, an idealized actuator characteristic curve is produced for the SMA wire. Kinematic and force analyses are performed on the nonlinear wire characteristic curve and a linearized wire curve; both cases are applied to the crank-slider mechanism. Performance metrics for both cases are compared, followed by discussion.
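The spring-muscle torque calculation can be sketched with elementary kinematics. The geometry below is a generic two-link elbow model with invented dimensions, not the paper's crank-slider parameters:

```python
import numpy as np

# Hedged sketch: the "muscle" is a linear spring from a point a above the
# elbow on the upper arm to a point b along the forearm; theta is the elbow
# flexion angle. All dimensions and the stiffness are illustrative.
a, b = 0.30, 0.05          # attachment distances from the joint (m)
k, L0 = 2000.0, 0.28       # spring stiffness (N/m) and rest length (m)

def muscle_torque(theta):
    """Spring tension times its moment arm about the elbow joint."""
    L = np.sqrt(a**2 + b**2 - 2.0 * a * b * np.cos(theta))  # muscle length
    tension = k * (L - L0)                   # positive when stretched
    moment_arm = a * b * np.sin(theta) / L   # perpendicular distance to joint
    return tension * moment_arm

theta = np.linspace(np.radians(10.0), np.radians(150.0), 200)
tau = muscle_torque(theta)
print(theta[np.argmax(tau)], tau.max())  # angle of peak flexion torque
```

Sweeping a, b, and k in a loop over this function is the shape of the parametric study the abstract describes, before the linear spring is replaced by the measured SMA characteristic curve.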
Parametric sensitivity analysis of avian pancreatic polypeptide (APP).
Zhang, H; Wong, C F; Thacher, T; Rabitz, H
1995-10-01
Computer simulations utilizing a classical force field have been widely used to study biomolecular properties. It is important to identify the key force field parameters or structural groups controlling the molecular properties. In the present paper the sensitivity analysis method is applied to study how various partial charges and solvation parameters affect the equilibrium structure and free energy of avian pancreatic polypeptide (APP). The general shape of APP is characterized by its three principal moments of inertia. A molecular dynamics simulation of APP was carried out with the OPLS/Amber force field and a continuum model of solvation energy. The analysis pinpoints the parameters which have the largest (or smallest) impact on the protein equilibrium structure (i.e., the moments of inertia) or free energy. A display of the protein with its atoms colored according to their sensitivities illustrates the patterns of the interactions responsible for the protein stability. The results suggest that the electrostatic interactions play a more dominant role in protein stability than the part of the solvation effect modeled by the atomic solvation parameters.
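The shape descriptor used above, the three principal moments of inertia, is computed from coordinates and masses as the eigenvalues of the inertia tensor. A minimal sketch with a toy four-point geometry (not APP coordinates):

```python
import numpy as np

def principal_moments(coords, masses):
    """Eigenvalues of the inertia tensor about the center of mass: the three
    principal moments characterizing the overall shape of a structure."""
    r = coords - np.average(coords, axis=0, weights=masses)
    I = np.zeros((3, 3))
    for (x, y, z), m in zip(r, masses):
        I += m * np.array([[y*y + z*z, -x*y,       -x*z],
                           [-x*y,      x*x + z*z,  -y*z],
                           [-x*z,      -y*z,       x*x + y*y]])
    return np.sort(np.linalg.eigvalsh(I))

# Toy check: four unit masses at the corners of a 2 x 1 rectangle in the
# xy-plane give moments (1, 4, 5) about the principal axes.
coords = np.array([[1.0, 0.5, 0.0], [1.0, -0.5, 0.0],
                   [-1.0, 0.5, 0.0], [-1.0, -0.5, 0.0]])
masses = np.ones(4)
print(principal_moments(coords, masses))  # [1. 4. 5.]
```

In a sensitivity analysis of the kind described, derivatives of these eigenvalues with respect to force field parameters would be estimated along the molecular dynamics trajectory.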
A parametric approach to hover balance analysis of two STOVL fighter concepts
NASA Technical Reports Server (NTRS)
Samuels, Jeffrey J.; Hahn, Andrew S.
1986-01-01
Successful development of an aircraft with vertical landing capability must address the critical problem of balancing the aircraft for hover. In this paper a parametric method for balancing short takeoff and vertical landing (STOVL) aircraft in hover is described and applied to the analysis of two conceptual STOVL fighters. One uses a remote augmented lift system and the other a thrust-vectoring hybrid tandem-fan engine.
NASA Astrophysics Data System (ADS)
Shin, Kyung-Hun; Park, Hyung-Il; Cho, Han-Wook; Choi, Jang-Young
2017-05-01
This paper presents the torque calculation and parametric analysis of a coaxial magnetic gear (CMG). We obtained analytical magnetic field solutions produced by permanent magnets based on a magnetic vector potential. Then, the analytical solutions for magnetic torque were obtained. All analytical results were extensively validated with nonlinear two-dimensional finite element analysis. Finally, using the derived analytical magnetic torque solutions, we carried out parametric analysis to determine the influence of the design parameters on the CMG's behavior.
Parametric analysis of thermal preference following sleep deprivation in the rat.
Harvey, Mark T; Kline, Robert H; May, Michael E; Roberts, A Celeste; Valdovinos, Maria G; Wiley, Ronald G; Kennedy, Craig H
2010-11-19
A thermal preference task was used to assess the effects of sleep deprivation on nociceptive behavior using hot and cool stimuli. The thermal preference apparatus allowed male rats to move freely from a hot thermal plate (44.7°C) to an adjacent plate at neutral (33.5°C) or cold temperatures (1.3-11°C). Investigators recorded occupancy on the colder side, frequency of movements between the 2 compartments, and first escape latency from the cold side. Parametric analysis of thermal preference indicated that behavioral allocation was related to temperature ranges previously associated with activation of thermal nociceptors. A 50% occupancy rate was determined from a stimulus-response function identifying 1.3°C vs. 44.7°C as optimal temperatures. This temperature combination was then used to test the effects of sleep deprivation for 48h using the pedestal-over-water method on response allocation to the 2 temperature zones. Sleep deprivation decreased time spent on the cooled plate. Cumulative occupancy indicated differential effects for sleep deprivation with the rats preferring to remain on the hot side vs. the cold side, suggesting that sleep deprivation increased the nociceptive properties of the cold stimulus.
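The 50% occupancy point of such a stimulus-response function can be estimated with a logistic fit. The occupancy values below are invented for illustration, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic stimulus-response curve: occupancy of the cooled plate rises as
# the cold-side temperature is made less aversive (warmer).
def logistic(t, t50, slope):
    return 1.0 / (1.0 + np.exp((t50 - t) / slope))

cold_temp = np.array([1.3, 3.0, 5.0, 7.0, 9.0, 11.0])       # deg C
occupancy = np.array([0.10, 0.22, 0.45, 0.62, 0.80, 0.90])  # illustrative
(t50, slope), _ = curve_fit(logistic, cold_temp, occupancy, p0=[6.0, 2.0])
print(t50)  # cold-plate temperature giving 50% occupancy
```

The fitted t50 plays the role of the 50% occupancy rate the abstract derives from the stimulus-response function before testing the sleep-deprivation manipulation.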
Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics
NASA Astrophysics Data System (ADS)
Tsourtis, Anastasios; Pantazis, Yannis; Katsoulakis, Markos A.; Harmandaris, Vagelis
2015-07-01
In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous time and continuous space Markov processes represented by stochastic differential equations. Particularly, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method to two different molecular stochastic systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with parameter sensitivities on three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
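For a simple special case the pathwise Fisher information is an ergodic average that can be checked in a few lines. The sketch below uses an Ornstein-Uhlenbeck (overdamped Langevin) model, far simpler than the molecular systems in the paper, where the stationary answer is known in closed form:

```python
import numpy as np

# Overdamped Langevin model dX = -theta*X dt + sigma dW.  The pathwise Fisher
# information for theta is the ergodic average <(dF/dtheta)^2>/sigma^2 with
# drift F(x) = -theta*x, i.e. <x^2>/sigma^2 = 1/(2*theta) at stationarity.
rng = np.random.default_rng(4)
theta, sigma, dt, nsteps = 1.0, 1.0, 1e-2, 800_000

noise = sigma * np.sqrt(dt) * rng.standard_normal(nsteps)
x, acc = 0.0, 0.0
for w in noise:                      # Euler-Maruyama integration
    x += -theta * x * dt + w
    acc += x * x                     # (dF/dtheta)^2 = x^2 for this drift
fim = acc / nsteps / sigma**2
print(fim)  # ~ 1/(2*theta) = 0.5
```

This illustrates the tractability claim in the abstract: the information metric is a time average over a single trajectory, needing no perturbed reruns of the simulation.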
Parametric analysis of thermal stratification during the Monju turbine trip test
Sofu, T.
2012-07-01
CFD-based simulation techniques are evaluated using a simplified symmetric Monju model to study multi-dimensional mixing and heat transfer in the upper plenum during a turbine trip test. When the test starts and core outlet temperatures drop due to reactor shutdown, the cooler sodium is trapped near the bottom of the vessel, and the hotter (less dense) primary sodium at the higher elevations stays largely stagnant for an extended period of time, inhibiting natural circulation. However, the secondary flow through a set of holes on the inner barrel bypasses the thermally stratified region as a shorter path to the intermediate heat exchanger and improves the natural circulation flow rate to cool the core. The calculations with strict adherence to benchmark specifications predict a much shorter duration for thermal stratification in the upper plenum than the experimental data indicates. In this paper, the results of a parametric analysis are presented to address this discrepancy. Specifically, the role of the holes on the inner barrel is reassessed in terms of their ability to provide larger by-pass flow. Assuming inner barrel holes with rounded edges produces results more consistent with the experiments. (authors)
Parametric analysis of performance and design characteristics for advanced earth-to-orbit shuttles
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.; Strack, W. C.; Padrutt, J. A.
1972-01-01
Performance, trajectory, and design characteristics are presented for (1) a single-stage shuttle with a single advanced rocket engine, (2) a single-stage shuttle with an initial parallel chemical engine and advanced engine burn followed by an advanced engine sustainer burn, (3) a single-stage shuttle with an initial chemical engine burn followed by an advanced engine burn, and (4) a two-stage shuttle with a chemical propulsion booster stage and an advanced propulsion upper stage. The ascent trajectory profile includes a brief initial vertical rise; zero-lift flight through the sensible atmosphere; variational steering into an 83-kilometer by 185-kilometer intermediate orbit; and a fixed, 460-meter per second allowance for subsequent maneuvers. Results are given in terms of burnout mass fractions (including structure and payload), trajectory profiles, propellant loadings, and burn times. These results are generated with a trajectory analysis that includes a parametric variation of the specific impulse from 800 to 3000 seconds and the specific engine weight from 0 to 1.0.
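The report's parametric sweep over specific impulse (800-3000 s) and specific engine weight (0-1.0) can be sketched, in a much simpler form, with the Tsiolkovsky rocket equation. The delta-v, thrust-to-weight ratio, and engine-weight values below are illustrative assumptions, not the report's trajectory analysis, which integrates an actual ascent profile.

```python
import numpy as np

g0 = 9.81          # standard gravity, m/s^2
dv = 9000.0        # assumed effective delta-v to orbit, m/s
tw = 1.3           # assumed initial thrust-to-weight ratio

isp = np.linspace(800.0, 3000.0, 12)    # specific impulse sweep, s
alpha = np.array([0.0, 0.05, 0.10])     # toy engine weight per unit thrust

# Tsiolkovsky burnout mass fraction (structure + payload + engine)
burnout = np.exp(-dv / (g0 * isp))

# subtract the engine mass fraction (engine weight = alpha * thrust)
net = burnout[None, :] - alpha[:, None] * tw
```

Even this crude sketch reproduces the qualitative trade studied in the report: burnout fraction rises with specific impulse, while heavier engines eat into the structure-plus-payload margin.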
Parametric analysis of the end face engagement worm gear
NASA Astrophysics Data System (ADS)
Deng, Xingqiao; Wang, Jueling; Wang, Jinge; Chen, Shouan; Yang, Jie
2015-11-01
A novel type of worm drive, the so-called end face engagement worm gear (EFEWD), is presented to minimize or overcome gear backlash. Different factors, including the three different types, contact curves, tooth profile, lubrication angle and the induced normal curvature, are taken into account to investigate the meshing characteristics and create the profile of this worm drive through mathematical models and theoretical analysis. The tooth of the worm wheel is very specific, with a sine-shaped tooth located at the alveolus of the worm, and the tooth profile of the worm is generated by the meshing movement of the worm wheel with the sine-shaped tooth. Because only the end face of the worm (with three typical meshing types) is adapted to meshing, an extraordinary manufacturing method is used to generate the profile of the end face engagement worm. The research results indicate that the bearing contacts of the generated conjugate hourglass worm gear set are line contacts, with the advantages of no backlash, high precision and high operating efficiency over other gears and gear systems; in addition, the end face engagement worm gear drive may improve bearing contact, reduce the level of transmission errors and lessen the sensitivity to errors of alignment. The end face engagement worm can also be easily made, with superior meshing and lubrication performance compared with conventional techniques. In particular, by using the end face for meshing, the meshing and lubrication performance of the end face engagement worm gear can be increased by over 10% and 7%, respectively. This investigation is expected to provide new insight into the design of future no-backlash worm drives for industry.
Thermal hydraulic limits analysis using statistical propagation of parametric uncertainties
Chiang, K. Y.; Hu, L. W.; Forget, B.
2012-07-01
The MIT Research Reactor (MITR) is evaluating the conversion from highly enriched uranium (HEU) to low enrichment uranium (LEU) fuel. In addition to the fuel element re-design, a reactor power upgrade from 6 MW to 7 MW is proposed in order to maintain the same reactor performance as the HEU core. The previous approach to analyzing the impact of engineering uncertainties on thermal hydraulic limits, via engineering hot channel factors (EHCFs), was unable to explicitly quantify the uncertainty and confidence level in reactor parameters. The objective of this study is to develop a methodology for MITR thermal hydraulic limits analysis by statistically combining engineering uncertainties, with the aim of eliminating unnecessary conservatism inherent in traditional analyses. This method was employed to analyze the Limiting Safety System Settings (LSSS) for the MITR, defined by avoidance of the onset of nucleate boiling (ONB). Key parameters, such as coolant channel tolerances and heat transfer coefficients, were treated as normal distributions using Oracle Crystal Ball to calculate ONB. The LSSS power is determined with a 99.7% confidence level. The LSSS power calculated using this new methodology is 9.1 MW, based on a core outlet coolant temperature of 60°C and a primary coolant flow rate of 1800 gpm, compared to 8.3 MW obtained from the analytical method using the EHCFs with the same operating conditions. The same methodology was also used to calculate the safety limit (SL) for the MITR, conservatively determined using onset of flow instability (OFI) as the criterion, to verify that adequate safety margin exists between LSSS and SL. The calculated SL is 10.6 MW, which is 1.5 MW higher than the LSSS. (authors)
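The statistical propagation idea above, sampling uncertain parameters from normal distributions and reading the limit off the tail of the resulting distribution, can be sketched with a plain Monte Carlo loop (Oracle Crystal Ball is essentially a spreadsheet front-end for this). The distributions and the toy power model below are illustrative assumptions, not MITR values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# assumed normal uncertainties (means/SDs are illustrative, not MITR data)
gap = rng.normal(2.0, 0.05, n)     # coolant channel gap, mm
htc = rng.normal(30.0, 1.5, n)     # heat transfer coefficient, kW/m^2-K

# toy ONB-limited power model: nominal 9 MW scaled by the sampled parameters
power_limit = 9.0 * (gap / 2.0) * (htc / 30.0)

# LSSS-style limit: power exceeded with 99.7% confidence (0.3rd percentile)
lsss = np.percentile(power_limit, 0.3)
```

The 0.3rd percentile plays the role of the 99.7% confidence level quoted in the abstract; by construction it sits below the mean of the sampled limit, which is the conservatism the statistical method quantifies explicitly.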
Red blood cells ageing markers: a multi-parametric analysis
Bardyn, Manon; Rappaz, Benjamin; Jaferzadeh, Keyvan; Crettaz, David; Tissot, Jean-Daniel; Moon, Inkyu; Turcatti, Gerardo; Lion, Niels; Prudent, Michel
2017-01-01
Background Red blood cells collected in citrate-phosphate-dextrose can be stored for up to 42 days at 4 °C in saline-adenine-glucose-mannitol additive solution. During this controlled, but nevertheless artificial, ex vivo ageing, red blood cells accumulate lesions that can be reversible or irreversible upon transfusion. The aim of the present study is to follow several parameters reflecting cell metabolism, antioxidant defences, morphology and membrane dynamics during storage. Materials and methods Five erythrocyte concentrates were followed weekly during 71 days. Extracellular glucose and lactate concentrations, total antioxidant power, as well as reduced and oxidised intracellular glutathione levels were quantified. Microvesiculation, percentage of haemolysis and haematologic parameters were also evaluated. Finally, morphological changes and membrane fluctuations were recorded using label-free digital holographic microscopy. Results The antioxidant power as well as the intracellular glutathione concentration first increased, reaching maximal values after one and two weeks, respectively. Irreversible morphological lesions appeared during week 5, where discocytes began to transform into transient echinocytes and finally spherocytes. At the same time, the microvesiculation and haemolysis started to rise exponentially. After six weeks (expiration date), intracellular glutathione was reduced by 25%, reflecting increasing oxidative stress. The membrane fluctuations showed decreased amplitudes during shape transition from discocytes to spherocytes. Discussion Various types of lesions accumulated at different chemical and cellular levels during storage, which could impact their in vivo recovery after transfusion. A marked effect was observed after four weeks of storage, which corroborates recent clinical data. The prolonged follow-up period allowed the capture of deep storage lesions. Interestingly, and as previously described, the severity of the changes differed among
Semi-parametric analysis of dynamic contrast-enhanced MRI using Bayesian P-splines.
Schmid, Volker J; Whitcher, Brandon; Yang, Guang-Zhong
2006-01-01
Current approaches to quantitative analysis of DCE-MRI with non-linear models involve the convolution of an arterial input function (AIF) with the contrast agent concentration at a voxel or regional level. Full quantification provides meaningful biological parameters but is complicated by issues related to convergence, (de-)convolution of the AIF, and goodness of fit. To overcome these problems, this paper presents a penalized spline smoothing approach that models the data in a semi-parametric way. With this method, the AIF is convolved with a set of B-splines to produce the design matrix, and modeling of the resulting deconvolved biological parameters is obtained in a way that is similar to the parametric models. Further kinetic parameters are obtained by fitting a non-linear model to the estimated response function, and detailed validation of the method, with both simulated and in vivo data, is presented.
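The design-matrix construction described above (AIF convolved with a spline basis, then a penalized least-squares fit) can be sketched as follows. For a self-contained example, Gaussian bumps stand in for B-splines, the AIF is an assumed gamma-variate curve, and the penalty is a plain ridge term rather than the paper's Bayesian P-spline prior.

```python
import numpy as np

t = np.arange(0.0, 60.0, 1.0)                  # time grid, s
aif = (t / 5.0) * np.exp(-t / 5.0)             # assumed gamma-variate AIF

# smooth basis (Gaussian bumps as a stand-in for B-splines)
centers = np.arange(0.0, 60.0, 5.0)
basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 3.0) ** 2)

# design matrix: AIF causally convolved with each basis function
X = np.stack([np.convolve(aif, b)[: t.size] for b in basis.T], axis=1)

# simulate one voxel curve from a known response and recover it
true_resp = 0.05 * np.exp(-t / 20.0)
y = np.convolve(aif, true_resp)[: t.size]

lam = 1e-3                                      # ridge penalty (toy stand-in)
coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
resp_hat = basis @ coef                         # estimated response function
```

Kinetic parameters would then be read off `resp_hat` by a subsequent non-linear fit, as the abstract describes.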
Parametric analysis of a predator-prey system stabilized by a top predator.
Morozov, Andrew Y; Li, Bai-Lian
2006-08-01
We present a complete parametric analysis of a predator-prey system influenced by a top predator. We study ecosystems with abundant nutrient supply for the prey where the prey multiplication can be considered as proportional to its density. The main questions we examine are the following: (1) Can the top predator stabilize such a system at low densities of prey? (2) What possible dynamic behaviors can occur? (3) Under which conditions can the top predation result in the system stabilization? We use a system of two nonlinear ordinary differential equations with the density of the top predator as a parameter. The model is investigated with methods of qualitative theory of ODEs and the theory of bifurcations. The existence of 12 qualitatively different types of dynamics and complex structure of the parametric space are demonstrated. Our studies of phase portraits and parametric diagrams show that a top predator can be an important factor leading to stabilization of the predator-prey system with abundant nutrient supply. Although the model here is applied to the plankton communities with fish (or carnivorous zooplankton) as the top trophic level, the general form of the equations allows applications of our results to other ecological systems.
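A two-equation system of this kind, with the top-predator density entering as a parameter, can be integrated numerically to explore the parametric space. The equations below are an illustrative Lotka-Volterra-style toy with a linear top-predation term C*y, not the specific model analyzed in the paper.

```python
import numpy as np

def rhs(z, r=1.0, a=1.0, b=0.5, m=0.2, C=0.1):
    """Toy prey/predator ODEs; C is the (fixed) top-predator density parameter."""
    x, y = z
    return np.array([r * x - a * x * y,          # prey growth proportional to density
                     b * x * y - (m + C) * y])   # predator, with top-predation C*y

def rk4(z, dt, steps, **kw):
    """Classical 4th-order Runge-Kutta integration."""
    traj = [z]
    for _ in range(steps):
        k1 = rhs(z, **kw)
        k2 = rhs(z + dt / 2 * k1, **kw)
        k3 = rhs(z + dt / 2 * k2, **kw)
        k4 = rhs(z + dt * k3, **kw)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(z)
    return np.array(traj)

traj = rk4(np.array([0.5, 0.5]), 0.01, 5000, C=0.1)
```

Re-running the integration over a grid of C values and classifying the resulting phase portraits is the numerical counterpart of the bifurcation analysis the paper carries out analytically.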
Parametric three-way receiver operating characteristic surface analysis using mathematica.
Heckerling, P S
2001-01-01
Three-way receiver operating characteristic (ROC) surface analysis involves the calculation of a volume under an ROC surface (VUS), which is a measure of discriminatory accuracy of 2 diagnostic tests for 3 diseases. Nonparametric methods for calculating VUS and its standard error have been developed. The author presents the code for roc3D, a Mathematica computer program for performing parametric ROC surface analysis. roc3D calculates VUS assuming a multinormal distribution of test results in the 3 diseased populations, provides user-specified pointwise confidence limits for VUS, and displays a 3-dimensional plot of the ROC surface. Limitations of the roc3D program are discussed.
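roc3D itself is a Mathematica program, but the quantity it computes, the volume under the ROC surface VUS = P(X1 < X2 < X3) for correctly ordered test results across the three classes, is easy to check by Monte Carlo. The normal distributions below are illustrative assumptions; the useful sanity check is that three identical distributions give the chance value VUS = 1/6.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# assumed normal test-result distributions for the three diseased populations
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(1.0, 1.0, n)
x3 = rng.normal(2.0, 1.0, n)
vus = np.mean((x1 < x2) & (x2 < x3))     # Monte Carlo VUS estimate

# sanity check: identical distributions give chance discrimination, VUS = 1/6
y = rng.normal(0.0, 1.0, (3, n))
vus_chance = np.mean((y[0] < y[1]) & (y[1] < y[2]))
```

A parametric program such as roc3D evaluates the same probability under the fitted multinormal model instead of by simulation.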
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
Choi, Hongyoon; Yoon, Hai-jeon; Kim, Tae Sung; Oh, Jae Hwan; Kim, Dae Yong
2013-01-01
Introduction 18F-Fluorodeoxyglucose (18F-FDG) PET/computed tomography (CT) has been used for evaluation of the response of rectal cancer to neoadjuvant chemoradiotherapy (CRT), but differentiating residual tumor from post-treatment changes remains a problem. We propose a voxel-based dual-time 18F-FDG PET parametric imaging technique for the evaluation of residual rectal cancer after CRT. Materials and methods Eighty-six patients with locally advanced rectal cancer who underwent neoadjuvant CRT between March 2009 and February 2011 were selected retrospectively. Standard 60-min postinjection PET/CT scans followed by 90-min delayed images were coregistered by rigid-body transformation. A dual-time parametric image was generated, which divided the delayed standardized uptake value (SUV) by the 60-min SUV on a voxel-by-voxel basis. Maximum delayed-to-standard SUV ratios (DSR) measured on the parametric images, as well as the percentage of SUV decrease from pre-CRT to post-CRT scans (pre/post-CRT response index), were obtained for each tumor and correlated with pathologic response classified by the Dworak tumor regression grade (TRG). Results Of the false-positive lesions in the nine post-CRT patients who responded to therapy (TRG 3 or 4 tumors) but had false-positive standard 18F-FDG scans, eight were undetectable on dual-time parametric images (P<0.05). The maximum DSR showed significantly higher accuracy for identification of tumor regression compared with the pre/post-CRT response index in receiver-operating characteristic analysis (P<0.01). With a 1.25 cutoff value for the maximum DSR, 85.0% sensitivity, 95.5% specificity, and 93.0% overall accuracy were obtained for identification of good response. Conclusion A voxel-based dual-time parametric imaging technique for evaluation of post-CRT rectal cancer holds promise for differentiating residual tumor from treatment-related nonspecific 18F-FDG uptake. PMID:24128896
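The core image operation here, dividing the delayed SUV volume by the standard SUV volume voxel-by-voxel and thresholding the maximum ratio at 1.25, is a one-liner once the volumes are coregistered. The synthetic volumes below are random stand-ins for real coregistered SUV data; only the DSR arithmetic and the 1.25 cutoff come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy coregistered SUV volumes (real data would come from the PET scanner)
suv_60 = rng.uniform(0.5, 4.0, (8, 8, 8))                # 60-min standard scan
suv_90 = suv_60 * rng.uniform(0.8, 1.4, suv_60.shape)    # 90-min delayed scan

eps = 1e-6  # guard against division by zero in cold voxels
dsr_image = suv_90 / np.maximum(suv_60, eps)   # voxel-wise delayed/standard ratio

max_dsr = dsr_image.max()
residual_tumor = max_dsr >= 1.25               # cutoff reported in the abstract
```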
NASA Astrophysics Data System (ADS)
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than the design level of flood control. Traditional parametric frequency analysis methods for extreme values include the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. Bootstrap resampling can perform bias correction for the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM has the advantage of avoiding various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can be new parameter values combined with the NPM. An idea of how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider the worst cases of disasters in their disaster management planning and practices.
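The NPM-plus-bootstrap recipe above can be sketched directly: estimate the T-year quantile from the empirical distribution (no Steps 2-4, 6), then bootstrap the sample to get the standard error of the estimate. The 120-year synthetic Gumbel record below is an illustrative stand-in for an annual-maximum rainfall series.

```python
import numpy as np

rng = np.random.default_rng(3)
annual_max = rng.gumbel(50.0, 15.0, 120)   # synthetic 120-year annual maxima

def np_quantile(sample, T):
    """Non-parametric T-year quantile from the empirical distribution."""
    return np.quantile(sample, 1.0 - 1.0 / T)

q100 = np_quantile(annual_max, 100)        # 100-year event, no distribution fitted

# bootstrap resampling gives the estimation accuracy as a standard error
boots = [np_quantile(rng.choice(annual_max, annual_max.size, replace=True), 100)
         for _ in range(2000)]
se = np.std(boots)
```

A probable maximum value would enter this scheme as a hard upper bound truncating the resampled quantiles, per the paper's proposal.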
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
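The BPM idea, using one imaging modality as a voxel-wise regressor for another within a general linear model, reduces to an ordinary least-squares fit repeated per voxel. The sketch below uses synthetic data and plain numpy in place of the MATLAB/SPM toolbox; subject count, voxel count, and the age covariate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_subj, n_vox = 20, 500

# synthetic multimodal data: modality B depends on modality A (slope 0.5)
modality_a = rng.standard_normal((n_subj, n_vox))                      # e.g. fMRI
modality_b = 0.5 * modality_a + rng.standard_normal((n_subj, n_vox))   # e.g. VBM
age = rng.uniform(20.0, 60.0, n_subj)                                  # covariate

betas = np.empty((3, n_vox))
for v in range(n_vox):
    # voxel-wise GLM: intercept, cross-modality regressor, and covariate
    X = np.column_stack([np.ones(n_subj), modality_a[:, v], age])
    betas[:, v], *_ = np.linalg.lstsq(X, modality_b[:, v], rcond=None)
```

The toolbox then carries the per-voxel statistics into SPM for inference; here the recovered cross-modality slopes should scatter around the simulated value of 0.5.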
Space-time chaos of capillary waves parametrically excited by noise
Ezerskii, A.B.; Matusov, P.A.
1995-04-01
In experiments with parametrically excited capillary waves on the surface of a liquid with random pumping it was found that the transition to an irregular wave field starts to occur at a low supercriticality. Pulsations in the intensity of the ripples with almost complete preservation of the wave-field structure were noticed. The characteristic frequency of the pulsations calculated from experimental samples turned out to depend on the supercriticality. The results of numerical modeling were in good agreement with experiment.
NASA Astrophysics Data System (ADS)
Hongwu, Zhang; Xinwei, Zhang
2002-12-01
The objective of the paper is to develop a new algorithm for the numerical solution of dynamic elastic-plastic strain hardening/softening problems. A gradient-dependent model is adopted in the numerical model to overcome the mesh-sensitivity of results in dynamic strain softening or strain localization analysis. The equations for the dynamic elastic-plastic problems are derived in terms of the parametric variational principle, which is valid for associated, non-associated and strain softening plastic constitutive models in the finite element analysis. The precise integration method, which has been widely used for time-domain discretization of linear problems, is introduced for the solution of the dynamic nonlinear equations. The new algorithm is based on the combination of the parametric quadratic programming method and the precise integration method, and inherits the advantages of both. Results of numerical examples demonstrate not only the validity, but also the advantages of the proposed algorithm for the numerical solution of nonlinear dynamic problems.
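The precise integration method mentioned above computes the transition matrix exp(A*tau) by splitting tau into 2^N tiny steps, Taylor-expanding the increment, and doubling N times while tracking only the increment T - I to avoid round-off. A minimal sketch (the matrix and step count are illustrative; the paper couples this with quadratic programming for the plastic part):

```python
import numpy as np

def precise_integration(A, tau, N=20):
    """exp(A*tau) via the 2^N precise integration algorithm.

    The increment Ta = T - I is tracked through the doublings so that
    adding the identity does not swamp the tiny initial step."""
    dt = tau / 2.0 ** N
    Adt = A * dt
    I = np.eye(A.shape[0])
    # 4th-order Taylor expansion of exp(A*dt) - I for the small step
    Ta = Adt @ (I + Adt / 2 + Adt @ Adt / 6 + Adt @ Adt @ Adt / 24)
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta      # (I+Ta)^2 = I + 2*Ta + Ta^2
    return I + Ta

# check on an undamped oscillator x'' + x = 0 written as a first-order system:
# exp(A * pi/2) is a quarter-turn rotation
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = precise_integration(A, np.pi / 2)
```

For this rotation generator the exact quarter-period transition matrix is [[0, 1], [-1, 0]], which the algorithm reproduces to near machine precision, the property that makes it attractive for time-stepping the linear part of dynamic problems.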
Zerzucha, Piotr; Boguszewska, Dominika; Zagdańska, Barbara; Walczak, Beata
2012-03-16
Spot detection is a mandatory step in all available software packages dedicated to the analysis of 2D gel images. As the majority of spots do not represent individual proteins, spot detection can obscure the results of data analysis significantly. This problem can be overcome by a pixel-level analysis of 2D images. Differences between the spot and the pixel-level approaches are demonstrated by variance analysis for real data sets (part of a larger research project initiated to investigate the molecular mechanism of the response of the potato to drought stress). As the method of choice for the analysis of data variation, the non-parametric MANOVA was chosen. NP-MANOVA is recommended as a flexible and very fast tool for the evaluation of the statistical significance of the factor(s) studied.
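A pixel-level non-parametric test of a group factor can be sketched as a permutation test on a distance between group mean images. This is a simplified stand-in for NP-MANOVA, not the exact procedure of the paper; group sizes, pixel counts, and the effect size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# toy pixel-level 2D-gel data: two groups of 6 images, 1000 pixels each
control = rng.standard_normal((6, 1000))
drought = rng.standard_normal((6, 1000)) + 0.1   # small assumed treatment shift

def between_group_stat(a, b):
    """Squared distance between group mean images (toy test statistic)."""
    return np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)

obs = between_group_stat(control, drought)

# permutation null: shuffle image labels, recompute the statistic
pooled = np.vstack([control, drought])
perm = []
for _ in range(999):
    idx = rng.permutation(12)
    perm.append(between_group_stat(pooled[idx[:6]], pooled[idx[6:]]))
p_value = (1 + np.sum(np.array(perm) >= obs)) / 1000.0
```

Because the null distribution is built by resampling rather than from a parametric model, no distributional assumption about pixel intensities is needed, the property that motivates NP-MANOVA for gel images.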
Near-field heat transfer between graphene monolayers: Dispersion relation and parametric analysis
NASA Astrophysics Data System (ADS)
Yin, Ge; Yang, Jiang; Ma, Yungui
2016-12-01
Plasmon polaritons in graphene can enhance near-field heat transfer. In this work, we give a complete parametric analysis on the near-field heat transfer between two graphene monolayers that allows transfer efficiencies several orders-of-magnitude larger than blackbody radiation. Influences of major parameters are conclusively clarified from the changes of the interlayer supermode coupling and their dispersion relations. The method to maximize the near-field heat flux is discussed. The generalized Stefan-Boltzmann formula is proposed to describe the near-field heat transfer dominated by evanescent wave tunneling. Our results are of practical significance in guiding the design of thermal management systems.
NASA Astrophysics Data System (ADS)
Lobach, I.; Benediktovitch, A.
2016-07-01
The possibility of quantitative texture analysis by means of parametric x-ray radiation (PXR) from relativistic electrons (energies above 50 MeV) in a polycrystal is considered theoretically. In the case of a rather smooth orientation distribution function (ODF) and a large detector (θD >> 1/γ, where γ is the electron Lorentz factor), a universal relation between the ODF and the intensity distribution is presented. It is shown that if the ODF is independent of one of the Euler angles, then the texture is fully determined by the angular intensity distribution. Application of the method to simulated data shows the stability of the proposed algorithm.
Santos, Gabriela Lopes; Russo, Thiago Luiz; Nieuwenhuys, Angela; Monari, Davide; Desloovere, Kaat
2017-09-19
To compare sitting posture and movement strategies between chronic hemiparetic and healthy subjects while performing a drinking task, using Statistical Parametric Mapping (SPM) and feature analysis. Cross-sectional study. Department of Physical Therapy of University. Thirteen chronic hemiparetic and thirteen healthy individuals matched for gender and age. Not applicable. The drinking task was divided into phases: reaching, transporting the glass to the mouth, transporting the glass to the table, and returning to the initial position. An SPM two-sample t-test was used to compare the entire kinematic waveforms of different joint angles (trunk, scapulothoracic, humerothoracic, elbow). Joint angles at the beginning and end of the motion, movement time, peak velocity timing, trajectory deviation, normalized integrated jerk and range of motion were extracted from the motion data. Group differences for these parameters were analyzed using independent t-tests. At the static posture and the beginning of the reaching phase, patients showed a shoulder position more deviated from the midline and externally rotated, with increased scapula protraction, medial rotation, anterior tilting, trunk anterior flexion and inclination to the paretic side. Altered spatiotemporal variables were found in all phases except the returning phase. Patients returned to a similar posture as at task onset, except for the scapula, which was normalized after the reaching phase. Chronic hemiparetic subjects showed more deviations in the proximal joints during seated posture and reaching. However, the scapular movement drew nearer to the healthy individuals' patterns after the first phase, an interesting point to consider in rehabilitation programs.
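The SPM two-sample t-test used here compares whole waveforms by computing a t-statistic at every point of the normalized movement cycle (spm1d-style), rather than a single scalar per trial. A minimal sketch with synthetic joint-angle curves (13 subjects per group, as in the study; the curve shapes and group offset are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
t_pts = 101                                   # 0-100% of the drinking task
base = 30.0 + 5.0 * np.sin(np.linspace(0.0, np.pi, t_pts))   # toy joint angle, deg
healthy = base + rng.standard_normal((13, t_pts))
patients = base + 4.0 + rng.standard_normal((13, t_pts))     # assumed 4 deg offset

def two_sample_t(a, b):
    """Pointwise two-sample t-statistic along the movement cycle."""
    na, nb = a.shape[0], b.shape[0]
    sp2 = ((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1)) / (na + nb - 2)
    return (a.mean(0) - b.mean(0)) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

t_curve = two_sample_t(patients, healthy)     # one t value per % of the cycle
```

In actual SPM the threshold applied to `t_curve` is corrected for the correlated multiple comparisons along the cycle via random field theory; the pointwise statistic itself is as above.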
An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1996-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.
Lau, Bryan; Cole, Stephen R.; Gange, Stephen J.
2010-01-01
In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio that is obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either the initiation of effective antiretroviral therapy, or clinical disease progression as a competing event. PMID:21337360
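The distinction the paper unifies, cause-specific hazards versus the subdistribution (cumulative incidence) scale, can be illustrated with the simplest competing-risks model: two independent exponential latent times. Everything below is a toy (the hazards are invented, and the paper fits flexible parametric mixtures rather than exponentials), but the cumulative incidence function it estimates has a closed form to check against.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000

# toy competing risks, e.g. ART initiation (cause 1) vs progression (cause 2)
h1, h2 = 0.10, 0.05                      # assumed constant cause-specific hazards
t1 = rng.exponential(1.0 / h1, n)
t2 = rng.exponential(1.0 / h2, n)
time = np.minimum(t1, t2)                # observed event time
cause = np.where(t1 < t2, 1, 2)          # which event occurred first

# empirical cumulative incidence of cause 1 (subdistribution scale)
grid = np.linspace(0.0, 30.0, 61)
cif1 = np.array([np.mean((time <= t) & (cause == 1)) for t in grid])

# closed form for this toy model: h1/(h1+h2) * (1 - exp(-(h1+h2)*t))
cif1_exact = h1 / (h1 + h2) * (1.0 - np.exp(-(h1 + h2) * grid))
```

Note the plateau at h1/(h1+h2) rather than 1: the competing event permanently removes subjects from risk, which is exactly why the subdistribution hazard differs from the cause-specific one.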
Zhang, Lei; Yang, Sigang; Li, Pengxiao; Wang, Xiaojian; Gou, Doudou; Chen, Wei; Luo, Wenyong; Chen, Hongwei; Chen, Minghua; Xie, Shizhong
2013-10-21
We report the experimental demonstration of a fully fiber-integrated picosecond optical parametric oscillator. The gain is provided by a 50-meter homemade photonic crystal fiber in the ring cavity. A time-dispersion-tuned technique is used to allow the oscillator to select the oscillating wavelength adaptively and synchronize with the pump pulse train. The output wavelength of the oscillator can be continuously tuned from 988 to 1046 nm and from 1085 to 1151 nm by adjusting the pump wavelength and the time-dispersion-tuned technique simultaneously.
NASA Astrophysics Data System (ADS)
Makeeva, G. S.; Golovanov, O. A.; Kouzaev, G. A.
2017-07-01
A rigorous mathematical model for graphene-based parametric devices is developed from Maxwell's equations, with the graphene surface conductivity treated as a nonlinear function of the electric field intensity. The projection method is applied to solve the 2D nonlinear diffraction boundary problem. Using a computational algorithm based on autonomous blocks with Floquet channels, a parametric THz device based on a multilayer graphene-dielectric nanostructure is numerically modeled. The numerical analysis shows that this graphene-based device, simulated here at 14.15 THz, can demonstrate parametric amplification of signals. The instability regions of parametric generation in this tunable THz device are calculated as functions of the magnitude and frequency of the pumping TEM-wave by computing the determinant of the obtained system of linearized equations.
Performance evaluation and parametric analysis on cantilevered ramp injector in supersonic flows
NASA Astrophysics Data System (ADS)
Huang, Wei; Li, Shi-bin; Yan, Li; Wang, Zhen-guo
2013-03-01
The cantilevered ramp injector is one of the most promising candidates for enhancing mixing between the fuel and the supersonic air stream, and its parametric analysis has drawn increasing attention from researchers. The flow field characteristics and the drag force of the cantilevered ramp injector in a supersonic flow with a freestream Mach number of 2.0 have been investigated numerically, and the predicted injectant mole fraction and static pressure profiles have been compared with the available experimental data in the open literature. A grid independency analysis has been performed using coarse, moderate and refined grid scales, and the influence of the turbulence model on the flow field of the cantilevered ramp injector has been examined as well. Further, the effects of the swept angle, the ramp angle and the length of the step on the performance of the cantilevered ramp injector have been discussed. The obtained results show that the grid scale has only a slight impact on the flow field of the cantilevered ramp injector except in the region near the fuel injector, and the predicted results show reasonable agreement with the experimental data. Additionally, the turbulence model makes only a slight difference to the numerical results; the results obtained with the RNG k-ɛ and SST k-ω turbulence models are almost the same. The swept angle and the ramp angle have the same qualitative impact on the performance of the cantilevered ramp injector: the kidney-shaped plume forms within a shorter distance as the swept and ramp angles increase. At the same time, the shape of the injectant mole fraction contour at X/H=6 undergoes a transition from a peach-shaped to a kidney-shaped plume, and a cantilevered ramp injector with larger swept and ramp angles has higher mixing efficiency and larger drag force. The length of the step has only a slight impact on the drag force performance of the cantilevered ramp injector.
Pouillot, Régis; Lubran, Meryl B; Cates, Sheryl C; Dennis, Sherri
2010-02-01
Home refrigeration temperatures and product storage times are important factors for controlling the growth of Listeria monocytogenes in refrigerated ready-to-eat foods. In 2005, RTI International, in collaboration with Tennessee State University and Kansas State University, conducted a national survey of U.S. adults to characterize consumers' home storage and refrigeration practices for 10 different categories of refrigerated ready-to-eat foods. No distributions of storage time or refrigeration temperature were presented in any of the resulting publications. This study used classical parametric survival modeling to derive parametric distributions from the RTI International storage practices data set. Depending on the food category, variability in product storage times was best modeled using either exponential or Weibull distributions. The shape and scale of the distributions varied greatly depending on the food category. Moreover, the results indicated that consumers tend to keep a product that is packaged by a manufacturer for a longer period of time than a product that is packaged at retail. Refrigeration temperatures were comparable to those previously reported, with the variability in temperatures best fit using a Laplace distribution, as an alternative to the empirical distribution. In contrast to previous research, limited support was found for a correlation between storage time and temperature. The distributions provided in this study can be used to better model consumer behavior in future risk assessments.
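The exponential and Weibull fits used for the storage-time data can be sketched with standard maximum-likelihood estimators. The following is a minimal generic illustration, not the study's actual code; the function names are invented here, and the Weibull shape is obtained with the usual fixed-point iteration on its likelihood equation:

```python
import math

def fit_exponential(x):
    """MLE for the exponential distribution: rate = 1 / sample mean."""
    rate = len(x) / sum(x)
    loglik = sum(math.log(rate) - rate * xi for xi in x)
    return rate, loglik

def fit_weibull(x, k=1.0, iters=100):
    """MLE for the Weibull via the standard fixed-point iteration on shape k."""
    n = len(x)
    logs = [math.log(xi) for xi in x]
    for _ in range(iters):
        xk = [xi ** k for xi in x]
        weighted = sum(a * b for a, b in zip(xk, logs)) / sum(xk)
        k = 1.0 / (weighted - sum(logs) / n)   # likelihood equation for the shape
    lam = (sum(xi ** k for xi in x) / n) ** (1.0 / k)  # scale given the shape
    loglik = sum(math.log(k / lam) + (k - 1) * math.log(xi / lam)
                 - (xi / lam) ** k for xi in x)
    return k, lam, loglik
```

Comparing the two log-likelihoods (or an information criterion, since the Weibull has one extra parameter) gives the kind of per-food-category model choice the study describes.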
Parametric performance analysis of OTEC system using HFC32/HFC134a mixtures
Uehara, Haruo; Ikegami, Yasuyuki
1995-11-01
Parametric performance analysis is performed on an Ocean Thermal Energy Conversion (OTEC) system using HFC32/HFC134a mixtures as the working fluid. The analyzed OTEC system uses the Kalina cycle. The parameters in the performance analysis are the warm sea water inlet temperature, the cold sea water inlet temperature, the heat transfer performance of the evaporator, condenser and regenerator, the turbine inlet pressure, the turbine inlet temperature, and the molar fraction of HFC32. The effects of these parameters on the efficiency of the Kalina cycle using HFC32/HFC134a mixtures are clarified by this analysis and compared with calculation results using ammonia/water mixtures as the working fluid. The thermal efficiency of the OTEC system using the Kalina cycle can reach about 5 percent with an inlet warm sea water temperature of 28 C and an inlet cold sea water temperature of 4 C.
A Semi-parametric Bayesian Approach for Differential Expression Analysis of RNA-seq Data.
Liu, Fangfang; Wang, Chong; Liu, Peng
2015-12-01
RNA-sequencing (RNA-seq) technologies have revolutionized the way agricultural biologists study gene expression and have generated a tremendous amount of data awaiting analysis. Detecting differentially expressed genes is one of the fundamental steps in RNA-seq data analysis. In this paper, we model the count data from RNA-seq experiments with a Poisson-Gamma hierarchical model, or equivalently, a negative binomial (NB) model. We derive a semi-parametric Bayesian approach with a Dirichlet process as the prior model for the distribution of fold changes between the two treatment means. An inference strategy using a Gibbs sampling algorithm is developed for differential expression analysis. The results of several simulation studies show that our proposed method outperforms other methods, including the popular edgeR and DESeq methods. We also discuss an application of our method to a dataset that compares gene expression between bundle sheath and mesophyll cells in maize leaves.
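As a hedged sketch of the negative binomial count model underlying this approach (not the authors' Bayesian implementation, which uses a Dirichlet process prior and Gibbs sampling), the NB log-probability and a simple method-of-moments fit can be written as:

```python
import math

def nb_logpmf(k, r, p):
    """log NB pmf with size r and probability p:
    P(K = k) = Gamma(k + r) / (Gamma(r) k!) * (1 - p)^r * p^k,
    the marginal of a Poisson whose mean is Gamma-distributed."""
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(1 - p) + k * math.log(p))

def nb_moment_fit(counts):
    """Method-of-moments NB fit: with sample mean m and variance v,
    size r = m^2 / (v - m) and p = (v - m) / v, so that mean = m, var = v."""
    n = len(counts)
    m = sum(counts) / n
    v = sum((c - m) ** 2 for c in counts) / (n - 1)
    if v <= m:
        raise ValueError("no overdispersion: a Poisson model suffices")
    return m * m / (v - m), (v - m) / v
```

With r = 1 the NB reduces to the geometric distribution, which gives a quick sanity check on the pmf.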
A bifurcation analysis of boiling water reactor on large domain of parametric spaces
NASA Astrophysics Data System (ADS)
Pandey, Vikas; Singh, Suneet
2016-09-01
Boiling water reactors (BWRs), like any physical system, are inherently nonlinear. The reactivity feedback, which is caused by both moderator density and temperature, gives rise to several effects reflecting the nonlinear behavior of the system. Stability analysis of the BWR is done with a simplified, reduced-order model that couples point reactor kinetics with the thermal hydraulics of the reactor core. Linear stability analysis of the BWR steady states shows that at a critical value of the bifurcation parameter (i.e., feedback gain), a Hopf bifurcation occurs. The stable and unstable domains of the parametric spaces cannot be fully predicted by linear stability analysis, because the stability of the system involves more than the stability of its steady states: the stability of other dynamics of the system, such as limit cycles, must also be included. Nonlinear stability analysis (i.e., bifurcation analysis) therefore becomes an indispensable component of the stability analysis. Hopf bifurcation, which occurs with one free parameter, is studied here; it marks the birth of limit cycles. The excitation of these limit cycles makes the system bistable in the case of subcritical bifurcation, whereas stable limit cycles continue into an unstable region for supercritical bifurcation. The distinction between subcritical and supercritical Hopf bifurcation is made by two-parameter analysis (i.e., codimension-2 bifurcation). In this scenario, a Generalized Hopf (GH) bifurcation takes place, which separates subcritical and supercritical Hopf bifurcation. Various types of bifurcation, such as limit point bifurcation of limit cycles (LPC), period doubling bifurcation of limit cycles (PD) and Neimark-Sacker bifurcation of limit cycles (NS), have been identified with the Floquet multipliers. The LPC manifests itself as a region of bistability, whereas a chaotic region exists because of a cascade of PD bifurcations. These regions of bistability and chaotic solutions are drawn on the various
Global analysis of parametric sensitivity of precipitation in the Community Atmosphere Model (CAM5)
NASA Astrophysics Data System (ADS)
Qian, Y.; Yan, H.; Zhao, C.; Hou, Z.; Wang, H.; Rasch, P. J.; Klein, S. A.; Lucas, D.; Tannahill, J.
2013-12-01
In this study, we investigate the sensitivity of precipitation characteristics, including mean, extreme and diurnal cycle, to dozens of uncertain parameters mainly related to cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube sampling and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations (1356 in total). The CAM5 ensemble simulates the mean precipitation reasonably well, but fails to capture the diurnal cycle of precipitation over land. The phase of diurnal precipitation associated with the convection propagation over Central US seems to be more related to model structural errors rather than the parametric uncertainties. Parametric calibration could possibly improve CAM5 precipitation over regions, such as Tropical Western Pacific, having relatively weak diurnal cycle and high model parameter identifiability. The precipitation variance is large and the diurnal cycle is strong over South America and Central Africa, where parametric calibration can possibly improve the model prediction of mean precipitation but not the diurnal cycle. Variance-based sensitivity analysis using a generalized linear model (GLM) is conducted to examine the relative contributions of individual parameter perturbations and their interactions to the global and regional precipitation. We characterize the global spatial distribution as well as scale (global vs. local) and seasonal dependence of parametric sensitivity of precipitation, and identify a few parameters that dominate the behavior of the mean, extremes or diurnal cycle of precipitation, respectively. Results suggest that the model-simulated precipitation is remarkably sensitive to a few cloud-related parameters, while aerosols have minor impact on the diurnal cycle of precipitation in the current CAM5. The interactions among the selected parameters contribute a relatively small portion to
NASA Astrophysics Data System (ADS)
Dodonov, V. V.; Valverde, C.; Souza, L. S.; Baseia, B.
2011-10-01
The exact Wigner function of a parametrically excited quantum oscillator in a phase-sensitive amplifying/attenuating reservoir is found for initial even/odd coherent states. Studying the evolution of negativity of the Wigner function we show the difference between the “initial positivization time” (IPT), which is inversely proportional to the square of the initial size of the superposition, and the “final positivization time” (FPT), which does not depend on this size. Both these times can be made arbitrarily long in maximally squeezed high-temperature reservoirs. Besides, we find the conditions when some (small) squeezing can exist even after the Wigner function becomes totally positive.
Parametric analysis of a cylindrical negative Poisson’s ratio structure
NASA Astrophysics Data System (ADS)
Wang, Yuanlong; Wang, Liangmo; Ma, Zheng-dong; Wang, Tao
2016-03-01
Much research related to negative Poisson's ratio (NPR), or auxetic, structures is emerging these days. Several types of 3D NPR structure have been proposed and studied, but almost all of them had cuboid shapes, which were not suitable for certain engineering applications. In this paper, a cylindrical NPR structure was developed and investigated. It is expected to be utilized in springs, bumpers, dampers and other similar applications. For the purpose of parametric analysis, a method of parametric modeling of cylindrical NPR structures was developed using MATLAB scripts. The scripts can automatically establish finite element models, invoke ABAQUS, read results, etc. Subsequently, the influences of structural parameters, including the number of cells, the number of layers and the layer heights, on the uniaxial compression behavior of cylindrical NPR structures were investigated. This led to the conclusion that the stiffness of the cylindrical NPR structure was enhanced by increasing the number of cells and reducing the effective layer height. Moreover, small numbers of layers resulted in a late transition of the load-displacement curve from low stiffness to high stiffness. In addition, the middle contraction regions were more apparent with larger numbers of cells, smaller numbers of layers and smaller effective layer heights. The results indicate that the structural parameters had significant effects on the load-displacement curves and deformed shapes of cylindrical NPR structures. This paper is conducive to the further engineering applications of cylindrical NPR structures.
Parametric analysis of three dimensional flow models applied to tidal energy sites in Scotland
NASA Astrophysics Data System (ADS)
Rahman, Anas; Venugopal, Vengatesan
2017-04-01
This paper presents a detailed parametric analysis of the input parameters of two different numerical models, namely Telemac3D and Delft3D, used for the simulation of tidal current flow at potential tidal energy sites in the Pentland Firth in Scotland. The motivation behind this work is to investigate the influence of the input parameters on these 3D models, as the majority of past research has focused on 2D depth-averaged flow models for this region. An extended description of the model setups, along with the parameters used, is provided. International Hydrographic Organisation (IHO) tidal gauges and Acoustic Doppler Current Profiler (ADCP) measurements are used to calibrate the model output and ensure the robustness of the models. An extensive parametric study of the impact of varying drag coefficients, roughness formulae and turbulence models has been performed and reported. The results indicate that both the Telemac3D and Delft3D models are able to produce excellent comparisons against measured data; however, with Delft3D, the model parameters that provided the highest correlation with the measured data are found to differ from those reported in the previous literature, which could be attributed to the choice of boundary conditions and the mesh size.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
Time evolution from linear to nonlinear stages in magnetohydrodynamic parametric instabilities
NASA Technical Reports Server (NTRS)
Hoshino, M.; Goldstein, M. L.
1989-01-01
The nonlinear evolution of the magnetohydrodynamic (MHD) parametric instability of wave fluctuations propagating along an unperturbed magnetic field is investigated. Both a magnetohydrodynamic perturbation-theoretical approach and a nonlinear MHD simulation are used. It is shown that high harmonic waves are rapidly excited by wave-wave coupling, and that the wave spectrum evolves from a state containing a small number of degrees of freedom in k space to one which contains a large number of degrees of freedom. It is found that the spectral evolution prior to nonlinear saturation is well described by the perturbation theory. During this stage, the ratio of the growth rate of the nth harmonic wave to the linear growth rate of the fundamental wave is n. The nonlinear saturation stage is characterized by a frequency shift of the fundamental wave that destroys the wave-wave resonance condition which, in turn, causes the wave amplitude to cease its growth.
A parametric analysis of performance characteristics of satellite-borne multiple-beam antennas
NASA Technical Reports Server (NTRS)
Salmasi, A. B.
1980-01-01
An analytical and empirical model is presented for parametric study of multiple beam antenna frequency reuse capacity and interbeam isolation. Two types of reflector antennas, the axisymmetric parabolic and the offset-parabolic reflectors, are utilized to demonstrate the model. The parameters of the model are introduced and their limitations are discussed in the context of parabolic reflector antennas. The model, however, is not restricted to analysis of reflector antenna performance. Results of the analyses are covered in two tables. The model parameters, objectives, and descriptions are given, multiple-beam antenna frequency reuse capacity and interbeam isolation analysis of the two types of reflectors are discussed as well as future developments of the program model.
Xu, J.; DeGrassi, G.
2000-04-02
A comprehensive benchmark program was developed by Brookhaven National Laboratory (BNL) to perform an evaluation of state-of-the-art methods and computer programs for performing seismic analyses of coupled systems with non-classical damping. The program, which was sponsored by the US Nuclear Regulatory Commission (NRC), was designed to address various aspects of application and limitations of these state-of-the-art analysis methods to typical coupled nuclear power plant (NPP) structures with non-classical damping, and was carried out through analyses of a set of representative benchmark problems. One objective was to examine the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled systems. The examination was performed using parametric variations for three simple benchmark models. This paper presents the comparisons and evaluation of the program participants' results to the BNL exact solutions for the applicable ranges of modeling dynamic characteristic parameters.
Parametric distribution approach for flow availability in small hydro potential analysis
NASA Astrophysics Data System (ADS)
Abdullah, Samizee; Basri, Mohd Juhari Mat; Jamaluddin, Zahrul Zamri; Azrulhisham, Engku Ahmad; Othman, Jamel
2016-10-01
Small hydro systems are one of the important sources of renewable energy and have been recognized worldwide as a clean energy source. Small hydropower generation, which uses the potential energy of flowing water to produce electricity, is often questioned because the power generated is inconsistent and intermittent. Potential analysis of a small hydro system, which depends mainly on the availability of water, requires knowledge of the water flow or stream flow distribution. This paper presents the possibility of applying the Pearson system to approximate the stream flow availability distribution in a small hydro system. Considering the stochastic nature of stream flow, the Pearson parametric distribution approximation was computed based on the defining characteristic of the Pearson system: a direct relation between the distribution and its first four statistical moments. The advantage of applying statistical moments in small hydro potential analysis is the ability to analyze the varying shapes of the stream flow distribution.
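A minimal sketch of the moment-based idea: the Pearson system selects a curve family from the first four moments via the classical criterion κ. The code below is a generic illustration (not the paper's implementation); it computes the squared skewness β1, the kurtosis β2 and κ, and maps them to a rough Pearson type:

```python
import math

def pearson_kappa(x):
    """Sample beta1 (squared skewness), beta2 (kurtosis) and Pearson's
    criterion kappa, computed from the first four central moments."""
    n = len(x)
    m = sum(x) / n
    mu = lambda p: sum((xi - m) ** p for xi in x) / n
    m2, m3, m4 = mu(2), mu(3), mu(4)
    b1 = m3 ** 2 / m2 ** 3
    b2 = m4 / m2 ** 2
    # Note: the denominator vanishes on the Type III (gamma) line,
    # where 2*b2 - 3*b1 - 6 == 0, and kappa blows up there.
    kappa = b1 * (b2 + 3) ** 2 / (4 * (2 * b2 - 3 * b1 - 6) * (4 * b2 - 3 * b1))
    return b1, b2, kappa

def pearson_type(b1, b2, kappa, tol=0.05):
    """Coarse classification by kappa (a sketch; boundary cases need care)."""
    if abs(kappa) < tol:
        return "Type II/VII (normal if beta2 == 3)"
    if kappa < 0:
        return "Type I (beta-like)"
    if kappa < 1 - tol:
        return "Type IV"
    if abs(kappa - 1) <= tol:
        return "Type V"
    return "Type VI"
```

For a symmetric distribution β1 is near zero, so κ is near zero and the type is decided by β2 alone.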
Yaghotipoor, Anita; Farshadfar, E
2007-08-15
In order to determine the phenotypic stability and the contribution of yield components to the phenotypic stability of grain yield, 21 chickpea genotypes were evaluated in a randomized complete block design with three replications under rainfed and irrigated conditions at the College of Agriculture, Razi University of Kermanshah, Iran, across 4 years. Non-parametric combined analysis of variance showed highly significant differences for genotypes and genotype-environment interaction, indicating the presence of genetic variation and the possibility of selecting for stable genotypes. Genotype number 8 (Filip92-9c) had the minimum Si(2) values; ranking yield stability and grain yield together in one parameter also revealed that Filip92-9c was the most desirable variety for both yield and yield stability. Component analysis using Ci-values showed that the number of shrubs per unit area contributed most to the phenotypic stability of grain yield.
NASA Technical Reports Server (NTRS)
Jeffries, K. S.; Renz, D. D.
1984-01-01
A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high-power (100 kW or more) space missions. Large-diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
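The per-unit-length quantities discussed above follow from textbook coaxial-line formulas. The sketch below is a generic illustration (not the appendix programs), assuming copper conductors, a vacuum dielectric, and a thin-shell skin-effect resistance:

```python
import math

MU0 = 4e-7 * math.pi      # permeability of free space, H/m
EPS0 = 8.854e-12          # permittivity of free space, F/m
SIGMA_CU = 5.8e7          # conductivity of copper, S/m

def coax_per_length(a, b, f):
    """Per-metre external inductance, capacitance and AC resistance of a
    coaxial line; a = inner conductor radius [m], b = shield radius [m], f [Hz]."""
    L = MU0 / (2 * math.pi) * math.log(b / a)          # H/m
    C = 2 * math.pi * EPS0 / math.log(b / a)           # F/m (vacuum dielectric)
    delta = math.sqrt(2 / (2 * math.pi * f * MU0 * SIGMA_CU))  # skin depth, m
    R = (1 / a + 1 / b) / (2 * math.pi * SIGMA_CU * delta)     # ohm/m
    return L, C, R

def line_loss(power_kw, volts, length_m, r_per_m):
    """Rough resistive I^2 R loss for a run carrying power_kw at volts."""
    current = power_kw * 1e3 / volts
    return current ** 2 * r_per_m * length_m
```

At 20 kHz the copper skin depth is about 0.47 mm, which is why a 0.5 mm wall thickness is a sensible design point for hollow conductors in this frequency range.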
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose: To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes, and to evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods: The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted imaging MRI data from a cohort of squamous cell carcinoma of the head and neck patients. Cumulative distributions of voxels, containing pre and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results: The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion: The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
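The abstract does not name the clustering algorithm or the validation index, so purely as a generic illustration of "cluster validation to determine the optimal number of clusters", here is a small k-means plus mean-silhouette sketch (hypothetical stand-ins, not the paper's workflow):

```python
import math
import random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns a cluster label for every point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda j: dist(p, centers[j]))
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:   # keep the old center if a cluster empties out
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

def mean_silhouette(points, labels):
    """Average silhouette width: larger means better-separated clusters."""
    ks = sorted(set(labels))
    total = 0.0
    for i, p in enumerate(points):
        dists = {j: [] for j in ks}
        for idx, (q, lab) in enumerate(zip(points, labels)):
            if idx != i:
                dists[lab].append(dist(p, q))
        own = dists[labels[i]]
        if not own:
            continue  # silhouette of a singleton cluster: treated as 0 here
        a = sum(own) / len(own)
        b = min(sum(d) / len(d) for j, d in dists.items() if j != labels[i] and d)
        total += (b - a) / max(a, b)
    return total / len(points)
```

Evaluating the mean silhouette for each candidate k and keeping the maximizer is one common way to pick the number of clusters; the paper may well use a different validation index.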
A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm
NASA Technical Reports Server (NTRS)
Pappa, R. S.; Ibrahim, S. R.
1985-01-01
The accuracy of the Ibrahim Time Domain (ITD) identification algorithm in extracting structural modal parameters from free response functions was studied using computer-simulated data for 65 positions on an isotropic, uniform-thickness plate, with mode shapes obtained by NASTRAN analysis. Natural frequencies were used to study identification results over ranges of modal parameter values and user-selectable algorithm constants. The effects of superimposing various levels of noise onto the functions were investigated. No detrimental effects were observed when the number of computational degrees of freedom allowed in the algorithm was made many times larger than the minimum necessary for adequate identification. The use of a high number of degrees of freedom when analyzing experimental data, for the simultaneous identification of many modes in one computer run, is suggested.
Phang, T.L.; Neville, M.C.; Rudolph, M.; Hunter, L.
2008-01-01
Trajectory clustering is a novel and statistically well-founded method for clustering time series data from gene expression arrays. Trajectory clustering uses non-parametric statistics and is hence not sensitive to the particular distributions underlying gene expression data. Each cluster is clearly defined in terms of direction of change of expression for successive time points (its ‘trajectory’), and therefore has easily appreciated biological meaning. Applying the method to a dataset from mouse mammary gland development, we demonstrate that it produces different clusters than Hierarchical, K-means, and Jackknife clustering methods, even when those methods are applied to differences between successive time points. Compared to all of the other methods, trajectory clustering was better able to match a manual clustering by a domain expert, and was better able to cluster groups of genes with known related functions. PMID:12603041
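The idea of grouping genes by the direction of change between successive time points can be sketched directly. This is a generic illustration of the trajectory concept, not the authors' implementation (which also applies non-parametric statistics to decide which changes are real):

```python
from collections import defaultdict

def trajectory(series, tol=0.0):
    """Direction of change between successive time points:
    +1 for up, -1 for down, 0 for flat (within tol)."""
    signs = []
    for prev, nxt in zip(series, series[1:]):
        diff = nxt - prev
        signs.append(0 if abs(diff) <= tol else (1 if diff > 0 else -1))
    return tuple(signs)

def trajectory_clusters(expr, tol=0.0):
    """Group genes whose expression follows the same up/down/flat trajectory."""
    clusters = defaultdict(list)
    for gene, series in expr.items():
        clusters[trajectory(series, tol)].append(gene)
    return dict(clusters)
```

Each cluster key reads off directly as a biological pattern, e.g. (1, 1) means "rising at every transition", which is what makes the clusters easy to interpret.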
Dong, Liang; Tang, Wen Cheng
2014-01-01
This paper presents a method to model and design servo controllers for flexible ball screw drives with dynamic variations. A mathematical model describing the structural flexibility of the ball screw drive containing time-varying uncertainties and disturbances with unknown bounds is proposed. A mode-compensating adaptive backstepping sliding mode controller is designed to suppress the vibration. The time-varying uncertainties and disturbances represented in finite-term Fourier series can be estimated by updating the Fourier coefficients through function approximation technique. Adaptive laws are obtained from Lyapunov approach to guarantee the convergence and stability of the closed loop system. The simulation results indicate that the tracking accuracy is improved considerably with the proposed scheme when the time-varying parametric uncertainties and disturbances exist. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Ethanol production by enzymatic hydrolysis: parametric analysis of a base-case process
Isaacs, S.H.
1984-05-01
A base-case flowsheet for an enzymatic hydrolysis process is presented. Included is a parametric sensitivity analysis to identify key research issues and an assessment of this technology. The plant discussed is a large-scale facility, producing 50 million gallons of ethanol per year. The plant design is based on the process originally conceived by the US National Army Command and consists of these process steps: pretreatment; enzyme production; enzyme hydrolysis; fermentation; and distillation. The base-case design parameters are based on recent laboratory data from Lawrence Berkeley Laboratories and the University of California at Berkeley. The selling price of ethanol is used to compare variations in the base-case operating parameters, which include hydrolysis efficiencies, capital costs, enzyme production efficiencies, and enzyme recycle. 28 references, 38 figures, 8 tables.
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. Graphics processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and a NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data for assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.
NASA Astrophysics Data System (ADS)
Kumar, Deepak; Kumar, Vivek; Singh, V. P.
2009-07-01
In the present paper, the effects of cake thickness and time on the efficiency of brown stock washers in a paper mill are studied using a mathematical model of pulp washing for sodium and lignin ion species. The diffusion-dispersion washing of the bed of pulp fibers is modeled mathematically through a basic material balance, and an adsorption isotherm is used to describe the equilibrium between the concentration of solute in the liquor and the concentration of solute on the fibers. To study the parametric effects, numerical solutions over the axial domain of the system governed by partial differential equations (transport and isotherm equations) for different boundary conditions are obtained with the "pdepe" solver in MATLAB. The effects of both parameters are shown by three-dimensional graphical representations as well as concentration profiles.
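A minimal stand-in for the transport step is an explicit finite-difference scheme for the 1-D advection-dispersion equation dc/dt = D c_xx − u c_x. This sketch omits the adsorption isotherm coupling and is not the paper's MATLAB "pdepe" setup; it only illustrates how wash liquor displaces solute from the bed:

```python
def advect_disperse(c, u, D, dx, dt, steps, inlet=0.0):
    """Explicit step for dc/dt = D c_xx - u c_x on a 1-D bed:
    central difference for dispersion, upwind for advection (u > 0).
    Stable for D*dt/dx**2 <= 0.5 and u*dt/dx <= 1."""
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            adv = -u * (c[i] - c[i - 1]) / dx
            new[i] = c[i] + dt * (diff + adv)
        new[0] = inlet          # clean wash liquor enters at the inlet
        new[-1] = new[-2]       # zero-gradient outlet
        c = new
    return c
```

Starting from a bed uniformly loaded with solute and washing with clean liquor, the solute concentration is flushed toward the outlet, which is the displacement-washing behavior the concentration profiles in the paper describe.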
Scharfstein, Daniel; McDermott, Aidan; Díaz, Iván; Carone, Marco; Lunardon, Nicola; Turkoz, Ibrahim
2017-05-23
In practice, both testable and untestable assumptions are generally required to draw inference about the mean outcome measured at the final scheduled visit in a repeated measures study with drop-out. Scharfstein et al. (2014) proposed a sensitivity analysis methodology to determine the robustness of conclusions within a class of untestable assumptions. In their approach, the untestable and testable assumptions were guaranteed to be compatible; their testable assumptions were based on a fully parametric model for the distribution of the observable data. While convenient, these parametric assumptions have proven especially restrictive in empirical research. Here, we relax their distributional assumptions and provide a more flexible, semi-parametric approach. We illustrate our proposal in the context of a randomized trial for evaluating a treatment of schizoaffective disorder. © 2017, The International Biometric Society.
Martínez-Camblor, Pablo
2017-02-01
Meta-analyses, broadly defined as the quantitative review and synthesis of the results of related but independent comparable studies, make it possible to assess the state of the art on a given topic. Since the amount of available literature has grown in almost all fields, and specifically in biomedical research, their popularity has increased drastically during the last decades. In particular, different methodologies have been developed for performing meta-analytic studies of diagnostic tests under both fixed- and random-effects models. From a parametric point of view, these techniques often compute a bivariate estimate of the sensitivity and the specificity using only one threshold per included study. Frequently, an overall receiver operating characteristic curve based on a bivariate normal distribution is also provided. In this work, the author deals with the problem of estimating an overall receiver operating characteristic curve with a fully non-parametric approach when the data come from a meta-analysis study, i.e., when only certain information about the diagnostic capacity is available. Both fixed- and random-effects models are considered. In addition, the proposed methodology allows the use of the information from all available cut-off points (not only one of them) in the selected original studies. The performance of the method is explored through Monte Carlo simulations. The observed results suggest that the proposed estimator is better than the reference one when the reported information is related to a threshold based on the Youden index and when information for two or more points is provided. Real data illustrations are included.
Penalized Likelihood for General Semi-Parametric Regression Models.
1985-05-01
should be stressed that q, while it may be somewhat less than n, will still be ’large’, and parametric estimation of £ will not be appropriate...Partial spline models for the semi- parametric estimation of functions of several variables, in Statistical Analysis of Time Series, Tokyo: Institute of
Castellani, Umberto; Cristani, Marco; Combi, Carlo; Murino, Vittorio; Sbarbati, Andrea; Marzola, Pasquina
2008-11-01
This paper presents Visual MRI, an innovative tool for the magnetic resonance imaging (MRI) analysis of tumoral tissues. The main goal of the analysis is to separate each magnetic resonance image into meaningful clusters, highlighting zones which are more probably related to the cancer evolution. Such non-invasive analysis serves to address novel cancer treatments, resulting in a less destabilizing and more effective type of therapy than the chemotherapy-based ones. Visual MRI brings two advancements: first, it integrates effective information visualization (IV) techniques into a clustering framework, which separates each MRI image into a set of informative clusters; the second improvement lies in the clustering framework itself, which is derived from a recently re-discovered non-parametric grouping strategy, i.e., the mean shift. The proposed methodology merges visualization methods and data mining techniques, providing a computational framework that allows the physician to move effectively from the MRI image to the images displaying the derived parameter space. An unsupervised non-parametric clustering algorithm, derived from the mean shift paradigm and called MRI-mean shift, is the novel data mining technique proposed here. The main underlying idea of such an approach is that the parameter space is regarded as an empirical probability density function to estimate: the possible separate modes and their attraction basins represent separated clusters. The mean shift algorithm needs sensitivity threshold values to be set, which could lead to highly different segmentation results. Usually, these values are set by hand. Here, with the MRI-mean shift algorithm, we propose a strategy based on a structured optimality criterion which effectively addresses this issue, resulting in a completely unsupervised clustering framework. A linked brushing visualization technique is then used for representing clusters on the parameter space and on the MRI image
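The mean shift idea underlying the MRI-mean shift algorithm can be sketched in one dimension: every point is iteratively shifted to the mean of its neighbours within a bandwidth, and points that converge to the same density mode form one cluster. This is a minimal illustrative sketch with a flat kernel; the `mean_shift_1d` helper is an assumption for illustration, not the authors' implementation.

```python
import statistics

def mean_shift_1d(points, bandwidth, tol=1e-6, max_iter=200):
    """Shift each point to the mean of its neighbours within `bandwidth`
    until convergence; points converging to the same mode share a cluster."""
    modes = []
    for x in points:
        for _ in range(max_iter):
            neighbours = [p for p in points if abs(p - x) <= bandwidth]
            new_x = statistics.fmean(neighbours)
            if abs(new_x - x) < tol:
                break
            x = new_x
        modes.append(x)
    # Merge modes that landed close together into cluster labels.
    centres, labels = [], []
    for m in modes:
        for i, c in enumerate(centres):
            if abs(m - c) < bandwidth / 2:
                labels.append(i)
                break
        else:
            centres.append(m)
            labels.append(len(centres) - 1)
    return labels, centres

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
labels, centres = mean_shift_1d(data, bandwidth=1.0)
```

On this toy data the two well-separated groups converge to modes near 1.0 and 5.0; the bandwidth plays the role of the sensitivity threshold discussed in the abstract.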
Generative pulsar timing analysis
NASA Astrophysics Data System (ADS)
Lentati, L.; Alexander, P.; Hobson, M. P.
2015-03-01
A new Bayesian method for the analysis of folded pulsar timing data is presented that allows for the simultaneous evaluation of evolution in the pulse profile in either frequency or time, along with the timing model and additional stochastic processes such as red spin noise, or dispersion measure variations. We model the pulse profiles using `shapelets' - a complete orthonormal set of basis functions that allow us to recreate any physical profile shape. Any evolution in the profiles can then be described as either an arbitrary number of independent profiles, or using some functional form. We perform simulations to compare this approach with established methods for pulsar timing analysis, and to demonstrate model selection between different evolutionary scenarios using the Bayesian evidence. The simplicity of our method allows for many possible extensions, such as including models for correlated noise in the pulse profile, or broadening of the pulse profiles due to scattering. As such, while it is a marked departure from standard pulsar timing analysis methods, it has clear applications for both new and current data sets, such as those from the European Pulsar Timing Array and International Pulsar Timing Array.
False-positive rates in two-point parametric linkage analysis.
Szymczak, Silke; Simpson, Claire L; Cropp, Cheryl D; Bailey-Wilson, Joan E
2014-01-01
Two-point linkage analyses of whole genome sequence data are a promising approach to identify rare variants that segregate with complex diseases in large pedigrees because, in theory, the causal variants have been genotyped. We used whole genome sequence data and simulated traits provided by Genetic Analysis Workshop 18 to evaluate the proportion of false-positive findings in a binary trait using classic two-point parametric linkage analysis. False-positive genome-wide significant log of odds (LOD) scores were identified in more than 80% of 50 replicates for a binary phenotype generated by dichotomizing a quantitative trait that was simulated with a polygenic component (that was not based on any of the provided whole genome sequence genotypes). In contrast, when the trait was truly nongenetic (created by randomly assigning affected-unaffected status), the number of false-positive results was well controlled. These results suggest that when using two-point linkage analyses on whole genome sequence data, one should carefully examine regions yielding significant two-point LOD scores with multipoint analysis and that a more stringent significance threshold may be needed.
A Bayesian Semi-parametric Approach for the Differential Analysis of Sequence Counts Data
Guindani, Michele; Sepúlveda, Nuno; Paulino, Carlos Daniel; Müller, Peter
2014-01-01
Data obtained using modern sequencing technologies are often summarized by recording the frequencies of observed sequences. Examples include the analysis of T cell counts in immunological research and studies of gene expression based on counts of RNA fragments. In both cases the items being counted are sequences, of proteins and base pairs, respectively. The resulting sequence-abundance distribution is usually characterized by overdispersion. We propose a Bayesian semi-parametric approach to implement inference for such data. Besides modeling the overdispersion, the approach also takes into account two related sources of bias that are usually associated with sequence counts data: some sequence types may not be recorded during the experiment and the total count may differ from one experiment to another. We illustrate our methodology with two data sets, one regarding the analysis of CD4+ T cell counts in healthy and diabetic mice and another data set concerning the comparison of mRNA fragments recorded in a Serial Analysis of Gene Expression (SAGE) experiment with gastrointestinal tissue of healthy and cancer patients. PMID:24833809
Takesue, Hiroki; Inagaki, Takahiro
2016-09-15
A coherent Ising machine based on degenerate optical parametric oscillators (DOPOs) is drawing attention as a way to find a solution to the ground-state search problem of the Ising model. Here we report the generation of time-multiplexed DOPOs at a 10 GHz clock frequency. We successfully generated >50,000 DOPOs using dual-pump four-wave mixing in a highly nonlinear fiber that formed a 1 km cavity, and observed phase bifurcation of the DOPOs, which suggests that the DOPOs can be used as stable artificial spins. In addition, we demonstrated the generation of more than 1 million DOPOs by extending the cavity length to 21 km. We also confirmed that the binary numbers obtained from the DOPO phase-difference measurement passed the NIST random number test, which suggests that we can obtain unbiased artificial spins.
Gudmundsson, P; Shahgaldi, K; Winter, R; Dencker, M; Kitlinski, M; Thorsson, O; Ljunggren, L; Willenheimer, R
2010-01-01
Real-time perfusion (RTP) adenosine stress echocardiography (ASE) can be used to visually evaluate myocardial ischaemia. The RTP power modulation technique provides images for off-line parametric perfusion quantification using Qontrast software. From replenishment curves, this generates parametric images of peak signal intensity (A), myocardial blood flow velocity (beta) and myocardial blood flow (Axbeta) at rest and stress. This may be a tool for objective myocardial ischaemia evaluation. We assessed myocardial ischaemia by RTP-ASE Qontrast®-generated images, using 99mTc-tetrofosmin single-photon emission computed tomography (SPECT) as reference. Sixty-seven patients admitted to SPECT underwent RTP-ASE (SONOS 5500) during Sonovue infusion, before and throughout adenosine stress, also used for SPECT. Quantitative off-line analyses of myocardial perfusion by RTP-ASE Qontrast-generated A, beta and Axbeta images, at different time points during rest and stress, were blindly compared to SPECT. We analysed 201 coronary territories [corresponding to the left anterior descending (LAD), left circumflex (LCx) and right coronary (RCA) arteries] from 67 patients. SPECT showed ischaemia in 18 patients. Receiver operator characteristics and kappa values showed that A, beta and Axbeta image interpretation significantly identified ischaemia in all territories (area under the curve 0.66-0.80, P = 0.001-0.05). Combined A, beta and Axbeta image interpretation gave the best results and the closest agreement was seen in the LAD territory: 89% accuracy; kappa 0.63; P<0.001. Myocardial ischaemia can be evaluated in the LAD territory using RTP-ASE Qontrast-generated images, especially by combined A, beta and Axbeta image interpretation. However, the technique needs improvements regarding the LCx and RCA territories.
Basic parametric analysis for a multi-state model in hospital epidemiology.
von Cube, Maja; Schumacher, Martin; Wolkewitz, Martin
2017-07-20
The extended illness-death model is a useful tool to study the risks and consequences of hospital-acquired infections (HAIs). The statistical quantities of interest are the transition-specific hazard rates and the transition probabilities as well as attributable mortality (AM) and the population-attributable fraction (PAF). In the most general case, calculation of these expressions is mathematically complex. When time-constant hazards are assumed, calculation of the quantities of interest is simplified. In this situation the transition probabilities can be expressed in closed mathematical forms. The estimators for AM and PAF can be easily derived from these forms. In this paper, we show how to explicitly calculate all the transition probabilities of an extended illness-death model with constant hazards. Using a parametric model to estimate the time-constant transition-specific hazard rates of a data example, the transition probabilities, AM and PAF can be directly calculated. With a publicly available data example, we show how the approach provides first insights into principle time-dynamics and data structure. Assuming constant hazards facilitates the understanding of multi-state processes. Even in a non-constant hazards setting, the approach is a helpful first step for a comprehensive investigation of complex data.
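Under the constant-hazards assumption the transition probabilities of a basic illness-death model do have closed forms, obtained by solving the Kolmogorov forward equations. The sketch below is illustrative only; the function name and the three-state labelling (0 = initial, 1 = intermediate, e.g. HAI, 2 = absorbing) are assumptions, and the extended model of the paper has more states.

```python
from math import exp

def transition_probs(t, h01, h02, h12):
    """Closed-form transition probabilities of an illness-death model
    with constant hazards h01 (0->1), h02 (0->2) and h12 (1->2)."""
    h0 = h01 + h02                       # total hazard of leaving state 0
    p00 = exp(-h0 * t)
    p11 = exp(-h12 * t)
    if abs(h12 - h0) < 1e-12:            # degenerate case h12 == h01 + h02
        p01 = h01 * t * exp(-h0 * t)
    else:
        p01 = h01 * (exp(-h0 * t) - exp(-h12 * t)) / (h12 - h0)
    p02 = 1.0 - p00 - p01                # rows of the matrix sum to one
    return {"00": p00, "01": p01, "02": p02, "11": p11, "12": 1.0 - p11}

probs = transition_probs(1.0, 0.1, 0.2, 0.5)
```

For small t, p01 behaves like h01*t, which is a quick sanity check on the closed form.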
Reis, Yara; Bernardo-Faura, Marti; Richter, Daniela; Wolf, Thomas; Brors, Benedikt; Hamacher-Brady, Anne; Eils, Roland; Brady, Nathan R
2012-01-01
Mitochondria exist as a network of interconnected organelles undergoing constant fission and fusion. Current approaches to study mitochondrial morphology are limited by low data sampling coupled with manual identification and classification of complex morphological phenotypes. Here we propose an integrated mechanistic and data-driven modeling approach to analyze heterogeneous, quantified datasets and infer relations between mitochondrial morphology and apoptotic events. We initially performed high-content, multi-parametric measurements of mitochondrial morphological, apoptotic, and energetic states by high-resolution imaging of human breast carcinoma MCF-7 cells. Subsequently, decision tree-based analysis was used to automatically classify networked, fragmented, and swollen mitochondrial subpopulations, at the single-cell level and within cell populations. Our results revealed subtle but significant differences in morphology class distributions in response to various apoptotic stimuli. Furthermore, key mitochondrial functional parameters including mitochondrial membrane potential and Bax activation, were measured under matched conditions. Data-driven fuzzy logic modeling was used to explore the non-linear relationships between mitochondrial morphology and apoptotic signaling, combining morphological and functional data as a single model. Modeling results are in accordance with previous studies, where Bax regulates mitochondrial fragmentation, and mitochondrial morphology influences mitochondrial membrane potential. In summary, we established and validated a platform for mitochondrial morphological and functional analysis that can be readily extended with additional datasets. We further discuss the benefits of a flexible systematic approach for elucidating specific and general relationships between mitochondrial morphology and apoptosis.
Infinitesimal-area 2D radiative analysis using parametric surface representation, through NURBS
Daun, K.J.; Hollands, K.G.T.
1999-07-01
The use of form factors in the treatment of radiant enclosures requires that the radiosity and surface properties be treated as uniform over finite areas. This restriction can be relaxed by applying an infinitesimal-area analysis, where the radiant exchange is taken to be between infinitesimal areas, rather than finite areas. This paper presents a generic infinitesimal-area formulation that can be applied to two-dimensional enclosure problems. (Previous infinitesimal-area analyses have largely been restricted to specific, one-dimensional problems.) Specifically, the paper shows how the analytical expression for the kernel of the integral equation can be obtained without human intervention, once the enclosure surface has been defined parametrically. This can be accomplished by using a computer algebra package or by using NURBS algorithms, which are the industry standard for the geometrical representations used in CAD-CAM codes. Once the kernel has been obtained by this formalism, the 2D integral equation can be set up and solved numerically. The result is a single general-purpose infinitesimal-area analysis code that can proceed from surface specification to solution. The authors have implemented this 2D code and tested it on 1D problems, whose solutions have been given in the literature, obtaining agreement commensurate with the accuracy of the published solutions.
NASA Astrophysics Data System (ADS)
Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Kwak, Byung-Joon
2013-07-01
This study aimed to quantitatively analyze data from diffusion tensor imaging (DTI) using statistical parametric mapping (SPM) in patients with brain disorders and to assess its potential utility for analyzing brain function. DTI was obtained by performing 3.0-T magnetic resonance imaging for patients with Alzheimer's disease (AD) and vascular dementia (VD), and the data were analyzed using Matlab-based SPM software. The two-sample t-test was used for error analysis of the location of the activated pixels. We compared regions of white matter where the fractional anisotropy (FA) values were low and the apparent diffusion coefficients (ADCs) were increased. In the AD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right sub-lobar insula, and right occipital lingual gyrus whereas the ADCs were significantly increased in the right inferior frontal gyrus and right middle frontal gyrus. In the VD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right limbic cingulate gyrus, and right sub-lobar caudate tail whereas the ADCs were significantly increased in the left lateral globus pallidus and left medial globus pallidus. In conclusion, by using DTI and SPM analysis, we were able not only to determine the structural state of the regions affected by brain disorders but also to quantitatively analyze and assess brain function.
NASA Astrophysics Data System (ADS)
Adamyan, H. H.; Kryuchkyan, G. Yu.
2006-08-01
We investigate semiclassical dynamics and quantum properties of light beams generated in a time-modulated nondegenerate optical parametric oscillator (NOPO). With a view to producing continuous-variable (CV) entangled states of light beams, we propose two experimentally feasible schemes of NOPO: (i) driven by a continuously modulated pump field; (ii) under the action of a periodic sequence of identical laser pulses. It is shown that the time modulation of the pump field amplitude essentially improves the degree of CV entanglement in NOPO. On the whole the level of integral two-mode squeezing, which characterizes the degree of CV entanglement, goes below the standard limit established in an ordinary NOPO with monochromatic pumping. We develop semiclassical and quantum theories of these devices for both below- and above-threshold regimes of generation. Properties of CV entanglement for various operational regimes of the devices are discussed in the time domain in application to time-resolved quantum information technologies. Our analytical results are in good agreement with the results of numerical simulation and support a concept of CV entangled states of time-modulated light beams.
Varabyova, Yauheniya; Schreyögg, Jonas
2013-09-01
There is a growing interest in the cross-country comparisons of the performance of national health care systems. The present work provides a comparison of the technical efficiency of the hospital sector using unbalanced panel data from OECD countries over the period 2000-2009. The estimation of the technical efficiency of the hospital sector is performed using nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Internal and external validity of findings is assessed by estimating the Spearman rank correlations between the results obtained in different model specifications. The panel-data analyses using two-step DEA and one-stage SFA show that countries that have higher health care expenditure per capita tend to have a more technically efficient hospital sector. Whether the expenditure is financed through private or public sources is not related to the technical efficiency of the hospital sector. On the other hand, the hospital sector in countries with higher income inequality and longer average hospital length of stay is less technically efficient. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
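The Spearman rank correlation used above to compare model specifications is straightforward to compute: rank both series (with average ranks for ties) and take the Pearson correlation of the ranks. A self-contained sketch follows; the function names are illustrative, not from the paper.

```python
def average_ranks(xs):
    """1-based ranks of `xs`, with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A rho near 1 between, say, DEA and SFA efficiency scores would indicate the two specifications rank countries consistently.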
Non-parametric seismic hazard analysis in the presence of incomplete data
NASA Astrophysics Data System (ADS)
Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush
2017-01-01
The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distribution, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study aims to introduce an imputation procedure for completing earthquake catalog data that will allow the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate earthquake magnitude distribution in Tehran, the capital city of Iran.
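Non-parametric density estimation of magnitudes, once the catalog is completed, can be sketched with a Gaussian kernel density estimator and Silverman's rule-of-thumb bandwidth. The helper name and the bandwidth rule are illustrative assumptions, not necessarily the estimator used in the study.

```python
from math import exp, pi, sqrt

def gaussian_kde(sample, bandwidth=None):
    """Return a density function f(m) estimated from the magnitudes in
    `sample`; Silverman's rule of thumb is used when no bandwidth is given."""
    n = len(sample)
    if bandwidth is None:
        mean = sum(sample) / n
        sd = sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
        bandwidth = 1.06 * sd * n ** (-1 / 5)
    def f(m):
        # Average of Gaussian kernels centred at the observed magnitudes.
        return sum(exp(-0.5 * ((m - x) / bandwidth) ** 2)
                   for x in sample) / (n * bandwidth * sqrt(2 * pi))
    return f

density = gaussian_kde([4.1, 4.5, 5.0, 5.2, 6.1])
```

The resulting f integrates to one, so it can be plugged directly into hazard integrals over magnitude.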
NASA Technical Reports Server (NTRS)
Mizukami, M.; Saunders, J. D.
1995-01-01
The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high fidelity 2D simulation, and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids and bleed models. Overall, the k-ε turbulence model, and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, by providing a higher fidelity simulation of the inlet flowfield than inviscid methods, in a reasonable turnaround time.
A Further Contribution to the Parametric Analysis of a PCM Energy Storage System
NASA Astrophysics Data System (ADS)
Casano, G.; Piva, S.
2017-01-01
In recent years heat storage using phase change materials has also been considered in the thermal control of electronic devices. In a recent work we presented some results of a parametric analysis of an energy storage system with a phase change material undergoing a two-level steady-periodic heat boundary condition, as happens in certain electronic equipment. In particular, a hybrid system composed of a finned surface partially filled with a PCM was analysed. This solution, which combines both passive (PCM) and active (fins and fans) cooling solutions, is of interest for high power amplifiers characterized by different levels of power dissipation, as is the case of telecom base station power amplifiers, where the power is proportional to the traffic load. In the present paper we analyse some parameters not previously investigated, in particular the dimensionless transition temperature, for the role played by this parameter in limiting the operating temperature reached during the peaks of the power input. The study has provided further useful information for the design of these hybrid cooling systems.
A Parametric Cycle Analysis of a Separate-Flow Turbofan with Interstage Turbine Burner
NASA Technical Reports Server (NTRS)
Marek, C. J. (Technical Monitor); Liew, K. H.; Urip, E.; Yang, S. L.
2005-01-01
Today's modern aircraft is based on air-breathing jet propulsion systems, which use moving fluids as substances to transform energy carried by the fluids into power. Throughout aero-vehicle evolution, improvements have been made to engine efficiency and pollutant reduction. This study focuses on a parametric cycle analysis of a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). The ITB considered in this paper is a relatively new concept in modern jet engine propulsion. The ITB serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., the transition duct. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, linear relation between high- and low-pressure turbines, and high-pressure turbine inlet temperature to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB.
Brooks, S P; Suelter, C H
1986-09-01
An IBM computer program, WILMAN4, is described which calculates the estimates Km, V and Km/V from initial velocity measurements according to one of four statistical methods. Three of these methods involve linear regression analysis using weights obtained by assuming: (i) constant absolute error (G.N. Wilkinson, 1961, Biochem. J., 80, 324-332), (ii) constant relative error (G. Johansen and R. Lumry, 1961, C.R. Trav. Lab. Carlsberg, 32, 185-214) and (iii) an error function in between the above two cases (A. Cornish-Bowden, 1976, Principles of Enzyme Kinetics, Butterworths Inc, Boston, Mass., pp. 168-193). The fourth method is a non-parametric procedure derived by Eisenthal and Cornish-Bowden (Biochim. Biophys. Acta, 532 (1974) 268-272). Residuals are obtained by subtracting the calculated velocities from the experimental ones. Outliers, or residuals which are greater than two experimental standard deviations, can be identified and removed from the data set. If the sequence of positive and negative signs of the residuals is random, as determined by a statistical probability calculation, the data set is assumed to obey the Michaelis-Menten equation.
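The fourth, non-parametric method (the Eisenthal and Cornish-Bowden direct linear plot) rewrites each observation (s_i, v_i) as a line V = v_i + (v_i/s_i)·Km in (Km, V) space and takes the medians of the pairwise intersection coordinates as the estimates. A minimal sketch, not the WILMAN4 code:

```python
from statistics import median

def direct_linear_plot(s, v):
    """Non-parametric Km and V estimates from substrate concentrations `s`
    and initial velocities `v` (Eisenthal & Cornish-Bowden, 1974)."""
    kms, vmaxes = [], []
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            denom = v[i] / s[i] - v[j] / s[j]
            if denom == 0:
                continue                 # parallel lines: no intersection
            km = (v[j] - v[i]) / denom   # Km coordinate of the intersection
            kms.append(km)
            vmaxes.append(v[i] + v[i] * km / s[i])
    return median(kms), median(vmaxes)

# Noise-free data generated from v = V*s/(Km+s) with V = 10, Km = 2.
km_est, v_est = direct_linear_plot([1, 2, 4, 8], [10 / 3, 5.0, 20 / 3, 8.0])
```

Because medians are used, a single outlying velocity perturbs the estimates far less than it would a least-squares fit, which is the appeal of the method.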
Ramtani, S
2007-08-01
The relative importance of the various parameters in inducing bone mass loss and osteoclastic perforations is still controversial. Therefore, there is significant motivation to better understand the parameters behind such dynamic response, and great interest in carrying out a parametric sensitivity study, as it can provide useful information. As an application, the widely accepted bone remodelling equation [M.G. Mullender, R. Huiskes, H. Weinans, A physiological approach to the simulation of bone remodeling as self organizational control process, J. Biomech. 27 (1994) 1389.] is investigated using the "n units" model [M. Zidi, S. Ramtani, Bone remodeling theory applied to the study of n unit-elements model, J Biomech. 32 (1999) 743.]. This analysis pointed out that the power in the modulus-density relationship p and the power to which density is raised in normalizing the energy stimulus q, although known to be strongly implicated in the stability condition of the remodelling process, were found to be insensitive parameters in the bone loss area.
Stott, Shannon L; Irimia, Daniel; Karlsson, Jens O M
2004-04-01
A microscale theoretical model of intracellular ice formation (IIF) in a heterogeneous tissue volume comprising a tumor mass and surrounding normal tissue is presented. Intracellular ice was assumed to form either by intercellular ice propagation or by processes that are not affected by the presence of ice in neighboring cells (e.g., nucleation or mechanical rupture). The effects of cryosurgery on a 2D tissue consisting of 10(4) cells were simulated using a lattice Monte Carlo technique. A parametric analysis was performed to assess the specificity of IIF-related cell damage and to identify criteria for minimization of collateral damage to the healthy tissue peripheral to the tumor. Among the parameters investigated were the rates of interaction-independent IIF and intercellular ice propagation in the tumor and in the normal tissue, as well as the characteristic length scale of thermal gradients in the vicinity of the cryosurgical probe. Model predictions suggest gap junctional intercellular communication as a potential new target for adjuvant therapies complementing the cryosurgical procedure.
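The lattice Monte Carlo idea described above, where each cell may freeze either by an interaction-independent process or by propagation from an already-frozen neighbour, can be sketched on a small 2D grid. All rates, the 4-neighbour rule and the synchronous update are illustrative assumptions, not the authors' calibrated tumor/normal-tissue model.

```python
import random

def simulate_iif(n=20, p_spont=0.02, p_prop=0.3, steps=30, seed=1):
    """Toy lattice Monte Carlo of intracellular ice formation: per step,
    an unfrozen cell freezes spontaneously with probability p_spont, and
    each frozen 4-neighbour adds an independent chance p_prop.
    Returns the final frozen fraction of the n x n lattice."""
    random.seed(seed)
    frozen = [[False] * n for _ in range(n)]
    for _ in range(steps):
        nxt = [row[:] for row in frozen]
        for i in range(n):
            for j in range(n):
                if frozen[i][j]:
                    continue
                p = p_spont
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and frozen[ni][nj]:
                        # Combine independent freezing chances.
                        p = 1 - (1 - p) * (1 - p_prop)
                if random.random() < p:
                    nxt[i][j] = True
        frozen = nxt
    return sum(row.count(True) for row in frozen) / (n * n)

frozen_fraction = simulate_iif()
```

Varying p_prop separately inside and outside a "tumor" region is the kind of parametric sweep the abstract describes for assessing collateral damage.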
Parametric analysis of occupant ankle and tibia injuries in frontal impact
Mo, Fuhao; Jiang, Xiaoqing; Duan, Shuyong; Xiao, Zhi; Shi, Wei
2017-01-01
Objective Non-fatal tibia and ankle injuries without proper protection from the restraint system have received wide attention from researchers. This study aimed to investigate occupant tibia and ankle injuries under a realistic frontal impact environment that is rarely considered in previous experimental and simulation studies. Methods An integrated occupant-vehicle model was established by coupling an isolated car cab model and a hybrid occupant model with a biofidelic pelvis-lower limb model, while its loading conditions were extracted from a realistic full-frontal impact test. A parametric study was implemented concerning instrument panel (IP) design and pedal intrusion/rotation parameters. Results The significant influences of the IP angle, pedal intrusion and pedal rotation on tibia axial force, tibia bending moment and ankle dorsiflexion angle are noted. By coupling their effects, a new evaluation index named CAIEI (Combined Ankle Injury Evaluation Index) is established to robustly evaluate ankle injury (including tibia fractures in the ankle region) risk and severity. Conclusions Overall results and analysis indicate that the ankle dorsiflexion angle should be considered when judging lower limb injury under frontal impact. Meanwhile, the current index, with coupling effects of tibia axial force, bending moment and ankle dorsiflexion angle, is in good correlation with the simulation injury outcomes. PMID:28910377
Doherty, Joshua R.; Dumont, Douglas M.; Trahey, Gregg E.; Palmeri, Mark L.
2012-01-01
Plaque rupture is the most common cause of complications such as stroke and coronary heart failure. Recent histopathological evidence suggests that several plaque features, including a large lipid core and a thin fibrous cap, are associated with plaques most at risk for rupture. Acoustic Radiation Force Impulse (ARFI) imaging, a recently developed ultrasound-based elasticity imaging technique, shows promise for imaging these features noninvasively. Clinically, this could be used to distinguish vulnerable plaques, for which surgical intervention may be required, from those less prone to rupture. In this study, a parametric analysis using Finite-Element Method (FEM) models was performed to simulate ARFI imaging of five different carotid artery plaques across a wide range of material properties. It was demonstrated that ARFI could resolve the softer lipid pool from the surrounding, stiffer media and fibrous cap and was most dependent upon the stiffness of the lipid pool component. Stress concentrations due to an ARFI excitation were located in the media and fibrous cap components. In all cases, the maximum Von Mises stress was < 1.2 kPa. In comparing these results with others investigating plaque rupture, it is concluded that while the mechanisms may be different, the Von Mises stresses imposed by ARFI are orders of magnitude lower than the stresses associated with blood pressure. PMID:23122224
NASA Astrophysics Data System (ADS)
Ozden, Ender; Tari, Ilker
2016-02-01
A Polymer Electrolyte Membrane (PEM) fuel cell is numerically investigated both as fresh and as degraded, with the help of observed degradation patterns reported in the literature. The fresh fuel cell model is validated and verified with data from the literature. By modifying the model to vary the parameters affected by degradation, a degraded PEM fuel cell model is created. The degraded fuel cell is parametrically analyzed using commercial Computational Fluid Dynamics (CFD) software. The investigated parameters are the membrane equivalent weight, the Catalyst Layer (CL) porosity and viscous resistance, the Gas Diffusion Layer (GDL) porosity and viscous resistance, and the bipolar plate contact resistance. It is shown for the first time that overall PEM fuel cell degradation can be numerically estimated by combining experimental data from degraded individual components. By comparing the simulation results for the fresh and the degraded PEM fuel cells over two years of operation, it is concluded that the effect of overall degradation on cell potential is significant - estimated to be 17% around the operating point of the fuel cell, at 0.95 V open circuit voltage and 70 °C operating temperature.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
Dismuke, C E; Sena, V
1999-05-01
The use of Diagnosis Related Groups (DRG) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to gradually phase in the use of the DRG system for budget allocation by using blended hospital-specific and national DRG case-mix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital specific costs was to decrease, while the percentage based on DRG case-mix was to increase. This was scheduled to continue until 1995 when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third party payers are currently paying based on DRGs, the adoption of DRG case-mix as a National Health Service budget setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget setting criterion as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992-1994, using both parametric and non-parametric frontier models. We find evidence
Predicting analysis times in randomized clinical trials with cancer immunotherapy.
Chen, Tai-Tsang
2016-02-01
A new class of immuno-oncology agents has recently been shown to induce long-term survival in a proportion of treated patients. This phenomenon poses unique challenges for the prediction of analysis time in event-driven studies. If the phenomenon of long-term survival is not accounted for properly, the accuracy of the prediction based on the existing methods may be substantially compromised. Parametric mixture cure rate models with the best fit to empirical clinical trial data were proposed to predict analysis times in immuno-oncology studies during the course of the study. The proposed prediction procedure also accounts for the mechanism of action introduced by cancer immunotherapies, such as delayed and long-term survival effects. The proposed methodology was retrospectively applied to a randomized phase III immuno-oncology clinical trial. Among various parametric mixture cure rate models, the Weibull cure rate model was found to be the best-fitting model for this study. The unique survival kinetics of cancer immunotherapy was captured in the longitudinal predictions of the final analysis times. Parametric mixture cure rate models, along with estimated long-term survival rates, probabilities of study incompletion, and expected statistical powers over time, provide immuno-oncology clinical trial researchers with a useful tool for continuous event monitoring and prediction of analysis times, such that informed decisions with quantifiable risks can be made for better resource and logistic planning.
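The Weibull mixture cure rate model named in this abstract has a standard closed form: a cured fraction never experiences the event, while the remainder follows a Weibull survival curve. A minimal sketch follows; the function names and all parameter values are illustrative assumptions, not values from the study.

```python
import math

def weibull_cure_survival(t, cure_fraction, scale, shape):
    """Population survival under a Weibull mixture cure model:
    S(t) = pi + (1 - pi) * exp(-(t / scale) ** shape),
    where pi is the cured (long-term survivor) fraction."""
    return cure_fraction + (1.0 - cure_fraction) * math.exp(-((t / scale) ** shape))

def expected_events(t, n_patients, cure_fraction, scale, shape):
    """Expected number of events observed by time t -- the quantity that
    drives analysis-time prediction in an event-driven trial."""
    return n_patients * (1.0 - weibull_cure_survival(t, cure_fraction, scale, shape))
```

As t grows, the survival curve plateaus at the cure fraction, which is exactly the long-term survival behaviour the abstract says standard (non-cure) models fail to capture.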
Marmarelis, Vasilis Z.; Shin, Dae C.; Zhang, Yaping; Kautzky-Willer, Alexandra; Pacini, Giovanni; D’Argenio, David Z.
2013-01-01
Background: Modeling studies of the insulin–glucose relationship have mainly utilized parametric models, most notably the minimal model (MM) of glucose disappearance. This article presents results from the comparative analysis of the parametric MM and a nonparametric Laguerre based Volterra Model (LVM) applied to the analysis of insulin modified (IM) intravenous glucose tolerance test (IVGTT) data from a clinical study of gestational diabetes mellitus (GDM). Methods: An IM IVGTT study was performed 8 to 10 weeks postpartum in 125 women who were diagnosed with GDM during their pregnancy [population at risk of developing diabetes (PRD)] and in 39 control women with normal pregnancies (control subjects). The measured plasma glucose and insulin from the IM IVGTT in each group were analyzed via a population analysis approach to estimate the insulin sensitivity parameter of the parametric MM. In the nonparametric LVM analysis, the glucose and insulin data were used to calculate the first-order kernel, from which a diagnostic scalar index representing the integrated effect of insulin on glucose was derived. Results: Both the parametric MM and nonparametric LVM describe the glucose concentration data in each group with good fidelity, with an improved measured versus predicted r2 value for the LVM of 0.99 versus 0.97 for the MM analysis in the PRD. However, application of the respective diagnostic indices of the two methods does result in a different classification of 20% of the individuals in the PRD. Conclusions: It was found that the data based nonparametric LVM revealed additional insights about the manner in which infused insulin affects blood glucose concentration. PMID:23911176
Gao, Feng; Manatunga, Amita K; Chen, Shande
2007-02-20
In many biomedical and epidemiologic studies, estimating the hazard function is of interest. Breslow's estimator is commonly used for estimating the integrated baseline hazard, but it requires the functional form of the covariate effects to be correctly specified. It is generally difficult to identify the true functional form of covariate effects in the presence of time-dependent covariates. To provide a complementary method to the traditional proportional hazards model, we propose a tree-type method that simultaneously estimates both the baseline hazard function and the effects of time-dependent covariates. Our interest is focused on exploring potential data structures rather than on formal hypothesis testing. The proposed method approximates the baseline hazard and the covariate effects with step functions. The jump points in time and in covariate space are searched via an algorithm based on the improvement of the full log-likelihood function. In contrast to most other estimation methods, the proposed method estimates the hazard function itself rather than the integrated hazard. The method is applied to model the risk of withdrawal in a clinical trial that evaluates an anti-depression treatment in preventing the development of clinical depression. Finally, the performance of the method is evaluated by several simulation studies.
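The step-function approximation underlying such a tree-type method can be sketched as a piecewise-constant hazard estimate: within each interval, the hazard is the event count divided by the person-time at risk. This is only an illustrative building block (the interval cutpoints and data below are hypothetical; the paper searches its jump points via log-likelihood improvement).

```python
def piecewise_hazard(times, events, cutpoints):
    """Step-function hazard estimate on intervals defined by `cutpoints`:
    hazard = (# events in interval) / (person-time at risk in interval).
    `events` holds 1 for an observed event, 0 for a censored time."""
    edges = [0.0] + list(cutpoints) + [float("inf")]
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # events whose observed time falls inside [lo, hi)
        d = sum(1 for t, e in zip(times, events) if e and lo <= t < hi)
        # person-time each subject contributes to [lo, hi)
        pt = sum(min(t, hi) - lo for t in times if t > lo)
        rates.append(d / pt if pt > 0 else 0.0)
    return rates
```

For example, four subjects with event times 1, 2, 3, 4 and a single cutpoint at 2.5 give a low early hazard and a higher late hazard.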
Ripley, David P; Motwani, Manish; Brown, Julia M; Nixon, Jane; Everett, Colin C; Bijsterveld, Petra; Maredia, Neil; Plein, Sven; Greenwood, John P
2015-07-15
The CE-MARC study investigated the diagnostic performance of cardiovascular magnetic resonance (CMR) in patients with suspected coronary artery disease (CAD). The study used a multi-parametric CMR protocol assessing 4 components: i) left ventricular function; ii) myocardial perfusion; iii) viability (late gadolinium enhancement (LGE)); and iv) coronary magnetic resonance angiography (MRA). In this pre-specified CE-MARC sub-study we assessed the diagnostic accuracy of the individual CMR components and their combinations. All patients from the CE-MARC population (n = 752) were included, using data from the original blinded-read. The diagnostic accuracy of each of the four core components of the CMR protocol was determined separately, and then in paired and triplet combinations; results were compared to the full multi-parametric protocol. CMR and X-ray angiography results were available in 676 patients. The maximum sensitivity for the detection of significant CAD by CMR was achieved when all four components were used (86.5%). The specificity of perfusion (91.8%), function (93.7%) and LGE (95.8%) alone was significantly better than the specificity of the multi-parametric protocol (83.4%) (all P < 0.0001), but with the penalty of decreased sensitivity (76.9%, 47.4% and 40.8%, respectively, vs. 86.5%). The full multi-parametric protocol was the optimum to rule out significant CAD (negative likelihood ratio (LR-) 0.16) and the LGE component alone was the best to rule in CAD (LR+ 9.81). Overall diagnostic accuracy was similar for the full multi-parametric protocol (85.9%) compared to paired and triplet combinations. The use of coronary MRA within the full multi-parametric protocol had no additional diagnostic benefit compared to the perfusion/function/LGE combination (overall accuracy 84.6% vs. 84.2% (P = 0.5316); LR- 0.16 vs. 0.21; LR+ 5.21 vs. 5.77). From this pre-specified sub-analysis of the CE-MARC study, the full multi-parametric protocol had the highest sensitivity
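The rule-in and rule-out likelihood ratios quoted in the abstract above follow directly from sensitivity and specificity. A minimal sketch (the function name is ours; inputs are fractions, and specificity must be below 1 for LR+ to be defined):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test:
    LR+ = sens / (1 - spec)  -- how strongly a positive result rules in disease
    LR- = (1 - sens) / spec  -- how strongly a negative result rules it out."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg
```

With the full protocol's reported sensitivity of 86.5% and specificity of 83.4%, this reproduces the quoted LR- of about 0.16.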
NASA Astrophysics Data System (ADS)
Jadhav, Tushar S.; Lele, Mandar M.
2017-04-01
The experimental performance of different heat pipe heat exchanger (HPHX) configurations using distilled water as the working fluid is reported in the present study. The three HPHX configurations in the present investigation include an HPHX with a single wick structure (HPHX 1), an HPHX with a composite wick structure (HPHX 2) and a hybrid HPHX (HPHX 3), which is the combination of HPHX 1 and HPHX 2. The parameters considered for the parametric analysis of the HPHX in all three configurations are the outdoor air dry bulb temperature entering the evaporator section of the HPHX (OADBT), the return air dry bulb temperature entering the condenser section of the HPHX (RADBT), the outdoor air velocity (Ve) and the return air velocity (Vc). The OADBT is varied between 24 and 40 °C and the outdoor and return air velocities between 0.6 and 2.4 m/s. The parametric analysis of the HPHX without evaporative cooling is studied for RADBT = 24 °C, whereas the RADBT is maintained at 20 °C for the parametric analysis of the HPHX integrated with evaporative cooling. In comparison with the HPHX without evaporative cooling, the performance of the HPHX with evaporative cooling is enhanced by 17% for the single wick structure (HPHX 1), 47% for the composite wick structure (HPHX 2) and 59% for the hybrid HPHX (HPHX 3) at OADBT = 40 °C and Ve = Vc = 0.6 m/s. The results of the experimental analysis highlight the benefits of an HPHX integrated with evaporative cooling for achieving significant energy savings in air conditioning applications.
NASA Astrophysics Data System (ADS)
Nguyen, C.; Chandra, C. V.
2014-12-01
The separation of radar signatures depicting cloud and drizzle within a pulse radar volume is a fundamental problem whose solution is required to decouple the microphysical and dynamical processes introduced by turbulence. Such a solution would lead to the development of new meteorological products. In this presentation, a method to detect, separate and estimate multiple radar echoes from cloud and drizzle obtained from vertically pointing cloud Doppler spectra is described. In the case when only clouds are present, the Doppler spectrum is symmetrical and is well approximated by a Gaussian. To extract cloud echoes, a parametric maximum likelihood estimator in the time domain is employed using the recorded radar Doppler spectra data. To detect skewness in the radar spectrum, goodness-of-fit parameters are defined. It is shown that these new detection parameters exhibit low sensitivity to poor signal-to-noise ratios and large signal spectrum widths. The proposed method can consequently be applied to signals with shorter integration time; this significantly reduces the impact of small-scale dynamics present in the Doppler spectrum. Additionally, signals near the cloud top and cloud base are used as constraints to optimize the detection and estimation algorithm's performance. The applications of the technique include inference of the vertical air motion and the particle size distribution of the drizzle. The method will be tested on datasets that have been collected by the ARM cloud radars.
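The cloud-only Gaussian model above can be summarized by the spectrum's low-order moments, and a skewness statistic is one simple goodness-of-fit detection parameter for a drizzle mode. This is a moments-based sketch only (function names are ours; the study's actual estimator is a time-domain maximum-likelihood fit).

```python
import numpy as np

def doppler_moments(v, s):
    """Zeroth-to-second moments of a Doppler spectrum s on a uniform
    velocity grid v: total power, mean Doppler velocity, and spectrum
    width. A cloud-only spectrum is well approximated by the Gaussian
    these three moments define."""
    dv = v[1] - v[0]
    power = s.sum() * dv
    vbar = (v * s).sum() / s.sum()
    width = np.sqrt(((v - vbar) ** 2 * s).sum() / s.sum())
    return power, vbar, width

def spectral_skewness(v, s):
    """Third standardized moment of the spectrum; values far from zero
    flag the asymmetry that a drizzle mode adds to the cloud peak."""
    _, vbar, width = doppler_moments(v, s)
    return ((v - vbar) ** 3 * s).sum() / (s.sum() * width ** 3)
```

A symmetric (pure-cloud) spectrum yields skewness near zero; a drizzle tail pulls it away from zero, which is the detection cue.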
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach - parametric response mapping (PRM) - utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal-component-analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, and the components with eigenvalues greater than one were summed. Since the values of the principal-component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: Significant correlations were determined between both conventional and PCA-adjusted PRM and the 3He MRI apparent diffusion coefficient (p<0.001), CT RA950 (p<0.0001), and 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
NASA Astrophysics Data System (ADS)
Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry
2017-06-01
We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection method with receiver-side extrapolation. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This burden can be much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Moreover, changes in structure during time-lapse surveys are likely to occur in a small area, such as an oil and gas reservoir or a CO2 injection well, rather than across the whole region of the seismic experiment. We thus propose an approach that allows us to image, effectively and quantitatively, localized structural changes deep below both the source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we find the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using the correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate the errors in the obtained models in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both the initial models and the observed data.
Non-parametric graphnet-regularized representation of dMRI in space and time.
Fick, Rutger H J; Petiet, Alexandra; Santin, Mathieu; Philippe, Anne-Charlotte; Lehericy, Stephane; Deriche, Rachid; Wassermann, Demian
2017-09-14
Effective representation of the four-dimensional diffusion MRI signal - varying over three-dimensional q-space and diffusion time τ - is a sought-after and still unsolved challenge in diffusion MRI (dMRI). We propose a functional basis approach that is specifically designed to represent the dMRI signal in this qτ-space. Following recent terminology, we refer to our qτ-functional basis as "qτ-dMRI". qτ-dMRI can be seen as a time-dependent realization of q-space imaging by Paul Callaghan and colleagues. We use GraphNet regularization - imposing both signal smoothness and sparsity - to drastically reduce the number of diffusion-weighted images (DWIs) that is needed to represent the dMRI signal in the qτ-space. As the main contribution, qτ-dMRI provides the framework to - without making biophysical assumptions - represent the qτ-space signal and estimate time-dependent q-space indices (qτ-indices), providing a new means for studying diffusion in nervous tissue. We validate our method on both in-silico generated data using Monte-Carlo simulations and an in-vivo test-retest study of two C57Bl6 wild-type mice, where we found good reproducibility of estimated qτ-index values and trends. In the hopes of opening up new τ-dependent venues of studying nervous tissues, qτ-dMRI is the first of its kind in being specifically designed to provide open interpretation of the qτ-diffusion signal. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio
2015-01-01
The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the collision between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation with a high exposure time. This sensitivity to the electron beam has led specialists to acquire specimen projection images at very low exposure time, which gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We have run multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e. with different values of SNR) and equipped with gold beads to help in the assessment step. We propose herein a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to preserve edges neatly, and a Bayesian approach in the wavelet domain in which context modeling is used to estimate the parameters for each coefficient. To ensure a high signal-to-noise ratio, we verified that we were using the appropriate wavelet family at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate. For the bilateral filter, many tests were run in order to determine the proper filter parameters, namely the size of the filter, the range parameter and the
NASA Astrophysics Data System (ADS)
Touil, Sami; Degre, Aurore; Nacer Chabaca, Mohamed
2016-12-01
Improving the accuracy of pedotransfer functions (PTFs) requires studying how prediction uncertainty can be apportioned to different sources of uncertainty in inputs. In this study, the question addressed was as follows: which variable input is the main or best complementary predictor of water retention, and at which water potential? Two approaches were adopted to generate PTFs: multiple linear regressions (MLRs) for point PTFs and multiple nonlinear regressions (MNLRs) for parametric PTFs. Reliability tests showed that point PTFs provided better estimates than parametric PTFs (root mean square error, RMSE: 0.0414 and 0.0444 cm3 cm-3, and 0.0613 and 0.0605 cm3 cm-3 at -33 and -1500 kPa, respectively). The local parametric PTFs provided better estimates than Rosetta PTFs at -33 kPa. No significant difference in accuracy, however, was found between the parametric PTFs and Rosetta H2 at -1500 kPa with RMSE values of 0.0605 cm3 cm-3 and 0.0636 cm3 cm-3, respectively. The results of global sensitivity analyses (GSAs) showed that the mathematical formalism of PTFs and their input variables reacted differently in terms of point pressure and texture. The point and parametric PTFs were sensitive mainly to the sand fraction in the fine- and medium-textural classes. The use of clay percentage (C %) and bulk density (BD) as inputs in the medium-textural class improved the estimation of PTFs at -33 kPa.
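A point PTF of the kind compared above is an ordinary multiple linear regression from texture and bulk density to water content at a fixed potential. A hedged sketch follows; the predictor set (sand %, clay %, bulk density), the synthetic data and the coefficients are hypothetical, not those of the study.

```python
import numpy as np

def fit_point_ptf(X, theta):
    """Least-squares fit of a point pedotransfer function
    theta = b0 + b1*sand + b2*clay + b3*BD (columns of X),
    returning the coefficient vector [b0, b1, b2, b3]."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
    return coef

def rmse(pred, obs):
    """Root mean square error, the reliability measure used to compare PTFs."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))
```

Fitting one such regression per water potential (e.g. -33 and -1500 kPa) and comparing RMSE values is exactly the point-PTF reliability test described in the abstract.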
Non-parametric causal assessment in deep-time geological records
NASA Astrophysics Data System (ADS)
Agasøster Haaga, Kristian; Diego, David; Brendryen, Jo; Hannisdal, Bjarte
2016-04-01
The interplay between climate variables and the timing of their feedback mechanisms are typically investigated using fully coupled climate system models. However, as we delve deeper into the geological past, mechanistic process models become increasingly uncertain, making nonparametric approaches more attractive. Here we explore the use of two conceptually different methods for nonparametric causal assessment in palaeoenvironmental archives of the deep past: convergent cross mapping (CCM) and information transfer (IT). These methods have the potential to capture interactions in complex systems even when data are sparse and noisy, which typically characterises geological proxy records. We apply these methods to proxy time series that capture interlinked components of the Earth system at different temporal scales, and quantify both the interaction strengths and the feedback lags between the variables. Our examples include the linkage between the ecological prominence of common planktonic species to oceanographic changes over the last ~65 million years, and global interactions and teleconnections within the climate system during the last ~800,000 years.
Time- and power-dependent operation of a parametric spin-wave amplifier
Brächer, T.; Heussner, F.; Pirro, P.; Fischer, T.; Geilen, M.; Heinz, B.; Lägel, B.; Serga, A. A.; Hillebrands, B.
2014-12-08
We present the experimental observation of the localized amplification of externally excited, propagating spin waves in a transversely in-plane magnetized Ni81Fe19 magnonic waveguide by means of parallel pumping. By employing micro-focused Brillouin light scattering spectroscopy, we analyze the dependence of the amplification on the applied pumping power and on the delay between the input spin-wave packet and the pumping pulse. We show that there are two different operation regimes: at large pumping powers, the spin-wave packet needs to enter the amplifier before the pumping is switched on in order to be amplified, while at low powers the spin-wave packet can arrive at any time during the pumping pulse.
Lucero-Acuña, Armando; Guzmán, Roberto
2015-10-15
A mathematical model of drug release that incorporates the simultaneous contributions of initial burst, nanoparticle degradation-relaxation and diffusion was developed and used to effectively describe the release of a kinase inhibitor and anticancer drug, PHT-427. The encapsulation of this drug into PLGA nanoparticles was performed by following the single emulsion-solvent evaporation technique, and the release was determined in phosphate buffer at pH 7.4 and 37 °C. The nanoparticle sizes were in the range of 162-254 nm. The experimental release profiles showed three well-defined phases: an initial fast drug release, followed by a slower nanoparticle degradation-relaxation release, and then a diffusion release phase. The effects of the most relevant controlled-release parameters, such as drug diffusivity, the initial burst constant, the nanoparticle degradation-relaxation constant, and the time to achieve a maximum rate of drug release, were evaluated by a parametric analysis. The theoretical release studies were corroborated experimentally by evaluating the cytotoxic effectiveness of the AKT/PDK1 inhibitor-loaded nanoparticles on BxPC-3 pancreatic cancer cells in vitro. These studies show that the encapsulated AKT/PDK1 inhibitor in the nanoparticles is more accessible and thus more effective when compared with the drug alone, indicating their potential use in chemotherapeutic applications.
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
2009-08-13
Application of simulated annealing (SA) and simplified generalized simulated annealing (SGSA) techniques to parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for minimization of an error function with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar for both methods; however, there are important differences in the final set of parameters.
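The SA side of such a comparison can be sketched in a few lines: perturb the parameter vector, always accept improvements, accept uphill moves with probability exp(-delta/T), and cool T geometrically. The step size, cooling schedule and quadratic test objective below are illustrative assumptions, not CATIVIC's actual error function.

```python
import math
import random

def simulated_annealing(error, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimal SA sketch for parameter optimization: Gaussian perturbations,
    Metropolis acceptance, geometric cooling. Returns the best parameter
    vector found and its error value."""
    rng = random.Random(seed)
    x, fx = list(x0), error(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = error(cand)
        # accept downhill moves always; uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

SGSA-style variants mainly change the visiting distribution and acceptance rule; the skeleton above is otherwise the same.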
Brida, Giorgio; Chekhova, Maria; Genovese, Marco; Ruo-Berchera, Ivano
2008-08-18
Spontaneous parametric down conversion (SPDC) has been largely exploited as a tool for the absolute calibration of photon-counting detectors, i.e., detectors registering very small photon fluxes. In [J. Opt. Soc. Am. B 23, 2185 (2006)] we derived a method for the absolute calibration of analog detectors using SPDC emission at higher photon fluxes, where the beam is seen as a continuum by the detector. Nevertheless, intrinsic limitations appear when the high-gain regime of SPDC is required to reach even larger photon fluxes. Here we show that stimulated parametric down conversion allows one to avoid this limitation, since stimulated photon fluxes are increased by the presence of the seed beam.
Quantum analysis of the nondegenerate optical parametric oscillator with injected signal
Coutinho dos Santos, B.; Dechoum, K.; Khoury, A.Z.; Silva, L.F. da; Olsen, M.K.
2005-09-15
In this paper we study the nondegenerate optical parametric oscillator with injected signal, both analytically and numerically. We develop a perturbation approach which allows us to find approximate analytical solutions, starting from the full equations of motion in the positive-P representation. We demonstrate the regimes of validity of our approximations via comparison with the full stochastic results. We find that, with reasonably low levels of injected signal, the system allows for demonstrations of quantum entanglement and the Einstein-Podolsky-Rosen paradox. In contrast to the normal optical parametric oscillator operating below threshold, these features are demonstrated with relatively intense fields.
Parametric uncertainty analysis of pulse wave propagation in a model of a human arterial network
Xiu Dongbin Sherwin, Spencer J.
2007-10-01
Reduced models of human arterial networks are an efficient approach to analyzing quantitative macroscopic features of human arterial flows. The justification for such models typically arises from the significantly long wavelength associated with the system in comparison to the lengths of the arteries in the networks. Although these types of models have been employed extensively and many issues associated with their implementation have been widely researched, the issue of data uncertainty has received comparatively little attention. As in many biological systems, a large amount of uncertainty exists in the values of the parameters associated with the models. Clearly, reliable assessment of the system behaviour cannot be made unless the effect of such data uncertainty is quantified. In this paper we present a study of parametric data uncertainty in reduced modelling of human arterial networks, which are governed by a hyperbolic system. The uncertain parameters are modelled as random variables and the governing equations for the arterial network therefore become stochastic. Stochastic hyperbolic systems of this type have not previously been studied systematically, owing to the difficulties introduced by the uncertainty, such as a potential change in the mathematical character of the system and the imposition of boundary conditions. We demonstrate how the application of a high-order stochastic collocation method based on the generalized polynomial chaos expansion, combined with a discontinuous Galerkin spectral/hp element discretization in physical space, can successfully simulate this type of hyperbolic system subject to uncertain inputs with bounds. Building upon a numerical study of the propagation of uncertainty and sensitivity in a simplified model with a single bifurcation, a systematic parameter sensitivity analysis is conducted on the wave dynamics in a multiple bifurcating human arterial network. Using the physical understanding of the dynamics of pulse waves in these types of
Computational meta-analysis of statistical parametric maps in major depression.
Arnone, Danilo; Job, Dominic; Selvaraj, Sudhakar; Abe, Osamu; Amico, Francesco; Cheng, Yuqi; Colloby, Sean J; O'Brien, John T; Frodl, Thomas; Gotlib, Ian H; Ham, Byung-Joo; Kim, M Justin; Koolschijn, P Cédric M P; Périco, Cintia A-M; Salvadore, Giacomo; Thomas, Alan J; Van Tol, Marie-José; van der Wee, Nic J A; Veltman, Dick J; Wagner, Gerd; McIntosh, Andrew M
2016-04-01
Several neuroimaging meta-analyses have summarized structural brain changes in major depression using coordinate-based methods. These methods might be biased toward brain regions where significant differences were found in the original studies. In this study, a novel voxel-based technique is implemented that estimates and meta-analyses between-group differences in grey matter from individual MRI studies, which are then applied to the study of major depression. A systematic review and meta-analysis of voxel-based morphometry studies were conducted comparing participants with major depression and healthy controls by using statistical parametric maps. Summary effect sizes were computed correcting for multiple comparisons at the voxel level. Publication bias and heterogeneity were also estimated and the excess of heterogeneity was investigated with metaregression analyses. Patients with major depression were characterized by diffuse bilateral grey matter loss in ventrolateral and ventromedial frontal systems extending into temporal gyri compared to healthy controls. Grey matter reduction was also detected in the right parahippocampal and fusiform gyri, hippocampus, and bilateral thalamus. Other areas included parietal lobes and cerebellum. There was no evidence of statistically significant publication bias or heterogeneity. The novel computational meta-analytic approach used in this study identified extensive grey matter loss in key brain regions implicated in emotion generation and regulation. Results are not biased toward the findings of the original studies because they include all available imaging data, irrespective of statistically significant regions, resulting in enhanced detection of additional areas of grey matter loss. © 2016 Wiley Periodicals, Inc.
Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A
2016-11-01
Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affected the implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed-effects parametric survival analysis was used to test the association between bruxism and risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13·0% (24/185) for bruxers and 4·6% (155/3364) for non-bruxers (P < 0·001). The statistical model showed that bruxism was a statistically significant risk factor for implant failure (HR 3·396; 95% CI 1·314, 8·777; P = 0·012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure. © 2016 John Wiley & Sons Ltd.
Analysis and Parametric Investigation of Active Open Cross Section Thin Wall Beams
NASA Astrophysics Data System (ADS)
Griffiths, James
The static behaviour of active Open Cross Section Thin Wall Beams (OCSTWB) with embedded Active/Macro Fibre Composites (AFCs/MFCs) has been investigated for the purpose of advancing the fundamental theory needed in the development of advanced smart structures. An efficient code that can analyze active OCSTWB using analytical equations has been studied. Various beam examples have been investigated in order to verify this recently developed analytical active OCSTWB analysis tool. The cross sectional stiffness constants and induced force, moments and bimoment predicted by this analytical code have been compared with those predicted by the 2-D finite element beam cross section analysis codes called the Variational Asymptotic Beam Sectional (VABS) analysis and the University of Michigan VABS (UM/VABS). Good agreement was observed between the results obtained from the analytical tool and VABS. The calculated cross sectional stiffness constants and induced force/moments, the constitutive relation and the six intrinsic static equilibrium equations for OCSTWB were all used together in a first-order accurate forward difference scheme in order to determine the average twist and deflections along the beam span. In order to further verify the analytical code, the static behaviour of a number of beam examples was investigated using 3-D Finite Element Analysis (FEA). For a particular cross section, the rigid body twist and displacements were minimized with the displacements of all the nodes in the 3-D FEA model that compose the cross section. This was done for a number of cross sections along the beam span in order to recover the global beam twist and displacement profiles from the 3-D FEA results. The global twist and deflections predicted by the analytical code agreed closely with those predicted by UM/VABS and 3-D FEA. The study was completed by a parametric investigation to determine the boundary conditions and the composite ply lay-ups of the active and passive plies that
Parametric study and performance analysis of hybrid rocket motors with double-tube configuration
NASA Astrophysics Data System (ADS)
Yu, Nanjia; Zhao, Bo; Lorente, Arnau Pons; Wang, Jue
2017-03-01
The practical implementation of hybrid rocket motors has historically been hampered by the slow regression rate of the solid fuel. In recent years, the research on advanced injector designs has achieved notable results in the enhancement of the regression rate and combustion efficiency of hybrid rockets. Following this path, this work studies a new configuration called double-tube characterized by injecting the gaseous oxidizer through a head end injector and an inner tube with injector holes distributed along the motor longitudinal axis. This design has demonstrated a significant potential for improving the performance of hybrid rockets by means of a better mixing of the species achieved through a customized injection of the oxidizer. Indeed, the CFD analysis of the double-tube configuration has revealed that this design may increase the regression rate over 50% with respect to the same motor with a conventional axial showerhead injector. However, in order to fully exploit the advantages of the double-tube concept, it is necessary to acquire a deeper understanding of the influence of the different design parameters in the overall performance. In this way, a parametric study is carried out taking into account the variation of the oxidizer mass flux rate, the ratio of oxidizer mass flow rate injected through the inner tube to the total oxidizer mass flow rate, and injection angle. The data for the analysis have been gathered from a large series of three-dimensional numerical simulations that considered the changes in the design parameters. The propellant combination adopted consists of gaseous oxygen as oxidizer and high-density polyethylene as solid fuel. Furthermore, the numerical model comprises Navier-Stokes equations, k-ε turbulence model, eddy-dissipation combustion model and solid-fuel pyrolysis, which is computed through user-defined functions. This numerical model was previously validated by analyzing the computational and experimental results obtained for
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
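The off-line/on-line decomposition described in this abstract can be illustrated on a toy affinely parametrized system A(mu) = A0 + mu*A1. The operators, snapshot parameters, and test parameter below are invented for illustration, and the a posteriori error bounds of the actual method are omitted:

```python
import numpy as np

# "Truth" problem A(mu) x = b with affine parameter dependence A(mu) = A0 + mu*A1.
n = 50
rng = np.random.default_rng(0)
A0 = np.diag(np.arange(1.0, n + 1.0))   # SPD base operator (invented for illustration)
A1 = np.eye(n)
b = rng.standard_normal(n)

# Off-line stage: snapshot solutions at selected parameter points, orthonormalized.
mus_train = [0.1, 0.5, 1.5, 5.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in mus_train])
W, _ = np.linalg.qr(snapshots)          # reduced basis W_N, here N = 4

# Parameter-independent reduced operators, precomputed once (off-line).
A0r, A1r, br = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ b

def rb_output(mu):
    """On-line stage: Galerkin projection gives an N x N solve, independent of n."""
    xr = np.linalg.solve(A0r + mu * A1r, br)
    return b @ (W @ xr)                 # compliant output s(mu) = b . x(mu)

mu_test = 2.0
s_rb = rb_output(mu_test)
s_truth = b @ np.linalg.solve(A0 + mu_test * A1, b)
```

Because the reduced operators are assembled off-line, the on-line cost per new parameter value depends only on N, which is the point of the method; the toy output here tracks the full-order output closely even for a parameter not in the training set.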
NASA Astrophysics Data System (ADS)
Fakhimi Derakhshan, Siavash; Fatehi, Alireza
2015-09-01
A non-monotonic Lyapunov function (NMLF) is deployed to design a robust H2 fuzzy observer-based control problem for discrete-time nonlinear systems in the presence of parametric uncertainties. The uncertain nonlinear system is presented as a Takagi and Sugeno (T-S) fuzzy model with norm-bounded uncertainties. The states of the fuzzy system are estimated by a fuzzy observer and the control design is established based on a parallel distributed compensation scheme. In order to derive a sufficient condition to establish the global asymptotic stability of the proposed closed-loop fuzzy system, an NMLF is adopted and an upper bound on the quadratic cost function is provided. The existence of a robust H2 fuzzy observer-based controller is expressed as a sufficient condition in the form of linear matrix inequalities (LMIs) and a sub-optimal fuzzy observer-based controller in the sense of cost bound minimization is obtained by utilising the aforementioned LMI optimisation techniques. Finally, the effectiveness of the proposed scheme is shown through an example.
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva
2017-06-01
Change point detection in multivariate time series is a complex task since, next to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving windows approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in the case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
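A much-simplified moving-window scan, in the spirit of (but far cruder than) DeCon or KCP, can already localize a correlation change point: compare the correlation matrices of the window before and after each candidate time and take the largest discrepancy. The window size and argmax rule below are illustrative choices:

```python
import numpy as np

def correlation_change_scan(X, w=50):
    """Scan a (time x variables) array for a correlation change point by
    comparing the correlation matrices estimated from the w samples before
    and after each candidate time point. Returns one score per time point
    (NaN where a full window does not fit). A moving-window heuristic only,
    not the DeCon or KCP algorithm."""
    T = X.shape[0]
    scores = np.full(T, np.nan)
    for t in range(w, T - w):
        r_pre = np.corrcoef(X[t - w:t].T)
        r_post = np.corrcoef(X[t:t + w].T)
        scores[t] = np.abs(r_pre - r_post).max()   # largest pairwise change
    return scores

# Two bivariate regimes: correlation ~0 for t < 200, ~0.9 afterwards.
rng = np.random.default_rng(3)
a = rng.standard_normal((200, 2))
z = rng.standard_normal(200)
b = np.column_stack([z, 0.9 * z + 0.45 * rng.standard_normal(200)])
X = np.vstack([a, b])
scores = correlation_change_scan(X)
t_hat = np.nanargmax(scores)            # estimated change point, near t = 200
```

Note that the mean of both regimes is zero here, so a mean-only detector would miss this change entirely; that is exactly the situation the compared methods are designed to handle.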
LONG TERM SURVIVAL FOLLOWING TRAUMATIC BRAIN INJURY: A POPULATION BASED PARAMETRIC SURVIVAL ANALYSIS
Fuller, Gordon Ward; Ransom, Jeanine; Mandrekar, Jay; Brown, Allen W
2017-01-01
Background Long term mortality may be increased following traumatic brain injury (TBI); however the degree to which survival could be reduced is unknown. We aimed to model life expectancy following post-acute TBI to provide predictions of longevity and quantify differences in survivorship with the general population. Methods A population based retrospective cohort study using data from the Rochester Epidemiology Project (REP) was performed. A random sample of patients from Olmsted County, Minnesota with a confirmed TBI between 1987 and 2000 was identified and vital status determined in 2013. Parametric survival modelling was then used to develop a model to predict life expectancy following TBI conditional on age at injury. Survivorship following TBI was also compared with the general population and age and gender matched non-head injured REP controls. Results 769 patients were included in complete case analyses. Median follow up time was 16.1 years (IQR 9.0–20.4) with 120 deaths occurring in the cohort during the study period. Survival after acute TBI was well represented by a Gompertz distribution. Victims of TBI surviving for at least 6 months post-injury demonstrated a much higher ongoing mortality rate compared to the US general population and non-TBI controls (hazard ratio 1·47, 95% CI 1·15–1·87). US general population cohort life table data was used to update the Gompertz model’s shape and scale parameters to account for cohort effects and allow prediction of life expectancy in contemporary TBI. Conclusions Survivors of TBI have decreased life expectancy compared to the general population. This may be secondary to the head injury itself or result from patient characteristics associated with both the propensity for TBI and increased early mortality. Post-TBI life expectancy estimates may be useful to guide prognosis, in public health planning, for actuarial applications and in the extrapolation of outcomes for TBI economic models. PMID:27165161
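The Gompertz survival model used in the abstract above can be sketched directly: the hazard grows exponentially with age, survival follows in closed form, and remaining life expectancy is the integral of the survival curve. The baseline parameter values and the proportional-hazards treatment of the reported hazard ratio of 1.47 are illustrative assumptions, not the fitted REP estimates:

```python
import math

def gompertz_survival(t, a, b):
    """S(t) for a Gompertz hazard h(t) = a*exp(b*t):
    S(t) = exp(-(a/b)*(exp(b*t) - 1))."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1.0))

def life_expectancy(a, b, hr=1.0, horizon=120.0, dt=0.1):
    """Expected remaining years from birth, as the numerical integral of the
    survival curve. A proportional hazard ratio hr scales the baseline
    hazard level, mirroring how an excess-mortality HR could be applied."""
    total, t = 0.0, 0.0
    while t < horizon:
        total += gompertz_survival(t, hr * a, b) * dt
        t += dt
    return total

# Illustrative adult parameters; hr = 1.47 mirrors the reported excess mortality.
le_general = life_expectancy(a=1e-4, b=0.09)
le_tbi = life_expectancy(a=1e-4, b=0.09, hr=1.47)
```

With these assumed parameters the model reproduces the qualitative conclusion: scaling the hazard by the reported ratio shaves several years off the predicted life expectancy.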
NASA Astrophysics Data System (ADS)
McKenna, C.; Berx, B.; Austin, W. E. N.
2016-01-01
The Faroe-Shetland Channel (FSC) is an important conduit for the poleward flow of Atlantic water towards the Nordic Seas and, as such, it plays an integral part in the Atlantic's thermohaline circulation. Mixing processes in the FSC are thought to result in an exchange of properties between the channel's inflow and outflow, with wider implications for this circulation; the nature of this mixing in the FSC is, however, uncertain. To constrain this uncertainty, we used a novel empirical method known as Parametric Optimum Multi-Parameter (POMP) analysis to objectively quantify the distribution of water masses in the channel in May 2013. This was achieved by using a combination of temperature and salinity measurements, as well as recently available nutrient and δ18O measurements. The outcomes of POMP analysis are in good agreement with established literature and demonstrate the benefits of representing all five water masses in the FSC. In particular, our results show the recirculation of Modified North Atlantic Water in the surface layers, and the pathways of Norwegian Sea Arctic Intermediate Water and Norwegian Sea Deep Water from north to south for the first time. In a final step, we apply the mixing fractions from POMP analysis to decompose the volume transport through the FSC by water mass. Despite a number of caveats, our study suggests that improved estimates of the volume transport of Atlantic inflow towards the Arctic and, thus, the associated poleward fluxes of salt and heat are possible. A new prospect to more accurately monitor the strength of the FSC branch of the thermohaline circulation emerges from this study.
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Strauch, K; Fimmers, R; Kurz, T; Deichmann, K A; Wienker, T F; Baur, M P
2000-01-01
We present two extensions to linkage analysis for genetically complex traits. The first extension allows investigators to perform parametric (LOD-score) analysis of traits caused by imprinted genes, that is, of traits showing a parent-of-origin effect. By specification of two heterozygote penetrance parameters, paternal and maternal origin of the mutation can be treated differently in terms of the probability of expression of the trait. Therefore, a single-disease-locus-imprinting model includes four penetrances instead of only three. In the second extension, parametric and nonparametric linkage analysis with two trait loci is formulated for a multimarker setting, optionally taking imprinting into account. We have implemented both methods in the program GENEHUNTER. The new tools, GENEHUNTER-IMPRINTING and GENEHUNTER-TWOLOCUS, were applied to human family data on sensitization to mite allergens. The data set comprises pedigrees from England, Germany, Italy, and Portugal. With single-disease-locus-imprinting MOD-score analysis, we find several regions that show at least suggestive evidence for linkage. Most prominently, a maximum LOD score of 4.76 is obtained near D8S511, for the English population, when a model that implies complete maternal imprinting is used. Parametric two-trait-locus analysis yields a maximum LOD score of 6.09 for the German population, occurring exactly at D4S430 and D18S452. The heterogeneity model specified for analysis assumes complete maternal imprinting at both disease loci. Altogether, our results suggest that the two novel formulations of linkage analysis provide valuable tools for genetic mapping of multifactorial traits. PMID:10796874
NASA Astrophysics Data System (ADS)
Andreadis, G. M.; Podias, A. K. M.; Tsiakaras, P. E.
In the present work, a model-based parametric analysis of the performance of a direct ethanol polymer electrolyte membrane fuel cell (DE-PEMFC) is conducted to investigate the effect of several parameters on the cell's operation. The analysis is based on a previously validated one-dimensional mathematical model that describes the operation of a DE-PEMFC in steady state. More precisely, the effect of several operational and structural parameters on (i) the ethanol crossover rate from the anode to the cathode side of the cell, (ii) the parasitic current generation (mixed potential formation) and (iii) the total cell performance is investigated. According to the model predictions, increasing the ethanol feed concentration leads to higher ethanol crossover rates, higher parasitic currents and higher mixed potential values, resulting in a decrease of the cell's power density. However, there is an optimum ethanol feed concentration (approximately 1.0 mol L-1) for which the cell power density reaches its highest value. The platinum (Pt) loading of the anode and the cathode catalytic layers strongly affects the cell performance. Higher Pt loadings of the catalytic layers increase the specific reaction surface area, resulting in higher cell power densities. An increase of the anode catalyst loading has a greater impact on the cell's power density than an equal increase of the cathode catalyst loading. Another interesting finding is that increasing the diffusion layers' porosity up to a certain extent improves the cell power density despite the fact that the parasitic current increases. This is explained by the fact that the reactants' concentrations over the catalysts are increased, leading to lower activation overpotential values, which are the main source of the total cell overpotentials. Moreover, the use of a thicker membrane leads to lower ethanol crossover rate, lower parasitic current and lower mixed potential values
Ginestet, Cedric E; Simmons, Andrew
2011-03-15
Network analysis has become a tool of choice for the study of functional and structural Magnetic Resonance Imaging (MRI) data. Little research, however, has investigated connectivity dynamics in relation to varying cognitive load. In fMRI, correlations among slow (<0.1 Hz) fluctuations of blood oxygen level dependent (BOLD) signal can be used to construct functional connectivity networks. Using an anatomical parcellation scheme, we produced undirected weighted graphs linking 90 regions of the brain representing major cortical gyri and subcortical nuclei, in a population of healthy adults (n=43). Topological changes in these networks were investigated under different conditions of a classical working memory task - the N-back paradigm. A mass-univariate approach was adopted to construct statistical parametric networks (SPNs) that reflect significant modifications in functional connectivity between N-back conditions. Our proposed method allowed the extraction of 'lost' and 'gained' functional networks, providing concise graphical summaries of whole-brain network topological changes. Robust estimates of functional networks are obtained by pooling information about edges and vertices over subjects. Graph thresholding is therefore here supplanted by inference. The analysis proceeds by firstly considering changes in weighted cost (i.e. mean between-region correlation) over the different N-back conditions and secondly comparing small-world topological measures integrated over network cost, thereby controlling for differences in mean correlation between conditions. The results are threefold: (i) functional networks in the four conditions were all found to satisfy the small-world property and cost-integrated global and local efficiency levels were approximately preserved across the different experimental conditions; (ii) weighted cost considerably decreased as working memory load increased; and (iii) subject-specific weighted costs significantly predicted behavioral
NASA Astrophysics Data System (ADS)
Csörgő, T.; Kittel, W.; Metzger, W. J.; Novák, T.
2008-05-01
A parametrization of the Bose-Einstein correlation function of pairs of identical pions produced in hadronic e+e- annihilation is proposed within the framework of a model (the τ-model) in which space-time and momentum space are very strongly correlated. Using information from the Bose-Einstein correlations as well as from single-pion spectra, it is then possible to reconstruct the space-time evolution of pion production.
Parametric sonars for seafloor characterization
NASA Astrophysics Data System (ADS)
Caiti, Andrea; Bergem, Oddbjorn; Dybedal, Johnny
1999-12-01
Parametric sonars are instruments capable of transmitting acoustic signals in the water with a very narrow beam and almost no sidelobes. These features are exploited in this paper to define a methodology for quantitative estimation of the geo-acoustic and morphological properties of the uppermost seafloor sediment layer. The three major components of the approach are the parametric instrument itself; the modelling of the forward-propagation problem, with the use of the Kirchhoff approximation for surface scattering and of the small-perturbation theory for the volume scattering; and the definition of a criterion for comparison between data and model predictions, which is accomplished by a generalized time-frequency analysis. In this way the estimation becomes one of a model-based identification, or a model-based inverse problem. Results from a field trial in a shallow water area of the Mediterranean are shown, and compared with independently gathered ground truth.
Parkin, D; Rice, N; Sutton, M
1999-08-01
Patterns of self-reported morbidity and general practitioner (GP) utilization exhibit complex age, sex and time heterogeneity. Underlying patterns are often obscured by data which are overly 'rough' because of noise associated with adjacent-year fluctuations. In this paper we describe methods to obtain smoothed estimates of age, time and birth-cohort effects using data from the General Household Survey (GHS), covering the period 1984-1995/6 inclusive. The methods outlined offer powerful analytic tools for investigating complex profiles or trends, particularly over age or time. The relationships of the morbidity and GP utilization measures with age, sex and survey year characteristics are estimated non-parametrically using roughness penalized least squares (RPLS). A semi-parametric extension of this model is used to estimate the effect of the morbidity variables on GP utilization. Tests are employed for various forms of age and time heterogeneity, including birth-cohort effects. Linear age specifications are rejected for all variables, and evidence is found of time heterogeneity in one of the morbidity measures--limiting long-standing illness (LS)--and in GP utilization. The advantages of employing non- and semi-parametric estimation in the presence of complex relationships, such as those observed for age and time profiles, are discussed. Adoption of these techniques by applied econometricians working in health economics is encouraged.
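Roughness penalized least squares of the kind described above can be sketched as a Whittaker-style smoother with a second-difference penalty: the fit trades fidelity to the noisy observations against roughness of the estimated profile. The penalty weight and the synthetic age profile below are illustrative, not the GHS specification:

```python
import numpy as np

def rpls_smooth(y, lam=100.0):
    """Roughness-penalised least squares (Whittaker-style) smoother:
    minimise ||y - f||^2 + lam * ||D2 f||^2, where D2 is the
    second-difference operator. The minimiser solves
    (I + lam * D2'D2) f = y."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)      # (n-2) x n second-difference matrix
    f = np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), y)
    return f

# Noisy synthetic age profile: a smooth U-shaped trend plus survey noise.
rng = np.random.default_rng(0)
ages = np.arange(16, 75)
truth = 0.2 + 0.0001 * (ages - 40.0) ** 2
y = truth + rng.normal(0.0, 0.05, size=ages.size)
f = rpls_smooth(y, lam=100.0)
```

As lam grows the fit approaches a straight line (zero second differences everywhere), which is exactly the linear age specification the paper tests and rejects; small lam reproduces the raw, rough profile.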
Li, Xiaotong; Santago, Anthony C; Vidt, Meghan E; Saul, Katherine R
2016-09-06
Continuous time-series data are frequently distilled into single values and analyzed using discrete statistical methods, underutilizing large datasets. Statistical parametric mapping (SPM) allows hypothesis tests over the entire time series, but its consistency with discrete analyses of kinematic data is unclear. We applied SPM to evaluate the effect of load and postural demands during reaching on thoracohumeral kinematics in older and young adults, and examined consistency between one-dimensional SPM and discrete analyses of the same dataset. We hypothesized that older adults would choose postures that bring the humerus anterior to the frontal plane (towards flexion) even for low-demand tasks, and that SPM would reveal differences persisting over larger temporal portions of the reach. Ten healthy older (72.4 ± 3.1 yrs) and 16 young (22.9 ± 2.5 yrs) adults reached upward and forward with high and low loads. SPM and discrete t-tests were used to analyze group effects for elevation plane, elevation, and axial rotation joint angles and velocity. Older adults used more positive (anterior) elevation plane and less elevated postures to initiate and terminate reaching (p < 0.008), with long-duration differences during termination. When reaching upward, differences in elevation persisted over longer temporal periods at midreach for high loads (32-58% of reach) compared to low load (41-45%). SPM and discrete analyses were consistent, but SPM permitted clear identification of the temporal periods over which differences persisted, while discrete methods allowed analysis of extracted values, like ROM. This work highlights the utility of SPM for analyzing kinematic time-series data, and emphasizes the importance of task selection when assessing age-related changes in movement. Copyright © 2016 Elsevier Ltd. All rights reserved.
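The pointwise statistic underlying one-dimensional SPM can be sketched as follows. Note that real SPM packages (e.g. spm1d) set the critical threshold via random field theory to control the family-wise error over the whole curve; the fixed threshold and synthetic curves here are purely illustrative:

```python
import numpy as np

def spm_t_curve(group_a, group_b):
    """Pointwise two-sample t statistic over normalised time (0-100% of
    reach). Rows are subjects, columns are time points. This illustrates
    only the pointwise statistic, not the random-field-theory threshold
    used by actual SPM implementations."""
    ma, mb = group_a.mean(axis=0), group_b.mean(axis=0)
    va, vb = group_a.var(axis=0, ddof=1), group_b.var(axis=0, ddof=1)
    na, nb = group_a.shape[0], group_b.shape[0]
    return (ma - mb) / np.sqrt(va / na + vb / nb)

# Synthetic joint-angle curves: the groups differ only around mid-reach.
rng = np.random.default_rng(1)
time = np.linspace(0.0, 1.0, 101)
bump = np.exp(-((time - 0.5) / 0.06) ** 2)
young = rng.normal(0.0, 1.0, (16, 101))
older = rng.normal(0.0, 1.0, (10, 101)) + 3.0 * bump
tcurve = spm_t_curve(older, young)
supra = np.where(np.abs(tcurve) > 3.5)[0]   # crude fixed threshold (illustrative)
```

The suprathreshold indices form a cluster around mid-reach, which is the kind of temporal-extent information that a single discrete summary value (e.g. peak angle or ROM) cannot convey.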
Computation of Parametric Adaptive Fuzzy Controller for Wood Drying System
NASA Astrophysics Data System (ADS)
Situmorang, Zakarias; Wardoyo, Retantyo; Hartati, Sri; Istiyanto, Jazi Eko
2009-08-01
The paper reports the computation of a parametric adaptive fuzzy controller for a wood drying system. The parameter of the adaptive fuzzy controller is the control period of the system, i.e., how long the actuator must remain on to raise the drying temperature or drying humidity. The parametric controller was implemented to control the wood drying process in a prototype chamber with solar energy as the heat source. The actuators of the system are a heater, a damper, and a sprayer. The parameters were obtained by statistical analysis of the measurement data and then implemented through an adaptive mechanism, since the membership functions of the system's control variables are difficult to relate directly to their effect on drying temperature and humidity. The results of the adaptive fuzzy control are presented graphically; the control system is able to adapt to changes in drying humidity according to the schedule of the wood drying system.
Brace, Christopher L
2011-07-01
Design and validate an efficient dual-slot coaxial microwave ablation antenna that produces an approximately spherical heating pattern to match the shape of most abdominal and pulmonary tumor targets. A dual-slot antenna geometry was utilized for this study. Permutations of the antenna geometry using proximal and distal slot widths from 1 to 10 mm separated by 1-20 mm were analyzed using finite-element electromagnetic simulations. From this series, the optimal antenna geometry was selected using a two-term sigmoidal objective function to minimize antenna reflection coefficient and maximize the diameter-to-length aspect ratio of heat generation. Sensitivities to variations in tissue properties and insertion depth were also evaluated in numerical models. The optimal dual-slot geometry of the parametric analysis was then fabricated from semirigid coaxial cable. Antenna reflection coefficients at various insertion depths were recorded in ex vivo bovine livers and compared to numerical results. Ablation zones were then created by applying 50 W for 2-10 min in simulations and ex vivo livers. Mean zone diameter, length, aspect ratio, and reflection coefficients before and after heating were then compared to a conventional monopole antenna using ANOVA with post-hoc t-tests. Statistical significance was indicated for P <0.05. Antenna performance was highly sensitive to dual-slot geometry. The best-performing designs utilized a proximal slot width of 1 mm, a distal slot width of 4 ± 1 mm, and a separation of 8 ± 1 mm. These designs were characterized by an active choking mechanism that focused heating to the distal tip of the antenna. A dual-band resonance was observed in the optimal design, with a minimum reflection coefficient of -20.9 dB at 2.45 and 1.25 GHz. Total operating bandwidth was greater than 1 GHz, but the desired heating pattern was achieved only near 2.45 GHz. As a result, antenna performance was robust to changes in insertion depth and
Dang, Wei; Mao, Pengcheng; Weng, Yuxiang
2013-07-01
We report an improved setup of femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) with a 210 fs temporal response. The system employs a Cassegrain objective to collect and focus fluorescence photons, which eliminates the interference from the coherent photons in the fluorescence amplification by temporal separation of the coherent photons and the fluorescence photons. The gain factor of the Cassegrain objective-assisted FNOPAS is characterized as 1.24 × 10⁵ for Rhodamine 6G. Spectral corrections have been performed on the transient fluorescence spectra of Rhodamine 6G and Rhodamine 640 in ethanol by using an intrinsic calibration curve derived from the spectrum of superfluorescence, which is generated from the amplification of the vacuum quantum noise. The validity of spectral correction is illustrated by comparisons of spectral shape and peak wavelength between the corrected transient fluorescence spectra of these two dyes acquired by FNOPAS and their corresponding standard reference spectra collected by the commercial streak camera. The transient fluorescence spectra of the Rhodamine 6G were acquired in an optimized phase match condition, which gives a deviation in the peak wavelength between the retrieved spectrum and the reference spectrum of 1.0 nm, while those of Rhodamine 640 were collected in a non-optimized phase match condition, leading to a deviation in a range of 1.0-3.0 nm. Our results indicate that the improved FNOPAS can be a reliable tool in the measurement of transient fluorescence spectrum for its high temporal resolution and faithfully corrected spectrum.
Zhu, Yuankai; Feng, Jianhua; Wu, Shuang; Hou, Haifeng; Ji, Jianfeng; Zhang, Kai; Chen, Qing; Chen, Lin; Cheng, Haiying; Gao, Liuyan; Chen, Zexin; Zhang, Hong; Tian, Mei
2017-08-01
PET with ¹⁸F-FDG has been used for presurgical localization of epileptogenic foci; however, in nonsurgical patients, the correlation between cerebral glucose metabolism and clinical severity has not been fully understood. The aim of this study was to evaluate the glucose metabolic profile using ¹⁸F-FDG PET/CT imaging in patients with epilepsy. Methods: One hundred pediatric epilepsy patients who underwent ¹⁸F-FDG PET/CT, MRI, and electroencephalography examinations were included. Fifteen age-matched controls were also included. ¹⁸F-FDG PET images were analyzed by visual assessment combined with statistical parametric mapping (SPM) analysis. The absolute asymmetry index (|AI|) was calculated in patients with regional abnormal glucose metabolism. Results: Visual assessment combined with SPM analysis of ¹⁸F-FDG PET images detected more patients with abnormal glucose metabolism than visual assessment only. The |AI| significantly positively correlated with seizure frequency (P < 0.01) but negatively correlated with the time since last seizure (P < 0.01) in patients with abnormal glucose metabolism. The only significant contributing variable to the |AI| was the time since last seizure, in patients both with hypometabolism (P = 0.001) and with hypermetabolism (P = 0.005). For patients with either hypometabolism (P < 0.01) or hypermetabolism (P = 0.209), higher |AI| values were found in those with drug resistance than with seizure remission. In the post-1-y follow-up PET studies, a significant change of |AI| (%) was found in patients with clinical improvement compared with those with persistence or progression (P < 0.01). Conclusion: ¹⁸F-FDG PET imaging with visual assessment combined with SPM analysis could provide cerebral glucose metabolic profiles in nonsurgical epilepsy patients. |AI| might be used for evaluation of clinical severity and progress in these patients. Patients with a prolonged period of seizure freedom may have more subtle (or no) metabolic
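The abstract does not give the formula for |AI|; a common definition in PET asymmetry studies divides the absolute left-right difference in regional uptake by the mean of the two sides. A minimal sketch under that assumption (the uptake values are hypothetical, and the paper's exact definition may differ):

```python
def absolute_asymmetry_index(left: float, right: float) -> float:
    """Absolute asymmetry index |AI| (%) between homologous regions,
    using the common form |AI| = |L - R| / ((L + R) / 2) * 100.
    This is an assumed definition, not necessarily the paper's.
    """
    mean = (left + right) / 2.0
    if mean == 0:
        raise ValueError("mean uptake is zero; |AI| is undefined")
    return abs(left - right) / mean * 100.0

# Hypothetical SUV-like values for a region and its mirror region
print(round(absolute_asymmetry_index(4.2, 5.0), 2))  # → 17.39
```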
Entanglement analysis of two-mode Gaussian states in a parametric down-converter
NASA Astrophysics Data System (ADS)
Tahira, Rabia; Ge, Guoqin; Ikram, Manzoor
2017-04-01
Parametric down-conversion has been studied as a source of entangled radiation (Lee et al 2008 J. Phys. B: At. Mol. Opt. Phys. 41 145504). We investigate and quantify the entanglement of this system when the initial cavity modes are taken as two-mode Gaussian states. We study the effect of nonclassicality, purity, noise and leakage through the cavity modes on the two-mode Gaussian state entanglement.
NASA Astrophysics Data System (ADS)
Guoyan, Liu; Kun, Gao; Xuefeng, Liu; Guoqiang, Ni
2016-10-01
We report simulation and measurement results for near-field spatial scattering spectra around nanoparticles. Our measurements and simulations indicate that Parametric Indirect Microscopic Imaging can image the near-field spatial scattering at a much larger distance from the scattering source of the particle under measurement, whereas this part of the spatial scattering is lost in conventional microscopy. Both FDTD modeling and measurement provided evidence that the parameters of the indirect optical wave vector have higher sensitivity to near-field scattering.
Vieira, Marcus Fraga; de Brito, Ademir Alves; Lehnen, Georgia Cristina; Rodrigues, Fábio Barbosa
2017-02-27
This study analyzed gait initiation (GI) on inclined surfaces with 68 young adult subjects of both sexes. Ground reaction forces and moments were collected using two AMTI force platforms, of which one was in a horizontal position and the other was inclined by 8% in relation to the horizontal plane. Departing from a standing position, each participant executed three trials in the following conditions: horizontal position (HOR), inclined position at ankle dorsi-flexion (UP), and inclined position at ankle plantar-flexion (DOWN). Statistical parametric mapping analysis was performed over the entire center of pressure (COP) and center of mass (COM) time series. COP excursion did not show significant differences in the medial-lateral (ML) direction in either inclined condition, but it was greater in the anterior-posterior (AP) direction for both inclined conditions. COP velocities were smaller in discrete portions of GI for the UP and DOWN conditions. COM displacement was greater in the ML direction during anticipatory postural adjustments (APA) in the UP condition, and the COM moved faster in the ML direction during APA in the UP condition but slower at the end of GI for both the UP and the DOWN conditions. The COP-COM vector showed a greater angle in the DOWN condition. We observed changes for COP and COM in GI in both the UP and the DOWN conditions, with the latter showing changes over a greater extent of the task. Both the UP and the DOWN conditions showed increased COM displacement and velocity. The predominant characteristic during GI on inclined surfaces, including APA, appears to be the displacement of the COM.
Neelakantan, S; Veng-Pedersen, P
2005-11-01
A novel numerical deconvolution method is presented that enables the estimation of drug absorption rates under time-variant disposition conditions. The method involves two components. (1) A disposition decomposition-recomposition (DDR) enabling exact changes in the unit impulse response (UIR) to be constructed based on centrally based clearance changes iteratively determined. (2) A non-parametric, end-constrained cubic spline (ECS) input response function estimated by cross-validation. The proposed DDR-ECS method compensates for disposition changes between the test and the reference administrations by using a "beta" clearance correction based on DDR analysis. The representation of the input response by the ECS method takes into consideration the complex absorption process and also ensures physiologically realistic approximations of the response. The stability of the new method to noisy data was evaluated by comprehensive simulations that considered different UIRs, various input functions, clearance changes and a novel scaling of the input function that includes the "flip-flop" absorption phenomena. The simulated input response was also analysed by two other methods and all three methods were compared for their relative performances. The DDR-ECS method provides better estimation of the input profile under significant clearance changes but tends to overestimate the input when there were only small changes in the clearance.
Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony
2009-01-01
Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated are automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours are used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
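The n-factor idea can be sketched as a greedy builder of a 2-way (pairwise) test suite: keep any candidate case that covers at least one still-uncovered pair of factor levels. Dedicated covering-array generators are far more efficient than this lexicographic greedy pass, and the factor names and levels below are hypothetical stand-ins for the vehicle simulation parameters.

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedily build a suite covering every 2-way combination of factor
    levels. `factors` maps factor name -> list of levels. A sketch of
    n-factor=2 coverage, not a production covering-array generator."""
    names = list(factors)
    # Every (factor, level) pair combination that must appear somewhere.
    required = {
        ((a, la), (b, lb))
        for a, b in combinations(names, 2)
        for la in factors[a]
        for lb in factors[b]
    }
    suite = []
    for case in product(*(factors[n] for n in names)):
        if not required:
            break
        assignment = dict(zip(names, case))
        covered = {
            ((a, assignment[a]), (b, assignment[b]))
            for a, b in combinations(names, 2)
        }
        if covered & required:      # keep the case only if it adds coverage
            required -= covered
            suite.append(assignment)
    return suite

# Hypothetical simulation factors for a hover-vehicle model
factors = {"thrust_bias": [-1, 0, 1], "wind": ["calm", "gusty"], "mass_kg": [900, 1000]}
suite = pairwise_suite(factors)
print(len(suite), "cases instead of", 3 * 2 * 2)
```

On larger parameter spaces the savings grow rapidly: full factorials explode combinatorially, while the number of 2-way pairs grows only quadratically in the number of factors.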
NASA Astrophysics Data System (ADS)
Sharma, Raghav; Sisodia, Naveen; Dürrenfeld, Philipp; Åkerman, Johan; Muduli, Pranaba Kishor
2017-07-01
We report on time-domain stability of the parametric synchronization in a spin-torque nano-oscillator (STNO) based on a magnetic tunnel junction. Time-domain measurements of the instantaneous frequency (fi) of a parametrically synchronized STNO show random short-term unlocking of the STNO signal for low injected radio-frequency (RF) power, which cannot be revealed in time-averaged frequency domain measurements. Macrospin simulations reproduce the experimental results and reveal that the random unlocking during synchronization is driven by thermal fluctuations. We show that by using a high injected RF power, random unlocking of the STNO can be avoided. However, a perfect synchronization characterized by complete suppression of phase noise, so-called phase noise squeezing, can be obtained only at a significantly higher RF power. Our macrospin simulations suggest that a lower temperature and a higher positive ratio of the fieldlike torque to the spin transfer torque reduce the threshold RF power required for phase noise squeezing under parametric synchronization.
Parametric Identification of Systems Via Linear Operators.
1978-09-01
A general parametric identification /approximation model is developed for the black box identification of linear time invariant systems in terms of... parametric identification techniques derive from the general model as special cases associated with a particular linear operator. Some possible
Plodinec, M.J.
1998-11-20
After being filled with glass, DWPF canistered waste forms will be welded closed using an upset resistance welding process. This final closure weld must be leaktight, and must remain so during extended storage at SRS. As part of the DWPF Startup Test Program, a parametric study (DWPF-WP-24) has been performed to determine a range of welder operating parameters which will produce acceptable welds. The parametric window of acceptable welds defined by this study is 90,000 ± 15,000 lb of force, 248,000 ± 22,000 amps of current, and 95 ± 15 cycles (at 60 cps) for the time of application of the current.
Eberhard, B.J.; Harbour, J.R.; Plodinec, M.J.
1994-06-01
As part of the DWPF Startup Test Program, a parametric study has been performed to determine a range of welder operating parameters which will produce acceptable final welds for canistered waste forms. The parametric window of acceptable welds defined by this study is 90,000 ± 15,000 lb of force, 248,000 ± 22,000 amps of current, and 95 ± 15 cycles (at 60 cps) for the time of application of the current.
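A parametric window like this reduces at run time to a simple range check on each weld's recorded parameters. The sketch below just encodes the limits quoted above; the variable and key names are illustrative, not part of any DWPF software.

```python
# Acceptance window from the parametric study: force 90,000 ± 15,000 lb,
# current 248,000 ± 22,000 A, time 95 ± 15 cycles (at 60 cps).
WINDOW = {
    "force_lb": (75_000, 105_000),
    "current_a": (226_000, 270_000),
    "time_cycles": (80, 110),
}

def weld_acceptable(force_lb, current_a, time_cycles):
    """True if every recorded parameter lies inside its window."""
    values = {"force_lb": force_lb, "current_a": current_a, "time_cycles": time_cycles}
    return all(lo <= values[k] <= hi for k, (lo, hi) in WINDOW.items())

print(weld_acceptable(90_000, 248_000, 95))   # nominal point → True
print(weld_acceptable(60_000, 248_000, 95))   # force below window → False
```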
Parametric Interpretation in Yorktalk.
ERIC Educational Resources Information Center
Ogden, Richard
The method of parametric interpretation used in the computer program "Yorktalk," software that creates synthetic parameter files from phonological representations of speech, is explained. First, the design of the program is described, and the concept of "exponency" in prosodic analysis is explained as it is applied in the…
NASA Astrophysics Data System (ADS)
Safaei, Mohsen; Meneghini, R. Michael; Anton, Steven R.
2017-09-01
Total knee arthroplasty is a common procedure in the United States; it has been estimated that about 4 million people are currently living with a primary knee replacement in this country. Despite major improvements in material properties, implant design, and surgical techniques, some implants fail a few years after surgery. A lack of information about the in vivo kinetics of the knee prevents the establishment of a correlated intra- and postoperative loading pattern in knee implants. In this study, a conceptual design of an ultra-high-molecular-weight (UHMW) knee bearing with embedded piezoelectric transducers is proposed, which is able to measure the reaction forces from knee motion as well as harvest energy to power embedded electronics. A simplified geometry consisting of a disk of UHMW with a single embedded piezoelectric ceramic is used in this work to study the general parametric trends of an instrumented knee bearing. A combined finite element and electromechanical modeling framework is employed to investigate the fatigue behavior of the instrumented bearing and the electromechanical performance of the embedded piezoelectric transducer. The model is validated through experimental testing and utilized for further parametric studies. These consist of investigating the effects of several dimensional and piezoelectric material parameters on the durability of the bearing and the electrical output of the transducers. Among all the parameters, it is shown that adding large fillet radii results in noticeable improvement in the fatigue life of the bearing. Additionally, the design is highly sensitive to the depth of the piezoelectric pocket. Finally, using PZT-5H piezoceramics, higher voltage and slightly enhanced fatigue life are achieved.
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
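The linear-dynamical-system view of personnel flow can be sketched as repeated multiplication of a headcount vector by a column-stochastic transition matrix, x[k+1] = T x[k]. The matrix entries, function categories, and headcounts below are invented for illustration and are not taken from the paper's model.

```python
import numpy as np

# Hypothetical per-period transition probabilities between job functions.
# Each column sums to 1, so total headcount is conserved.
T = np.array([
    [0.85, 0.05, 0.00],   # stay in / move into parametric cost analysis
    [0.10, 0.90, 0.05],   # engineering support
    [0.05, 0.05, 0.95],   # management / other
])
x = np.array([20.0, 10.0, 5.0])   # initial staffing by function

for _ in range(12):               # simulate 12 review periods
    x = T @ x

print(np.round(x, 1))             # projected staffing profile
```

Predicting the staffing profile needed "at the desired time" is then a matter of iterating (or diagonalizing) T out to that period.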
Non-parametric trend analysis of water quality data of rivers in Kansas
Yu, Y.-S.; Zou, S.; Whittemore, D.
1993-01-01
Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basin-wide trends are non-homogeneous. © 1993.
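The abstract does not name the four non-parametric methods used; the Mann-Kendall test is the most widely applied trend test of this kind in water-quality work, so a minimal sketch of it may help (no tie correction, and the annual chloride means below are hypothetical):

```python
from itertools import combinations
from math import sqrt

def mann_kendall_s(series):
    """Mann-Kendall S statistic: positive for upward trends, negative
    for downward. Minimal sketch without tie correction."""
    return sum((b > a) - (b < a) for a, b in combinations(series, 2))

def mann_kendall_z(series):
    """Normal approximation of the test statistic (no tie correction)."""
    n = len(series)
    s = mann_kendall_s(series)
    var = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / sqrt(var)
    if s < 0:
        return (s + 1) / sqrt(var)
    return 0.0

chloride = [48, 45, 46, 42, 40, 41, 38, 36]   # hypothetical annual means, mg/L
print(mann_kendall_s(chloride))               # → -24 (downward trend)
```

Because the test uses only the signs of pairwise differences, it is robust to outliers and to non-normal concentration distributions, which is why non-parametric methods suit water-quality records.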
Arisholm, Gunnar
2007-05-14
Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures shows the combinations of material parameters and pulse lengths for which high performance can be achieved, and can serve as a basis for a design.
Not Available
1983-02-01
Parametric analyses are described of an alcohol fuels plant producing 50 million gal/y of ethanol by the high temperature dilute acid hydrolysis of aspen wood or corn stover. Analyses were carried out using a computer simulation. The simulation performs material and energy balances, estimates capital and operating costs, and calculates the selling price of ethanol. Pretreatments and delignification are shown to be justified only if the value of lignin is greater than $0.40/lb. Sensitivity analyses determine the effect of hydrolysis conditions on yield and selling price. Sugar concentration prior to fermentation is shown not to be justified.
Parametric Analysis of Cyclic Phase Change and Energy Storage in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Hall, Carsie A., III; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.
1997-01-01
A parametric study on cyclic melting and freezing of an encapsulated phase change material (PCM), integrated into a solar heat receiver, has been performed. The cyclic nature of the present melt/freeze problem is relevant to latent heat thermal energy storage (LHTES) systems used to power solar Brayton engines in microgravity environments. Specifically, a physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) project was developed. Multi-conjugate effects such as the convective fluid flow of a low-Prandtl-number fluid, coupled with thermal conduction in the phase change material, containment tube and working fluid conduit were accounted for in the model. A single-band thermal radiation model was also included to quantify reradiative energy exchange inside the receiver and losses through the aperture. The eutectic LiF-CaF2 was used as the PCM and a mixture of He/Xe was used as the working fluid coolant. A modified version of the computer code HOTTube was used to generate results in the two-phase regime. Results indicate that receiver gas exit temperatures are highly sensitive to parametric changes in receiver gas inlet temperature and receiver heat input.
NASA Astrophysics Data System (ADS)
Cosmidis, J.; Heggy, E.; Clifford, S. M.
2007-12-01
Laboratory dielectric characterization of ice-dust mixtures is crucial for the quantitative analysis of radar sounding data, as in the case of the MARSIS and SHARAD experiments. Understanding the range of the dielectric properties of the Martian northern polar layered deposits, as well as their geographical and vertical distribution, results in better topographical mapping of the basement material below the northern polar cap and helps constrain the ambiguities in the identification of layering and of any potential subglacial melting. To achieve this task, we constructed first-order modeled maps of the surface dielectric properties of the NPLD. We first used the recent Mars Global Surveyor Thermal Emission Spectrometer (TES) thermal inertia observations to derive a map of the dust mass fraction in the ice at the top of the permanent cap. We then used parametric laboratory measurements of the dielectric properties of Martian polar ice analogs at various temperatures, radar frequencies, and dust mass fractions and compositions to obtain the parametric dielectric maps. Thermal inertia maps have been derived from recent TES observations of the surface temperatures of Mars taken over three Mars-years, from orbit 1583 to 24346. Laboratory dielectric characterization of ice-dust mixtures was performed using TES dust calibration samples provided by the ARES group at NASA JSC. Our maps suggest that the surface dielectric properties of the northern polar cap range from 2.72 to 3.23 in the 2-20 MHz band for a dust inclusion typical of Martian basalt. Parametric maps of loss tangent and penetration depth for several dust types will be presented at the conference.
Mao, Pengcheng; Wang, Zhuan; Dang, Wei; Weng, Yuxiang
2015-12-15
Superfluorescence appears as an intense background in femtosecond time-resolved fluorescence noncollinear optical parametric amplification spectroscopy, which severely interferes with the reliable acquisition of the time-resolved fluorescence spectra, especially for an optically dilute sample. Superfluorescence originates from the optical amplification of the vacuum quantum noise, which is inevitably concomitant with the amplified fluorescence photons during the optical parametric amplification process. Here, we report the development of a femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectrometer assisted with a 32-channel lock-in amplifier for efficient rejection of the superfluorescence background. With this spectrometer, the superfluorescence background signal can be significantly reduced to 1/300-1/100 when the seeding fluorescence is modulated. An integrated 32-bundle optical fiber is used as a linear array light receiver connected to 32 photodiodes in one-to-one mode, and the photodiodes are further coupled to a home-built 32-channel synchronous digital lock-in amplifier. As an implementation, time-resolved fluorescence spectra for rhodamine 6G dye in ethanol solution at an optically dilute concentration of 10⁻⁵ M excited at 510 nm with an excitation intensity of 70 nJ/pulse have been successfully recorded, and the detection limit at a pump intensity of 60 μJ/pulse was determined as about 13 photons/pulse. Concentration dependent redshift starting at 30 ps after the excitation in time-resolved fluorescence spectra of this dye has also been observed, which can be attributed to the formation of the excimer at a higher concentration, while the blueshift in the earlier time within 10 ps is attributed to the solvation process.
NASA Technical Reports Server (NTRS)
To, Wing H.
2005-01-01
Quantum optical experiments require all the components involved to be extremely stable relative to each other. The stability can be "measured" by using an interferometric experiment. A pair of coherent photons produced by parametric down-conversion can be chosen to be orthogonally polarized initially. By rotating the polarization of one of the wave packets, they can be recombined at a beam splitter such that interference will occur. Theoretically, the interference will create four terms in the wave function: two terms with both photons going to the same detector, and two terms with the photons going to different detectors. However, the latter will cancel each other out, so no photons will arrive at the two detectors simultaneously under ideal conditions. The stability of the test-bed can then be inferred from the dependence of the coincidence count on the rotation angle.
A parametric study of supersonic laminar flow for swept wings using linear stability analysis
NASA Technical Reports Server (NTRS)
Cummings, Russell M.; Garcia, Joseph A.; Tu, Eugene L.
1995-01-01
A parametric study to predict the extent of laminar flow on the upper surface of a generic swept-back wing (NACA 64A010 airfoil section) at supersonic speeds was conducted. The results were obtained by using surface pressure predictions from an Euler/Navier-Stokes computational fluid dynamics code coupled with a boundary layer code, which predicts detailed boundary layer profiles, and finally with a linear stability code to determine the extent of laminar flow. The parameters addressed are Reynolds number, angle of attack, and leading-edge wing sweep. The results of this study show that an increase in angle of attack, for specific Reynolds numbers, can actually delay transition. Therefore, higher lift capability, caused by the increased angle of attack, as well as a reduction in viscous drag due to the delay in transition is possible for certain flight conditions.
Open cycle OTEC thermal-hydraulic systems analysis and parametric studies
NASA Astrophysics Data System (ADS)
Parsons, B.; Bharathan, D.; Althof, J.
1984-06-01
An analytic thermohydraulic systems model of the power cycle and seawater supply systems for an open cycle ocean thermal energy conversion (OTEC) plant has been developed that allows ready examination of the effects of system and component operating points on plant size and parasitic power requirements. This paper presents the results of three parametric studies on the effects of system temperature distribution, plant gross electric capacity, and the allowable seawater velocity in the supply and discharge pipes. The paper also briefly discusses the assumptions and equations used in the model and the state-of-the-art component limitations. The model provides a useful tool for an OTEC plant designer to evaluate system trade-offs and define component interactions and performance.
Parametric modeling for quantitative analysis of pulmonary structure to function relationships
NASA Astrophysics Data System (ADS)
Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.
2005-04-01
While lung anatomy is well understood, pulmonary structure-to-function relationships such as the complex elastic deformation of the lung during respiration are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability of the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration, which correlate high-resolution anatomy to underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifested by alterations in the normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial methods which quantitatively illustrates detailed anatomic structure to pulmonary function relationships impossible for translational methods to provide.
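The parametric mapping step described above can be sketched as a per-vertex displacement computation between the inspiration and expiration lung surfaces. The following is a minimal illustration, not the authors' implementation; the nearest-neighbor distance is a crude stand-in for their surface correspondence, and the toy "lung" is a sphere that uniformly deflates:

```python
import numpy as np

def surface_deformation_map(surf_insp, surf_exp):
    """For each vertex on the inspiration surface, return the distance to the
    nearest vertex on the expiration surface (a crude displacement magnitude)."""
    # Pairwise distances (fine for small meshes; use a KD-tree for large ones).
    d = np.linalg.norm(surf_insp[:, None, :] - surf_exp[None, :, :], axis=2)
    return d.min(axis=1)

# Toy example: a unit sphere "deflating" uniformly to radius 0.8.
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
deformation = surface_deformation_map(v, 0.8 * v)  # 0.2 everywhere for this toy
```

A real pipeline would establish anatomically meaningful correspondences (e.g. via surface registration) before measuring deformation, so that regional abnormalities map to fixed anatomic locations.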
Binder, T; Garbe, C S; Wagenbach, D; Freitag, J; Kipfstuhl, S
2013-05-01
Microstructure analysis of polar ice cores is vital to understand the processes controlling the flow of polar ice on the microscale. This paper presents an automatic image processing framework for the extraction and parametrization of grain boundary networks from images of the NEEM deep ice core. As cross-section images are acquired using controlled surface sublimation, grain boundaries and air inclusions appear dark, whereas the inside of grains appears grey. The initial segmentation step of the software is to separate possible boundaries of grains and air inclusions from the background. A machine learning approach is utilized to obtain automatic, reliable classification, which is required for processing large data sets along deep ice cores. The second step is to compose the perimeter of section profiles of grains from planar sections of the grain surface between triple points. Finally, grain areas, grain boundaries and triple junctions of the latter are parametrized in various ways. High resolution is achieved, so that small grain sizes and local curvatures of grain boundaries can be systematically investigated. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Parametric Model Checking with VerICS
NASA Astrophysics Data System (ADS)
Knapik, Michał; Niewiadomski, Artur; Penczek, Wojciech; Półrola, Agata; Szreter, Maciej; Zbrzezny, Andrzej
The paper presents the verification system VerICS, extended with three new modules aimed at parametric verification of Elementary Net Systems, Distributed Time Petri Nets, and a subset of UML. All the modules exploit Bounded Model Checking for verifying parametric reachability and the properties specified in the logic PRTECTL, the parametric extension of the existential fragment of CTL.
NASA Astrophysics Data System (ADS)
Howe, R.; Basu, S.; Davies, G. R.; Ball, W. H.; Chaplin, W. J.; Elsworth, Y.; Komm, R.
2017-02-01
The solar-cycle variation of acoustic mode frequencies has a frequency dependence related to the inverse mode inertia. The discrepancy between model predictions and measured oscillation frequencies for solar and solar-type stellar acoustic modes includes a significant frequency-dependent term known as the surface term, which is also related to the inverse mode inertia. We parametrize both the surface term and the frequency variations for low-degree solar data from Birmingham Solar-Oscillations Network (BiSON) and medium-degree data from the Global Oscillations Network Group (GONG) using the mode inertia together with cubic and inverse frequency terms. We find that for the central frequency of rotationally split multiplets, the cubic term dominates both the average surface term and the temporal variation, but for the medium-degree case, the inverse term improves the fit to the temporal variation. We also examine the variation of the even-order splitting coefficients for the medium-degree data and find that, as for the central frequency, the latitude-dependent frequency variation, which reflects the changing latitudinal distribution of magnetic activity over the solar cycle, can be described by the combination of a cubic and an inverse function of frequency scaled by inverse mode inertia. The results suggest that this simple parametrization could be used to assess the activity-related frequency variation in solar-like asteroseismic targets.
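The parametrization described above can be written compactly. As a hedged sketch (the coefficients $a$, $b$ and the normalization frequency $\nu_0$ are illustrative labels, not the paper's notation):

```latex
\[
  \delta\nu_{n\ell} \;=\; \frac{1}{I_{n\ell}}
  \left[\, a \left(\frac{\nu_{n\ell}}{\nu_{0}}\right)^{3}
       \,+\, b \left(\frac{\nu_{n\ell}}{\nu_{0}}\right)^{-1} \right]
\]
```

where $I_{n\ell}$ is the mode inertia and $a$, $b$ are fitted either to the average surface term or to the temporal frequency variation; the abstract reports that the cubic term dominates for low-degree central frequencies, while the inverse term improves the fit at medium degree.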
Hu, Leland S.; Ning, Shuluo; Eschbacher, Jennifer M.; Gaw, Nathan; Dueck, Amylou C.; Smith, Kris A.; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J.; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O’Neill, Brian P.; Elmquist, William; Baxter, Leslie C.; Gao, Fei; Frakes, David; Karis, John P.; Zwart, Christine; Swanson, Kristin R.; Sarkaria, Jann; Wu, Teresa
2015-01-01
Background Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. Methods We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. Results We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Conclusion Multi-parametric MRI and texture analysis can help characterize and visualize GBM’s spatial histologic heterogeneity to identify regional tumor-rich biopsy targets. PMID:26599106
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.; Yool, A.
2014-09-01
Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to compensate for missing biological complexity. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely-used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of target model output. In general, chlorophyll records at a representative array of oceanic sites
NASA Astrophysics Data System (ADS)
Lecomte, E.; Le Pourhiet, L.; Lacombe, O.; Jolivet, L.
2009-04-01
Widespread occurrences of low angle normal faults have been described within the extending continental crust since their discovery in the Basin and Range province. Although a number of field observations suggest that sliding may occur at very shallow dip in the brittle field, the seismic activity related to such normal faults is nearly nonexistent and agrees with the locking angle of 30° predicted from Andersonian fault mechanics associated with Byerlee's law. To understand this apparent contradiction, we have introduced a Mohr-Coulomb plastic flow rule within the inherited low-angle faults, where former studies were limited to a yield criterion. The fault is considered as a pre-existing compacting or dilating plane with a shallow dip (0-45°) embedded in a brittle medium. Following Anderson's theory, we assume that the maximal principal stress is vertical and equal to the lithostatic pressure. This approximation may not be true for small faults, but it holds for large detachment faults where associated joints are generally vertical. With this model, we can predict not only whether new brittle features form in the surroundings of the low-angle normal fault but also the complete stress-strain evolution both within the fault and in its surroundings. Moreover, the introduction of a flow rule within the fault allows brittle strain to occur on very badly oriented faults (dip < 30°) before yielding occurs in the surrounding medium. After performing a full parametric study, we find that the reactivation of low angle normal faults depends primarily on the friction angle of the fault material and the ratio of the cohesion between the shear band and its surroundings. Our model is therefore in good agreement with previous simpler models, and the locking angles obtained differ in most cases by only 2 or 3° from previous yield-criterion-based approaches, which did explain most of the data, especially the distribution of focal mechanisms worldwide. However, we find that in some cases
NASA Astrophysics Data System (ADS)
Alexander, Rafael N.; Wang, Pei; Sridhar, Niranjan; Chen, Moran; Pfister, Olivier; Menicucci, Nicolas C.
2016-09-01
One-way quantum computing is experimentally appealing because it requires only local measurements on an entangled resource called a cluster state. Record-size, but nonuniversal, continuous-variable cluster states were recently demonstrated separately in the time and frequency domains. We propose to combine these approaches into a scalable architecture in which a single optical parametric oscillator and simple interferometer entangle up to (3×10³ frequencies) × (an unlimited number of temporal modes) into a computationally universal continuous-variable cluster state. We introduce a generalized measurement protocol to enable improved computational performance on this entanglement resource.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
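A minimal sketch of a non-parametric detector of the kind compared above might threshold the fraction of spectral power inside a nominal tremor band. The band limits, threshold, and synthetic signals here are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import welch

def tremor_present(accel, fs, band=(4.0, 12.0), threshold=0.5):
    """Flag a segment as tremor if the fraction of spectral power inside
    `band` (Hz) exceeds `threshold`. Band and threshold are illustrative."""
    f, pxx = welch(accel, fs=fs, nperseg=min(256, len(accel)))
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].sum() / pxx.sum() > threshold

# Synthetic 10 s segments at 100 Hz: a 6 Hz "tremor" vs. plain noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
tremor_seg = np.sin(2 * np.pi * 6.0 * t) + 0.2 * rng.normal(size=t.size)
rest_seg = rng.normal(size=t.size)
```

A deployed method would tune the band and threshold per tremor disorder and operate on sliding windows over the long-term recording, passing only flagged segments to the clinician.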
Hwang, Eunjoo; Hu, Jingwen; Chen, Cong; Klein, Katelyn F; Miller, Carl S; Reed, Matthew P; Rupp, Jonathan D; Hallman, Jason J
2016-11-01
Occupant stature and body shape may have significant effects on injury risks in motor vehicle crashes, but the current finite element (FE) human body models (HBMs) only represent occupants with a few sizes and shapes. Our recent studies have demonstrated that, by using a mesh morphing method, parametric FE HBMs can be rapidly developed for representing a diverse population. However, the biofidelity of those models across a wide range of human attributes has not been established. Therefore, the objectives of this study are 1) to evaluate the accuracy of HBMs considering subject-specific geometry information, and 2) to apply the parametric HBMs in a sensitivity analysis for identifying the specific parameters affecting body responses in side impact conditions. Four side-impact tests with two male post-mortem human subjects (PMHSs) were selected to evaluate the accuracy of the geometry and impact responses of the morphed HBMs. For each PMHS test, three HBMs were simulated to compare with the test results: the original Total Human Model for Safety (THUMS) v4.01 (O-THUMS), a parametric THUMS (P-THUMS), and a subject-specific THUMS (S-THUMS). The P-THUMS geometry was predicted from only age, sex, stature, and BMI using our statistical geometry models of skeleton and body shape, while the S-THUMS geometry was based on each PMHS's CT data. The simulation results showed a preliminary trend that the correlations between the P-THUMS-predicted impact responses and the four PMHS tests (mean-CORA: 0.84, 0.78, 0.69, 0.70) were better than those between the O-THUMS and the normalized PMHS responses (mean-CORA: 0.74, 0.72, 0.55, 0.63), while they are similar to the correlations between the S-THUMS and the PMHS tests (mean-CORA: 0.85, 0.85, 0.67, 0.72). The sensitivity analysis using the P-THUMS showed that, in side impact conditions, the HBM skeleton and body shape geometries as well as the body posture were more important in modeling the occupant impact responses than the bone and soft
NASA Astrophysics Data System (ADS)
Peng, Bo; Kowalski, Karol
2016-12-01
In this paper we derive basic properties of the Green's-function matrix elements stemming from the exponential coupled-cluster (CC) parametrization of the ground-state wave function. We demonstrate that all intermediates used to express the retarded (or, equivalently, ionized) part of the Green's function in the ω representation can be expressed only through connected diagrams. Similar properties are also shared by the first-order ω derivative of the retarded part of the CC Green's function. Moreover, the first-order ω derivative of the CC Green's function can be evaluated analytically. This result can be generalized to any order of ω derivatives. Through the Dyson equation, derivatives of the corresponding CC self-energy operator can be evaluated analytically. In analogy to the CC Green's function, the corresponding CC self-energy operator can be represented by connected terms. Our analysis can easily be generalized to the advanced part of the CC Green's function.
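The analytic $\omega$-derivative result can be sketched in matrix notation. Through the Dyson equation the self-energy and its frequency derivative follow as (an illustrative paraphrase, not the paper's exact equations):

```latex
\[
  \Sigma(\omega) \;=\; G_{0}^{-1}(\omega) - G^{-1}(\omega),
  \qquad
  \frac{\partial \Sigma}{\partial \omega}
  \;=\; \frac{\partial G_{0}^{-1}}{\partial \omega}
  \;+\; G^{-1}\,\frac{\partial G}{\partial \omega}\,G^{-1},
\]
```

using $\partial(G^{-1}) = -G^{-1}(\partial G)\,G^{-1}$; thus an analytically computed $\partial G/\partial\omega$ immediately yields an analytic $\partial\Sigma/\partial\omega$, and the same chain rule extends to higher-order $\omega$ derivatives.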
NASA Astrophysics Data System (ADS)
Wang, Ning; Li, Yongmin
2016-01-01
We developed a quantum analysis of the nondegenerate optical parametric oscillator (NOPO) with unequally injected signal and idler. Both the steady-state output field and the two-mode quantum correlation spectrum are investigated under the condition of different injected idler-to-signal ratios (ISRs) and the relative phase between the pump and the injected seed. It is found that when the seed is injected through the output coupler, the NOPO allows for the robust generation of two-mode quantum entanglement even if the relative phase is free running and the ISR is as high as 0.7. At the specific relative phase of zero, a high degree of entanglement can exist across a whole range of ISRs. An experimental study of the NOPO with unequal seeds is presented, and the observed results verify the theoretical predictions.
NASA Astrophysics Data System (ADS)
Silveri, M.; Zalys-Geller, E.; Hatridge, M.; Leghtas, Z.; Devoret, M. H.; Girvin, S. M.
2015-03-01
In the remote entanglement process, two distant stationary qubits are entangled with separate flying qubits and the which-path information is erased from the flying qubits by interference effects. As a result, an observer cannot tell from which of the two sources a signal came, and the probabilistic measurement process generates perfect heralded entanglement between the two signal sources. Notably, the two stationary qubits are spatially separated and there is no direct interaction between them. We study two transmon qubits in superconducting cavities connected to a Josephson Parametric Converter (JPC). The qubit information is encoded in the traveling wave leaking out from each cavity. Remarkably, the quantum-limited phase-preserving amplification of two traveling waves provided by the JPC can work as a which-path information eraser. By using a stochastic master equation approach, we demonstrate the probabilistic production of heralded entangled states and show that unequal qubit-cavity pairs can be made indistinguishable by simple engineering of the driving fields. Additionally, we derive measurement rates and measurement optimization strategies, and discuss the effects of finite amplification gain, cavity losses, and qubit relaxation and dephasing. Work supported by IARPA, ARO and NSF.
NASA Technical Reports Server (NTRS)
Masunaga, Hirohiko; Kummerow, Christian D.
2005-01-01
A methodology to analyze precipitation profiles using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and precipitation radar (PR) is proposed. Rainfall profiles are retrieved from PR measurements, defined as the best-fit solution selected from precalculated profiles by cloud-resolving models (CRMs), under explicitly defined assumptions of drop size distribution (DSD) and ice hydrometeor models. The PR path-integrated attenuation (PIA), where available, is further used to adjust the DSD in a manner that is similar to the PR operational algorithm. Combined with the TMI-retrieved nonraining geophysical parameters, the three-dimensional structure of the geophysical parameters is obtained across the satellite-observed domains. Microwave brightness temperatures are then computed for comparison with TMI observations to examine whether the radar-retrieved rainfall is consistent in the radiometric measurement space. The inconsistency in microwave brightness temperatures is reduced by iterating the retrieval procedure with updated assumptions of the DSD and ice-density models. The proposed methodology is expected to refine the a priori rain profile database and error models for use by parametric passive microwave algorithms aimed at the Global Precipitation Measurement (GPM) mission, as well as future TRMM algorithms.
Parametric analysis of mechanically driven compositional patterning in SiGe substrates
NASA Astrophysics Data System (ADS)
Kaiser, Daniel; Han, Sang M.; Sinno, Talid
2017-02-01
A recently demonstrated approach for creating structured compositional gradients in the near-surface region of SiGe substrates is studied parametrically using a multiresolution coarse-grained lattice kinetic Monte Carlo simulation method. In the "stress patterning" process, a patterned elastic stress field is generated in the SiGe substrate by pressing an array of micro-indenters into it. The stressed substrate is then thermally annealed to drive the atomic diffusion in which the larger Ge atoms are pushed away from the areas of compressive stress. By varying a subset of the parameters that characterize the high-dimensional input space of the process (e.g., indenter spacing, indenter tip shape, and indenter array symmetry) we show that technologically interesting compositional configurations may be readily generated. In particular, we show that it is theoretically possible to generate arrays of well-delineated nanoscale regions of high Ge content surrounded by essentially pure Si. Such configurations may be useful as Ge "quantum dots" that exhibit three-dimensional quantum confinement, which have otherwise been very challenging to create with high degrees of size and spatial uniformity. These simulation results will be instrumental in guiding future experimental demonstrations of stress patterning.
NASA Astrophysics Data System (ADS)
Hartwig, Jason; Adin Mann, Jay; Darr, Samuel R.
2014-09-01
This paper presents the parametric investigation of the factors which govern screen channel liquid acquisition device bubble point pressure in a low pressure propellant tank. The five test parameters that were varied included the screen mesh, liquid cryogen, liquid temperature and pressure, and type of pressurant gas. Bubble point data was collected using three fine mesh 304 stainless steel screens in two different liquids (hydrogen and nitrogen), over a broad range of liquid temperatures and pressures in subcooled and saturated liquid states, using both a noncondensable (helium) and autogenous (hydrogen or nitrogen) gas pressurization scheme. Bubble point pressure scales linearly with surface tension, but does not scale inversely with the fineness of the mesh. Bubble point pressure increases proportional to the degree of subcooling. Higher bubble points are obtained using noncondensable pressurant gases over the condensable vapor. The bubble point model is refined using a temperature dependent pore diameter of the screen to account for screen shrinkage at reduced liquid temperatures and to account for relative differences in performance between the two pressurization schemes. The updated bubble point model can be used to accurately predict performance of LADs operating in future cryogenic propellant engines and cryogenic fuel depots.
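The scaling behaviour reported above is consistent with the standard bubble point relation; as a hedged sketch (generic symbols, not the paper's notation):

```latex
\[
  \Delta P_{\mathrm{BP}} \;=\; \frac{4\,\sigma(T)\,\cos\theta_{c}}{D_{p}(T)},
\]
```

where $\sigma(T)$ is the liquid surface tension, $\theta_{c}$ the contact angle, and $D_{p}(T)$ an effective pore diameter. Making $D_{p}$ temperature-dependent, as the refined model does, captures screen shrinkage at reduced liquid temperatures while preserving the linear scaling with $\sigma$.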
Cabras, Stefano; Castellanos, Maria Eugenia; Perra, Silvia
2014-11-20
This paper considers the problem of selecting a set of regressors when the response variable is distributed according to a specified parametric model and observations are censored. Under a Bayesian perspective, the most widely used tools are Bayes factors (BFs), which are undefined when improper priors are used. In order to overcome this issue, fractional (FBF) and intrinsic (IBF) BFs have become common tools for model selection. Both depend on the size, Nt, of a minimal training sample (MTS), while the IBF also depends on the specific MTS used. In the case of regression with censored data, the definition of an MTS is problematic because only uncensored data allow one to turn the improper prior into a proper posterior, and because full exploration of the space of MTSs, which also includes censored observations, is needed to avoid bias in model selection. To address this concern, a sequential MTS was proposed, but it has the drawback of an increased number of possible MTSs as Nt becomes random. For this reason, we explore the behaviour of the FBF, contextualizing its definition to censored data. We show that it is consistent, and we also provide the corresponding fractional prior. Finally, a large simulation study and an application to real data are used to compare the IBF, the FBF and the well-known Bayesian information criterion.
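For orientation, the fractional Bayes factor of O'Hagan can be sketched as follows (generic notation, not specific to the censored-data setting of the paper):

```latex
\[
  B^{F}_{12}(b) \;=\; \frac{q_{1}(b,y)}{q_{2}(b,y)},
  \qquad
  q_{i}(b,y) \;=\;
  \frac{\displaystyle\int f_{i}(y\mid\theta_{i})\,\pi_{i}(\theta_{i})\,d\theta_{i}}
       {\displaystyle\int f_{i}(y\mid\theta_{i})^{\,b}\,\pi_{i}(\theta_{i})\,d\theta_{i}},
\]
```

with fraction $b \approx N_t/N$. Any improper normalizing constant of $\pi_{i}$ cancels in the ratio $q_{i}$, which is why the FBF remains well defined under improper priors where the plain BF does not.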
NASA Technical Reports Server (NTRS)
Housner, J. M.; Stein, M.
1975-01-01
A computer program is presented which was developed for the combined compression and shear buckling of stiffened variable thickness orthotropic composite panels on discrete springs. Boundary conditions are general and include elastic boundary restraints. Buckling solutions are obtained by using a newly developed trigonometric finite difference procedure which improves the solution convergence rate over conventional finite difference methods. The classical general shear buckling results, which exist only for simply supported panels over a limited range of orthotropic properties, were extended to the complete range of these properties for simply supported panels and, in addition, to the complete range of orthotropic properties for clamped panels. The program was also applied to parametric studies which examine the effect of filament orientation upon the buckling of graphite-epoxy panels. These studies included an examination of the filament orientations which yield maximum shear or compressive buckling strength for panels having all four edges simply supported or clamped over a wide range of aspect ratios. Panels with such orientations had higher buckling loads than comparable, equal-weight, thin-skinned aluminum panels. Also included among the parametric studies were examinations of combined axial compression and shear buckling and examinations of panels with rotational elastic edge restraints.
Multi-parametric heart rate analysis in premature babies exposed to sudden infant death syndrome.
Lucchini, Maristella; Signorini, Maria G; Fifer, William P; Sahni, Rakhesh
2014-01-01
Severely premature babies present a higher risk profile than the normal population. The reasons are related to the incomplete development of the physiological systems that support the baby's life. Heart Rate Variability (HRV) analysis can help identify distress conditions, as it is sensitive to Autonomic Nervous System (ANS) behavior. This paper presents results obtained in 35 babies with severe prematurity, in quiet and active sleep and in prone and supine positions. HRV was analyzed in the time and frequency domains and with nonlinear parameters. The novelty of this approach lies in the combined use of parameters generally adopted in fetal monitoring and "adult" indices. Results show that most parameters succeed in classifying the different experimental conditions. This is very promising, as our final objective is to identify a set of parameters that could be the basis for a risk classifier to improve the care path of the premature population.
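A minimal sketch of the classical time-domain indices such an analysis typically includes; the example R-R interval series is invented for illustration:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Classical time-domain HRV indices from R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),                      # overall variability
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),       # short-term variability
        "pNN50": np.mean(np.abs(diffs) > 50) * 100,  # % successive diffs > 50 ms
    }

# Invented 5-beat R-R series (ms) purely to exercise the function.
indices = hrv_time_domain([800, 810, 790, 805, 795])
```

Frequency-domain (LF/HF) and nonlinear indices of the kind the study combines would be computed from the same R-R series after resampling it to a uniform time grid.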
Parametric and non-parametric estimation of speech formants: application to infant cry.
Fort, A; Ismaelli, A; Manfredi, C; Bruscaglioni, P
1996-12-01
The present paper addresses the issue of correctly estimating the peaks in the speech envelope (formants) occurring in newborn infant cry. Clinical studies have shown that the analysis of such spectral characteristics is a helpful noninvasive diagnostic tool. In fact, it can be applied to explore brain function at a very early stage of child development, for a timely diagnosis of neonatal disease and malformation. The paper focuses on the performance comparison between some classical parametric and non-parametric estimation techniques particularly well suited for the present application, specifically the LP, ARX and cepstrum approaches. It is shown that, if the model order is correctly chosen, parametric methods are in general more reliable and robust against noise, but exhibit a less uniform behaviour than the cepstrum approach. The methods are also compared in terms of tracking capability, since the signals under study are nonstationary. Both simulated and real signals are used in order to outline the relevant features of the proposed approaches.
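A minimal sketch of LP (linear prediction) formant estimation of the kind compared above, using the autocorrelation method with a Levinson-Durbin recursion; the toy resonance frequency, pole radius, and model order are illustrative, not values from the paper:

```python
import numpy as np

def lpc_coeffs(x, order):
    """Autocorrelation-method LP coefficients via the Levinson-Durbin recursion."""
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0], e = 1.0, r[0]
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / e        # reflection coefficient
        a[1 : i + 1] += k * a[i - 1 :: -1]       # update prediction polynomial
        e *= 1.0 - k * k                          # residual energy
    return a

def formants(x, fs, order):
    """Formant frequencies (Hz) from the angles of the LP polynomial roots."""
    roots = np.roots(lpc_coeffs(x, order))
    roots = roots[roots.imag > 0]  # keep one root of each conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# Toy signal: white noise through a single resonance ("formant") at 1 kHz.
fs, f0, r = 8000.0, 1000.0, 0.97
a_true = [1.0, -2 * r * np.cos(2 * np.pi * f0 / fs), r * r]
rng = np.random.default_rng(2)
x = np.zeros(4000)
for n, w in enumerate(rng.normal(size=x.size)):
    x[n] = w - a_true[1] * x[n - 1] - a_true[2] * x[n - 2]
est = formants(x, fs, order=2)  # should recover a frequency near 1000 Hz
```

An ARX estimator would add an exogenous input term to the same recursion, while the cepstrum approach would instead pick peaks in the liftered log-spectrum; the root radius here can additionally be used to estimate formant bandwidths.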
2013-01-01
Background Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. Results We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as “pathwise”. The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. Conclusions As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the
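For orientation, the relative entropy rate and pathwise FIM for a reaction network with propensity functions $a_j$ can be sketched as follows (a hedged paraphrase of the construction described above, not the paper's exact notation):

```latex
\[
  \mathcal{H}(\theta;\varepsilon) \;=\;
  \mathbb{E}_{\mu_\theta}\!\left[
    \sum_{j} a_{j}(x;\theta)\,
    \log\frac{a_{j}(x;\theta)}{a_{j}(x;\theta+\varepsilon)}
    \;-\; \bigl(\bar a(x;\theta) - \bar a(x;\theta+\varepsilon)\bigr)
  \right],
\]
\[
  \mathcal{F}(\theta) \;=\;
  \mathbb{E}_{\mu_\theta}\!\left[
    \sum_{j} a_{j}(x;\theta)\,
    \nabla_{\theta}\log a_{j}(x;\theta)\,
    \nabla_{\theta}\log a_{j}(x;\theta)^{\top}
  \right],
\]
```

where $\bar a = \sum_j a_j$ is the total propensity and $\mu_\theta$ the stationary distribution of the jump process. Both quantities involve only the propensities and their parameter gradients, which is what makes the approach gradient-free with respect to trajectory observables.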
Dynamic modelling and stability parametric analysis of a flexible spacecraft with fuel slosh
NASA Astrophysics Data System (ADS)
Gasbarri, Paolo; Sabatini, Marco; Pisculli, Andrea
2016-10-01
Modern spacecraft often contain large quantities of liquid fuel to execute station keeping and attitude manoeuvres for space missions. In general the combined liquid-structure system is very difficult to model, and the analyses are based on some assumed simplifications. The liquid dynamics inside closed containers can be realistically approximated by an equivalent mechanical system. This technique can be considered a very useful mathematical tool for solving the complete dynamics problem of a space-system containing liquid. Such equivalent models are particularly useful when designing a control system or studying the stability margins of the coupled dynamics. The commonly used equivalent mechanical models are the mass-spring models and the pendulum models. As far as spacecraft modelling is concerned, the spacecraft is usually considered rigid, i.e. no flexible appendages such as solar arrays or antennas are considered when dealing with the interaction of the attitude dynamics with the fuel slosh. In the present work the interactions among the fuel slosh, the attitude dynamics and the flexible appendages of a spacecraft are first studied via a classical multi-body approach. In particular the equations of attitude and orbit motion are first derived for the partially liquid-filled flexible spacecraft undergoing fuel slosh; then several parametric analyses will be performed to study the stability conditions of the system during some assigned manoeuvres. The present study is propaedeutic for the synthesis of advanced attitude and/or station keeping control techniques able to minimize and/or reduce an undesired excitation of the satellite flexible appendages and of the fuel sloshing mass.
Natural time analysis: An overview.
NASA Astrophysics Data System (ADS)
Sarlis, N. V.; Skordas, E. S.; Lazaridou, M. S.; Varotsos, P. A.
2012-04-01
Natural time, first brought forward a decade ago [1,2], has been recently reviewed [3]. It enables the identification of novel dynamical features hidden behind the time series of complex systems. Upon employing natural time, modern techniques of statistical physics in time series analysis (for example, Hurst analysis, the detrended fluctuation analysis, multifractal detrended fluctuation analysis, wavelet transform etc.) yield improved results. Natural time analysis has been shown to extract the maximum information possible in the study of the dynamical evolution of a complex system. It identifies when a system enters a critical stage. Hence, it plays a key role in predicting catastrophic events in general. We review a series of such examples including the analysis of avalanches of the penetration of magnetic flux into thin films of high-Tc superconductors, the identification of sudden cardiac death risk, the recognition of electric signals that precede earthquakes and the determination of the time of an impending mainshock. In particular, we review cases of major earthquakes that occurred in Greece [4-7] and California [6-8] as well as discuss more recent results.
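The central natural-time computation is compact enough to sketch. Assuming a sequence of event "energies" Q_k read in order of occurrence, the k-th event is assigned natural time χ_k = k/N, and the variance κ₁ = ⟨χ²⟩ − ⟨χ⟩² is taken under the normalized weights p_k = Q_k/ΣQ (the function name is illustrative, not from the reviewed papers):

```python
def natural_time_kappa1(energies):
    """Variance kappa_1 of natural time chi_k = k/N, weighted by the
    normalized event energies p_k = Q_k / sum(Q)."""
    N = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]        # normalized weights p_k
    chi = [(k + 1) / N for k in range(N)]    # natural time chi_k = k/N
    mean = sum(pk * c for pk, c in zip(p, chi))
    mean_sq = sum(pk * c * c for pk, c in zip(p, chi))
    return mean_sq - mean * mean
```

In the natural-time literature, κ₁ approaching a value of about 0.070 is reported as a signature that the system enters criticality; for equal energies κ₁ tends to the uniform-distribution variance 1/12 as N grows.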
Harvey, Rebecca C
2017-01-01
Advanced gastric cancer (AGC) is one of the most common forms of cancer and remains difficult to cure. There is currently no recommended therapy for second-line AGC in the UK despite the availability of various interventions. This paper aims to compare different interventions for treatment of second-line AGC using more complex methods to estimate relative efficacy, fitting various parametric models, and comparing results to those published adopting conventional methods of synthesis. Seven studies were identified in an existing literature review evaluating seven comparators, which formed a connected network of evidence. Citations were limited to randomised controlled trials in previously-treated AGC patients. Evidence quality was assessed using the Cochrane Collaboration's tool. Studies were assessed for the availability of Kaplan-Meier curves for overall survival. Individual patient data (IPD) were recreated using digitisation software along with a published algorithm in R. The data were analysed using multi-dimensional network meta-analysis (NMA) methods. A series of parametric models were fitted to the pseudo-IPD. Both fixed and random-effects models were fitted to explore long-term survival prospects based on extrapolation methods and estimated mean survival. Relative efficacy estimates were compared to those previously reported, which utilised conventional NMA methods. Results presented were consistent with findings from other publications and identified ramucirumab plus paclitaxel as the best treatment; however, all the treatments assessed were associated with poor survival prospects with mean survival estimates ranging from 5.0 to 12.7 months. Whilst the approach adopted in this paper does not adjust for differences in trial patient populations and is particularly data-intensive, use of such sophisticated methods of evidence synthesis may be more informative for subsequent cost-effectiveness modelling and may have greater impact when considering an
ERIC Educational Resources Information Center
Karpman, Mitchell
1986-01-01
The Johnson-Neyman (JN) technique is a parametric alternative to analysis of covariance that permits nonparallel regression lines. This article presents computer programs for JN using the transformational languages of SPSS-X and SAS. The programs are designed for two groups and one covariate. (Author/JAZ)
NASA Astrophysics Data System (ADS)
Li, Jiang-Fan; Fang, Jia-Yuan; Xiao, Fu-Liang; Liu, Xin-Hai; Wang, Cheng-Zhi
2009-03-01
By properly selecting the time-dependent unitary transformation for the linear combination of the number operators, we construct a time-dependent invariant and derive the corresponding auxiliary equations for the degenerate and non-degenerate coupled parametric down-conversion system with driving term. By means of this invariant and the Lewis-Riesenfeld quantum invariant theory, we obtain closed formulae of the quantum state and the evolution operator of the system. We show that the time evolution of the quantum system directly leads to production of various generalized one- and two-mode combination squeezed states, and the squeezed effect is independent of the driving term of the Hamiltonian. In some special cases, the current solution can reduce to the results of the previous works.
Permutations and time series analysis.
Cánovas, Jose S; Guillamón, Antonio
2009-12-01
The main aim of this paper is to show how the use of permutations can be useful in the study of time series analysis. In particular, we introduce a test for checking the independence of a time series which is based on the number of admissible permutations on it. The main improvement in our tests is that we are able to give a theoretical distribution for independent time series.
Medina, Brandon Michael
2016-04-15
The motivation to precisely determine breakout time is to better understand initial motion. An analysis on the baseline is conducted to determine breakout time. The power in the baseline drops by a factor of ~6 after the breakout time occurs. The characteristic rounded step function of the baseline power can be modeled to calculate the breakout time. The characteristic rounded step function of the phase change in the baseline can be modeled to calculate the breakout time. Power and phase both seem to be viable sources that can be used to find breakout time effectively. The phase and power methods complement one another, so whenever one method does not work, the other can still be used. In some cases, the breakout time can be slightly shifted between phase and power. In the future, it would be good to develop a way to quantify the breakout time as well as the associated precision and accuracy.
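A simple stand-in for the rounded-step modelling described above is a midpoint-crossing estimate: take the pre- and post-breakout plateau levels of the power (or phase) signal and report the first time the signal crosses a level between them. All names and the plateau-averaging choice are illustrative assumptions; the abstract's actual analysis fits a rounded step function:

```python
def breakout_time(t, signal, plateau=5, frac=0.5):
    """Estimate breakout time as the first crossing of the level lying
    `frac` of the way between the late (post-breakout) and early
    (pre-breakout) plateau means, each estimated from `plateau` samples."""
    early = sum(signal[:plateau]) / plateau    # pre-breakout level
    late = sum(signal[-plateau:]) / plateau    # post-breakout level
    level = late + frac * (early - late)
    for ti, s in zip(t, signal):
        if s <= level:                         # power drops at breakout
            return ti
    return None
```

For the factor-of-~6 power drop reported above, the midpoint level is well separated from both plateaus, so the crossing time is insensitive to the exact choice of `frac`.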
Parametric analysis of the growth of colloidal ZnO nanoparticles synthesized in alcoholic medium
NASA Astrophysics Data System (ADS)
Fonseca, A. S.; Figueira, P. A.; Pereira, A. S.; Santos, R. J.; Trindade, T.; Nunes, M. I.
2017-02-01
The growth kinetics of nanosized ZnO was studied considering the influence of different parameters (mixing degree, temperature, alcohol chain length, reactant concentration and Zn/OH ratios) on the synthesis reaction and modelling the outputs using typical kinetic growth models, which were then evaluated by means of a sensitivity analysis. The Zn/OH ratio, the temperature and the alcohol chain length were found to be essential parameters to control the growth of ZnO nanoparticles, whereas zinc acetate concentration (for Zn/OH = 0.625) and the stirring during the ageing stage were shown to not have significant influence on the particle size growth. This last operational parameter was for the first time investigated for nanoparticles synthesized in 1-pentanol, and it is of utmost importance for the implementation of continuous industrial processes for mass production of nanosized ZnO and energy savings in the process. Concerning the nanoparticle growth modelling, the results show a different pattern from the more commonly accepted diffusion-limited Ostwald ripening process, i.e. the Lifshitz-Slyozov-Wagner (LSW) model. Indeed, this study shows that oriented attachment occurs during the early stages whereas for the later stages the particle growth is well represented by the LSW model. This conclusion contributes to clarify some controversy found in the literature regarding the kinetic model which better represents the ZnO NPs' growth in alcoholic medium.
NASA Astrophysics Data System (ADS)
Schneberk, D.
1985-07-01
The analysis component of the Enrichment Diagnostic System (EDS) developed for the Atomic Vapor Laser Isotope Separation Program (AVLIS) at Lawrence Livermore National Laboratory (LLNL) is described. Four different types of analysis are performed on data acquired through EDS: (1) absorption spectroscopy on laser-generated spectral lines, (2) mass spectrometer analysis, (3) general purpose waveform analysis, and (4) separation performance calculations. The information produced from this data includes: measures of particle density and velocity, partial pressures of residual gases, and overall measures of isotope enrichment. The analysis component supports a variety of real-time modeling tasks, a means for broadcasting data to other nodes, and a great degree of flexibility for tailoring computations to the exact needs of the process. A particular data base structure and program flow is common to all types of analysis. Key elements of the analysis component are: (1) a fast access data base which can configure all types of analysis, (2) a selected set of analysis routines, (3) a general purpose data manipulation and graphics package for the results of real time analysis.
Zhang, W; Collins, A; Lonjou, C; Morton, N E
2002-01-01
Linkage tests to localize oligogenes have been extended during the past year. Using simulated data and multiplex selection we find that several tests on affected sib pairs have comparable power and type I error. Three variants of SIBPAL2 are favoured when substantial numbers of normal sibs are included, but performance relative to the BETA benchmark degrades rapidly as normal sibs are depleted by selective sampling or typing. Neglect of this fact may explain recent failure of retrospective collaboration to confirm asthma candidates in the 5q cytokine region that are supported by other studies. A fully quantitative trait favours variance components under complete ascertainment and two options in SIBPAL2 under multiplex selection, with substantial gain in power from covariance analysis if the covariate is independent of the candidate locus. A dichotomy and liability threshold give virtually identical results in the SOLAR variance components program. Comparison with single-marker parametric analysis suggests that extension to multiple markers would be competitive with nonparametric methods in power, and superior in depth of genetic analysis. The simulated examples illustrate common problems encountered with linkage scans for oligogenes. They show that nonparametric methods provide no panacea for analytical problems posed by different phenotypes and methods of ascertainment.
Longo, Dario Livio; Dastrù, Walter; Consolino, Lorena; Espak, Miklos; Arigoni, Maddalena; Cavallo, Federica; Aime, Silvio
2015-07-01
The objective of this study was to compare a clustering approach to conventional analysis methods for assessing changes in pharmacokinetic parameters obtained from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) during antiangiogenic treatment in a breast cancer model. BALB/c mice bearing established transplantable her2+ tumors were treated with a DNA-based antiangiogenic vaccine or with an empty plasmid (untreated group). DCE-MRI was carried out by administering a dose of 0.05 mmol/kg of Gadocoletic acid trisodium salt, a Gd-based blood pool contrast agent (CA) at 1T. Changes in pharmacokinetic estimates (K(trans) and vp) in a nine-day interval were compared between treated and untreated groups on a voxel-by-voxel analysis. The tumor response to therapy was assessed by a clustering approach and compared with conventional summary statistics, with sub-regions analysis and with histogram analysis. Both the K(trans) and vp estimates, following blood-pool CA injection, showed marked and spatial heterogeneous changes with antiangiogenic treatment. Averaged values for the whole tumor region, as well as from the rim/core sub-regions analysis were unable to assess the antiangiogenic response. Histogram analysis resulted in significant changes only in the vp estimates (p<0.05). The proposed clustering approach depicted marked changes in both the K(trans) and vp estimates, with significant spatial heterogeneity in vp maps in response to treatment (p<0.05), provided that DCE-MRI data are properly clustered in three or four sub-regions. This study demonstrated the value of cluster analysis applied to pharmacokinetic DCE-MRI parametric maps for assessing tumor response to antiangiogenic therapy.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected
NASA Astrophysics Data System (ADS)
Allan, Alasdair
2014-06-01
FROG performs time series analysis and display. It provides a simple user interface for astronomers wanting to do time-domain astrophysics but still offers the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulation of time series. Among other things, the user can combine individual time series, detrend series (multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.
Saunders, Marnie M; Schwentker, Edwards P; Kay, David B; Bennett, Gordon; Jacobs, Christopher R; Verstraete, Mary C; Njus, Glen O
2003-02-01
In this study, we developed an approach for prosthetic foot design incorporating motion analysis, mechanical testing and computer analysis. Using computer modeling and finite element analysis, a three-dimensional (3D), numerical foot model of the solid ankle cushioned heel (SACH) foot was constructed and analyzed based upon loading conditions obtained from the gait analysis of an amputee and validated experimentally using mechanical testing. The model was then used to address effects of viscoelastic heel performance numerically. This is just one example of the type of parametric analysis and design enabled by this approach. More importantly, by incorporating the unique gait characteristics of the amputee, these parametric analyses may lead to prosthetic feet more appropriately representing a particular user's needs, comfort and activity level.
Ruiz-Sanchez, Eduardo
2015-12-01
The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to estimate divergence times in Otatea. The results for divergence time in Otatea estimated the origin of the speciation events from the Late Miocene to Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified that the two populations of O. acuminata from Chiapas and Hidalgo are from two separate evolutionary lineages and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited the gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata.
CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions
Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.
Parametric analysis of factors affecting injection and production in geothermal reservoirs
Hornbrook, John W.; Faulder, D.D.
1993-01-28
A program was designed to allow the study of the effects of several parameters on the injection of water into and production of fluid from a fractured low porosity geothermal reservoir with properties similar to those at The Geysers. Fractures were modeled explicitly with low porosity, high permeability blocks rather than with a dual-porosity formulation to gain insight into the effects of single fractures. A portion of a geothermal reservoir with physical characteristics similar to those at the Geysers geothermal field was constructed by simulating a single fracture bounded by porous matrix. A series of simulation runs was made using this system as a basis. Reservoir superheat prior to injection, injection temperature, angle of fracture inclination, fracture/matrix permeability contrast, fracture and matrix relative permeability, and the capillary pressure curves in both fracture and matrix were varied and the effects on production were compared. Analysis of the effects of these parameter variations led to qualitative conclusions about injection and production characteristics at the Geysers. The degree of superheat prior to water injection was found to significantly affect the production from geothermal reservoirs. A high degree of superheat prior to injection increases the enthalpy of the produced fluid and causes the cumulative produced energy to nearly equal that from a reservoir which began injection much earlier. Injection temperature was found to have very little effect on production characteristics. Angle of fracture inclination affects the enthalpy of the produced fluid. Fractures dipping toward the production well allow greater flow of water toward the producer resulting in lower enthalpies of produced fluid. The fracture/matrix permeability contrast was shown to influence the production in an expected way: The lower the contrast, the lower the production rate, and the lower the enthalpy of the produced fluid at a given time. Results obtained by varying
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N. K. G.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre and post-radiotherapy (RT) functional images. Methods: Arterial blood flow maps (ABF) were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing, increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification posed challenges for PRM analysis in the liver where registration accuracy tends to be lower. Quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
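The voxel-wise classification underlying a PRM reduces to thresholding the post-minus-pre difference map, and the study's misclassification estimate then compares a registration-perturbed PRM against the ground-truth PRM. The sketch below uses illustrative names and an arbitrary threshold, not the paper's calibrated one:

```python
def parametric_response_map(pre, post, threshold):
    """Classify each voxel as +1 (increase), -1 (decrease), or 0
    (no change) from the post-pre difference vs. a threshold."""
    return [1 if (b - a) > threshold else -1 if (a - b) > threshold else 0
            for a, b in zip(pre, post)]

def misclassification_rate(prm_truth, prm_test):
    """Fraction of voxels whose PRM class changed, e.g. due to
    registration error in the perturbed map."""
    disagree = sum(1 for u, v in zip(prm_truth, prm_test) if u != v)
    return disagree / len(prm_truth)
```

Deforming the post-treatment map before classification and comparing against the undeformed ground truth reproduces, in miniature, the RE-induced misclassification analysis described above.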
NASA Technical Reports Server (NTRS)
Guerreiro, Nelson M.; Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Lewis, Timothy A.
2016-01-01
A loss-of-separation (LOS) is said to occur when two aircraft are spatially too close to one another. A LOS is the fundamental unsafe event to be avoided in air traffic management and conflict detection (CD) is the function that attempts to predict these LOS events. In general, the effectiveness of conflict detection relates to the overall safety and performance of an air traffic management concept. An abstract, parametric analysis was conducted to investigate the impact of surveillance quality, level of intent information, and quality of intent information on conflict detection performance. The data collected in this analysis can be used to estimate the conflict detection performance under alternative future scenarios or alternative allocations of the conflict detection function, based on the quality of the surveillance and intent information under those conditions. Alternatively, this data could also be used to estimate the surveillance and intent information quality required to achieve some desired CD performance as part of the design of a new separation assurance system.
Askin, Amanda Christine; Barter, Garrett; West, Todd H.; Manley, Dawn Kataoka
2015-02-14
Here, we present a parametric analysis of factors that can influence advanced fuel and technology deployments in U.S. Class 7–8 trucks through 2050. The analysis focuses on the competition between traditional diesel trucks, natural gas vehicles (NGVs), and ultra-efficient powertrains. Underlying the study is a vehicle choice and stock model of the U.S. heavy-duty vehicle market. Moreover, the model is segmented by vehicle class, body type, powertrain, fleet size, and operational type. We find that conventional diesel trucks will dominate the market through 2050, but NGVs could have significant market penetration depending on key technological and economic uncertainties. Compressed natural gas trucks conducting urban trips in fleets that can support private infrastructure are economically viable now and will continue to gain market share. Ultra-efficient diesel trucks, exemplified by the U.S. Department of Energy's SuperTruck program, are the preferred alternative in the long haul segment, but could compete with liquefied natural gas (LNG) trucks if the fuel price differential between LNG and diesel increases. However, the greatest impact in reducing petroleum consumption and pollutant emissions is had by investing in efficiency technologies that benefit all powertrains, especially the conventional diesels that comprise the majority of the stock, instead of incentivizing specific alternatives.
Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.
2012-01-01
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408
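The full model in this abstract requires varying-dimensional MCMC via partial collapse; as a much simpler illustration of the underlying idea, the sketch below profiles the Poisson log-likelihood over a single candidate change point in a sequence of counts. This is not the authors' sampler, and the single-change-point restriction is an assumption made for brevity:

```python
import math

def poisson_single_changepoint(counts):
    """Return the index k maximizing the two-segment Poisson
    log-likelihood, i.e. the MLE of a single change point in the rate."""
    def seg_ll(seg):
        # Log-likelihood of a constant-rate segment at its MLE rate,
        # dropping the data-dependent constant sum(log(y_i!)).
        n, s = len(seg), sum(seg)
        if s == 0:
            return 0.0
        lam = s / n
        return s * math.log(lam) - n * lam
    best_k, best_ll = None, -float("inf")
    for k in range(1, len(counts)):       # both segments non-empty
        ll = seg_ll(counts[:k]) + seg_ll(counts[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```

Extending this to an unknown number of change points is exactly what makes the dimension of the model vary, motivating the partially collapsed MCMC approach described above.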
Casellas, J; Tarrés, J; Piedrafita, J; Varona, L
2006-10-01
Given that correct assumptions on the baseline survival function are determinant for the validity of further inferences, specific tools to test the fit of a model to real data become essential in proportional hazards models. In this sense, we have proposed a parametric bootstrap to test the fit of survival models. Monte Carlo simulations are used to generate new data sets from the estimates obtained through the assumed models, and then bootstrap intervals can be established for the survival function along the time space studied. Significant fitting deficiencies are revealed when the real survival function is not included within the bootstrap interval. We tested this procedure in a survival data set of Bruna dels Pirineus beef calves, assuming 4 parametric models (exponential, Weibull, exponential time-dependent, Weibull time-dependent) and the Cox's semiparametric model. Fitting deficiencies were not observed for the Cox's model and the exponential time-dependent model, whereas the Weibull time-dependent model suffered from moderate overestimation at different ages. Thus, the exponential time-dependent model appears to be preferable because of its correct fit for survival data of beef calves and its smaller computational and time requirements. Exponential and Weibull models were completely rejected due to the continuous over- and underestimation of the survival probability reported. Results here highlighted the flexibility of parametric models with time-dependent effects, achieving a fit comparable to nonparametric models.
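The proposed parametric bootstrap can be sketched for the simplest case, an exponential survival model: fit the rate by maximum likelihood, simulate new data sets from the fitted model, refit each, and take pointwise quantiles of the refitted survival curves as the bootstrap interval. The exponential choice and all names are illustrative assumptions; the paper also fits Weibull, time-dependent, and Cox models:

```python
import math
import random

def fit_exponential(times):
    """MLE of the exponential rate: lambda_hat = n / sum(t)."""
    return len(times) / sum(times)

def bootstrap_survival_band(times, t_grid, n_boot=1000, alpha=0.05, seed=0):
    """Parametric bootstrap band for S(t) = exp(-lambda * t): simulate
    samples from the fitted model, refit, take pointwise quantiles."""
    rng = random.Random(seed)
    lam = fit_exponential(times)
    n = len(times)
    curves = []
    for _ in range(n_boot):
        lam_b = fit_exponential([rng.expovariate(lam) for _ in range(n)])
        curves.append([math.exp(-lam_b * t) for t in t_grid])
    lo, hi = [], []
    for j in range(len(t_grid)):
        col = sorted(c[j] for c in curves)
        lo.append(col[int(alpha / 2 * n_boot)])
        hi.append(col[int((1 - alpha / 2) * n_boot) - 1])
    return lo, hi
```

As in the abstract, a fitting deficiency is flagged when an empirical survival estimate (e.g. Kaplan-Meier) falls outside the bootstrap band over part of the time axis.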
Fields, S.R.
1980-11-26
The generation of the response spectra was coupled to a parametric and sensitivity analysis. Support accelerations and tiedown forces are presented as functions of time. The parametric analysis found that the horizontal acceleration of the support and the MAR (max absolute relative) horizontal acceleration are relatively insensitive, while the corresponding vertical accelerations are highly sensitive to changes in 4 of the 13 parameters, and the corresponding rotational accelerations are highly sensitive to changes in 8 of the 13 parameters. The tiedown forces are moderately sensitive to changes in 3 of the parameters. (DLC)
Real time analysis of multichannel data in tokamaks
NASA Astrophysics Data System (ADS)
Wijnands, T.; Parlange, F.; Couturier, B.; Moulin, D.
1996-10-01
Four different techniques for the fast analysis of multichannel data in plasma physics are discussed. All four of these techniques are general and sufficiently fast to be used in real time applications. Function parametrization, canonical correlation analysis and a neural network of the multilayer perceptron (MLP) type are compared with a unique linear mapping based on a singular value decomposition, which is used as a reference. Applications deal with the identification of the plasma boundary and some global plasma parameters in the DIII-D and the Tore Supra tokamaks by using magnetic measurements. The results of an MLP-1 neural network, employed for the real time plasma position determination in Tore Supra, are presented.
Parametric analysis of the ARIES-III D-{sup 3}He tokamak reactor
Bathke, C.G.; Werley, K.A.; Miller, R.L.; Krakowski, R.A.; Santarius, J.F.
1993-08-01
The multi-institutional ARIES study has completed a series of conceptual designs of tokamak fusion reactors that varies the assumed advances in technology and physics. The ARIES-III design uses a D- {sup 3}He fuel cycle and requires significant advances in physics to enhance economic attractiveness. The optimal design was characterized through systems analysis for eventual conceptual engineering design. Results from the systems analysis are summarized, and a comparison with the other ARIES designs is included.
NASA Astrophysics Data System (ADS)
Nguyen, Frédéric; Hermans, Thomas
2015-04-01
Inversion of time-lapse resistivity data allows obtaining 'snapshots' of changes occurring in monitored systems for applications such as aquifer storage, geothermal heat exchange, site remediation or tracer tests. Based on these snapshots, one can infer qualitative information on the location and morphology of changes occurring in the subsurface, but also quantitative estimates of the degree of change in certain properties such as temperature or total dissolved solid content. Analysis of these changes can provide direct insight into flow and transport and associated processes and controlling parameters. However, the reliability of the analysis is dependent on survey geometry, measurement schemes, data error, and regularization. Survey design parameters may be optimized prior to the monitoring survey. Regularization, on the other hand, may be chosen depending on available information collected during the monitoring. Common approaches consider smoothing model changes both in space and time, but a sharp temporal anomaly is often needed, for example in fractured aquifers. Here we propose to use an alternative regularization approach based on minimum gradient support (MGS) (Zhdanov, 2002) for time-lapse surveys, which focuses the changes in the tomogram snapshots. MGS limits the occurrence of changes in electrical resistivity but also restricts the variation of these changes inside the different zones. A difficulty commonly encountered by practitioners with this type of regularization is the choice of an additional parameter, the so-called β, required to define the MGS functional. To the best of our knowledge, there is no commonly accepted or standard methodology to optimize the MGS parameter β. The inversion algorithm used in this study is CRTomo (Kemna 2000). It uses a Gauss-Newton scheme to iteratively minimize an objective function which consists of a data misfit functional and a model constraint functional. A univariate line search is performed.
Arahira, Shin; Murai, Hitoshi; Sasaki, Hironori
2016-08-22
In this paper we report the generation of wavelength-division-multiplexed, time-bin entangled photon pairs using cascaded second-order optical nonlinearities (sum-frequency generation and subsequent spontaneous parametric downconversion) in a periodically poled LiNbO_{3} device. Visibilities of approximately 94% were clearly observed in two-photon interference experiments for all the wavelength-multiplexed channels under investigation (five pairs), with insensitivity to the polarization states of the photon pairs. We also evaluated performance for quantum-key-distribution (QKD) applications by using four single-photon detectors, which enables proper evaluation of the QKD performance. The results showed long-term stability over 70 hours, maintaining a quantum error rate of approximately 3% and a sifted key rate of 110 bit/s.
Farcomeni, Alessio; Serranti, Silvia; Bonifazi, Giuseppe
2008-01-01
Glass ceramic detection in glass recycling plants represents a still unsolved problem, as glass ceramic material looks like normal glass and is usually detected only by specialized personnel. The presence of glass-like contaminants inside waste glass products, resulting from both industrial and differentiated urban waste collection, increases process production costs and reduces final product quality. In this paper an innovative approach for glass ceramic recognition, based on the non-parametric analysis of infrared spectra, is proposed and investigated. The work was specifically addressed to the spectral classification of glass and glass ceramic fragments collected in an actual recycling plant from three different production lines: flat glass, colored container-glass and white container-glass. The analyses, carried out in the near and mid-infrared (NIR-MIR) spectral field (1280-4480 nm), show that glass ceramic and glass fragments can be recognized by applying a wavelet transform, with a small classification error. Moreover, a method for selecting only a small subset of relevant wavelength ratios is suggested, allowing fast recognition of the two classes of materials. The results show how the proposed approach can be utilized to develop a classification engine to be integrated inside a hardware and software sorting architecture for fast "on-line" ceramic glass recognition and separation.
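As a rough illustration of wavelet-based spectral classification, the sketch below applies a hand-rolled Haar transform to synthetic stand-ins for glass and glass-ceramic spectra and classifies by nearest centroid. The real analysis used measured NIR-MIR spectra and selected wavelength ratios, so every signal and name here is hypothetical.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def features(spectrum, levels=3):
    """Detail energy per decomposition level as a compact spectral descriptor."""
    feats, a = [], spectrum
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))
    return np.array(feats)

# Synthetic stand-ins: "glass" is smooth, "ceramic" carries fine structure.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
glass = [np.exp(-t) + 0.01 * rng.standard_normal(64) for _ in range(20)]
ceramic = [np.exp(-t) + 0.3 * np.sin(40 * t) + 0.01 * rng.standard_normal(64)
           for _ in range(20)]

centroids = {lab: np.mean([features(s) for s in group], axis=0)
             for lab, group in [("glass", glass), ("ceramic", ceramic)]}

def classify(spectrum):
    f = features(spectrum)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

print(classify(np.exp(-t) + 0.3 * np.sin(40 * t)))  # expected: ceramic
```

The detail-energy features play the role of the small subset of discriminative wavelet quantities the abstract mentions; a production classifier would of course be trained and validated on real spectra.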
Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; Sutton, Michael A.
2016-01-25
Dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e., delayed equilibrium conditions can be quantified.
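The superposition of end-measured stress and integrated inertia stress described above can be sketched in one dimension, assuming the axial momentum balance dσ/dx = ρa integrated from the impact end; the variable names and numerical values are illustrative only.

```python
import numpy as np

def fullfield_stress(x, rho, accel, sigma_end):
    """Axial stress from a measured acceleration field.

    Integrates the 1-D momentum balance d(sigma)/dx = rho * a along the
    specimen axis, then adds the stress measured at the near end, mirroring
    the superposition described in the abstract.
    """
    integrand = rho * accel
    # cumulative trapezoidal integral of rho*a, starting at zero
    inertia = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
    return sigma_end + inertia

# Sanity check: uniform density and acceleration give a linear stress profile.
x = np.linspace(0.0, 0.05, 51)     # 50 mm specimen
rho = np.full_like(x, 200.0)       # kg/m^3, representative of rigid foam
a = np.full_like(x, 1.0e4)         # m/s^2
sigma = fullfield_stress(x, rho, a, sigma_end=-1.0e5)
print(sigma[-1] - sigma[0])        # rho * a * L = 200 * 1e4 * 0.05 = 1.0e5 Pa
```

In the experiment, `rho` would come from the mass-conservation density update and `a` from twice-differentiated DIC displacement fields at each time step.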
Peng, Bo; Kowalski, Karol
2016-12-23
In this paper we derive basic properties of the Green’s function matrix elements stemming from the exponential coupled cluster (CC) parametrization of the ground-state wave function. We demonstrate that all intermediates used to express the retarded (or equivalently, ionized) part of the Green’s function in the ω-representation can be expressed through connected diagrams only. Similar properties are also shared by the first order ω-derivatives of the retarded part of the CC Green’s function. This property can be extended to any order ω-derivatives of the Green’s function. Through the Dyson equation of the CC Green’s function, the derivatives of the corresponding CC self-energy can be evaluated analytically. In analogy to the CC Green’s function, the corresponding CC self-energy is expressed in terms of connected diagrams only. Moreover, the ionized part of the CC Green’s function satisfies the non-homogeneous linear system of ordinary differential equations, whose solution may be represented in the exponential form. Our analysis can be easily generalized to the advanced part of the CC Green’s function.
Deng, Xingqiao; Potula, S; Grewal, H; Solanki, K N; Tschopp, M A; Horstemeyer, M F
2013-06-01
In this study, we investigated and assessed the dependence of dummy head injury mitigation on the side curtain airbag and occupant distance under a side impact of a Dodge Neon. Full-scale finite element vehicle simulations of a Dodge Neon with a side curtain airbag were performed to simulate the side impact. Owing to the wide range of parameters, an optimal matrix of finite element calculations was generated using the design of experiments (DOE) method; the DOE method was performed to independently screen the finite element results and yield the desired parametric influences as outputs. Also, analysis of variance (ANOVA) techniques were used to analyze the finite element results data. The results clearly show that the moving deformable barrier (MDB) strike velocity was the strongest influence parameter in both cases for the head injury criteria (HIC36) and the peak head acceleration, followed by the initial airbag inlet temperature. Interestingly, the initial airbag inlet temperature was only a ~30% smaller influence than the MDB velocity; also, the trigger time was a ~54% smaller influence than the MDB velocity when considering the peak head accelerations. Considering the wide range in MDB velocities used in this study, results of the study present an opportunity for design optimization using the different parameters to help mitigate occupant injury. As such, the initial airbag inlet temperature, the trigger time, and the airbag pressure should be incorporated into the vehicular design process when optimizing for the head injury criteria.
ERIC Educational Resources Information Center
Guccio, Calogero; Martorana, Marco Ferdinando; Mazza, Isidoro
2017-01-01
The paper assesses the evolution of efficiency of Italian public universities for the period 2000-2010. It aims at investigating whether their levels of efficiency showed signs of convergence, and if the well-known disparity between northern and southern regions decreased. For this purpose, we use a refinement of data envelopment analysis, namely…
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Time series analysis of injuries.
Martinez-Schnell, B; Zaidi, A
1989-12-01
We used time series models in the exploratory and confirmatory analysis of selected fatal injuries in the United States from 1972 to 1983. We built autoregressive integrated moving average (ARIMA) models for monthly, weekly, and daily series of deaths and used these models to generate hypotheses. These deaths resulted from six causes of injuries: motor vehicles, suicides, homicides, falls, drownings, and residential fires. For each cause of injury, we estimated calendar effects on the monthly death counts. We confirmed the significant effect of vehicle miles travelled on motor vehicle fatalities with a transfer function model. Finally, we applied intervention analysis to deaths due to motor vehicles.
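As a minimal stand-in for the ARIMA modeling described above (not the authors' actual models), the sketch below removes a calendar effect from a synthetic monthly death-count series by subtracting monthly means and then fits an AR(1) coefficient by least squares to the residual.

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1) fit: y[t] - m = phi * (y[t-1] - m) + e[t]."""
    z = y - y.mean()
    phi = np.dot(z[1:], z[:-1]) / np.dot(z[:-1], z[:-1])
    return phi

# Synthetic 12-year monthly series: a seasonal (calendar) effect, as one
# would expect for, say, drownings, plus AR(1) noise with phi = 0.6.
rng = np.random.default_rng(3)
months = np.arange(144)
season = 10 * np.sin(2 * np.pi * months / 12)
noise = np.zeros(144)
for t in range(1, 144):
    noise[t] = 0.6 * noise[t - 1] + rng.standard_normal()
y = 100 + season + noise

# Estimate the calendar effect by monthly means, then model the residual.
monthly_mean = np.array([y[months % 12 == m].mean() for m in range(12)])
resid = y - monthly_mean[months % 12]
phi = fit_ar1(resid)
print(round(phi, 2))  # recovers a value near the true 0.6
```

A full ARIMA or transfer-function analysis (e.g., regressing fatalities on vehicle miles travelled, or intervention analysis) adds differencing, moving-average terms, and exogenous inputs on top of this autoregressive core.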
Introduction to Time Series Analysis
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1986-01-01
The field of time series analysis is explored from its logical foundations to the most modern data analysis techniques. The presentation is developed, as far as possible, for continuous data, so that the inevitable use of discrete mathematics is postponed until the reader has gained some familiarity with the concepts. The monograph seeks to provide the reader with both the theoretical overview and the practical details necessary to correctly apply the full range of these powerful techniques. In addition, the last chapter introduces many specialized areas where research is currently in progress.
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines given for using the methods.
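A representative nonparametric procedure of the kind surveyed above is the Wilcoxon rank-sum (Mann-Whitney) test; the sketch below implements its normal approximation in plain NumPy (no tie handling) as a distribution-free counterpart to the two-sample t-test.

```python
import numpy as np

def rank_sum_test(x, y):
    """Wilcoxon rank-sum test with a normal approximation.

    Only the ranks of the pooled data are used, so no normality assumption
    is made about either sample. Ties are not handled in this sketch.
    """
    pooled = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(pooled)) + 1.0   # ranks 1..n
    n1, n2 = len(x), len(y)
    W = ranks[:n1].sum()                           # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0                  # null mean of W
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return W, z

rng = np.random.default_rng(4)
same = rank_sum_test(rng.normal(0, 1, 40), rng.normal(0, 1, 40))
shifted = rank_sum_test(rng.normal(0, 1, 40), rng.normal(1.0, 1, 40))
print(abs(same[1]), abs(shifted[1]))  # |z| small when equal, large when shifted
```

Because only ranks enter the statistic, the test keeps its nominal level under heavy-tailed or skewed data where the t-test's normality assumption fails, at a modest cost in power when the data really are Gaussian.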
Parametric analysis of response interruption and redirection as treatment for stereotypy.
Saini, Valdeep; Gregory, Meagan K; Uran, Kirstin J; Fantetti, Michael A
2015-01-01
Response interruption and redirection (RIRD), a procedure in which demands are delivered contingent on stereotypy, has been shown to reduce vocal and motor stereotypy maintained by automatic reinforcement. However, RIRD can be time consuming and can interrupt ongoing activities and access to reinforcement for appropriate behavior. We attempted to address these limitations by comparing the effectiveness of RIRD using the standard 3-demand procedure to RIRD using just 1 demand. Results showed that RIRD with 1 demand was effective in reducing stereotypy for all participants, required fewer demands overall, and resulted in shorter implementation time. In addition, 2 participants showed an increase in appropriate play during RIRD. These results suggest RIRD with 1 demand may be an effective and less intrusive procedure for reducing stereotypy.
Shen, Chia-Hsuan; Choy, Fred K; Chen, Yuerong; Wang, Shengyong
2013-07-01
In the present work, a modularized approach to computer-aided auscultation based on the traditional cardiac auscultation of murmur is proposed. Under such an approach, the present paper concerns the task of evaluating murmur acoustic quality character. The murmurs were analyzed in their time-series representation, frequency representation as well as time-frequency representation, allowing extraction of interpretable features based on their signal structural and spectral characters. The features were evaluated using scatter plots, receiver operating characteristic (ROC) curves, and numerical experiments using a KNN classifier. Possible physiological and hemodynamic associations with the feature set are discussed. The implication and advantage of the modular approach are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rayleigh-type parametric chemical oscillation.
Ghosh, Shyamolina; Ray, Deb Shankar
2015-09-28
We consider a nonlinear chemical dynamical system of two phase space variables in a stable steady state. When the system is driven by a time-dependent sinusoidal forcing of a suitable scaling parameter at a frequency twice the output frequency and the strength of perturbation exceeds a threshold, the system undergoes sustained Rayleigh-type periodic oscillation, well known for parametric oscillation in pipe organs and distinct from the usual forced quasiperiodic oscillation of a damped nonlinear system, where the system is oscillatory even in the absence of any external forcing. Our theoretical analysis of the parametric chemical oscillation is corroborated by full numerical simulation of two well known models of chemical dynamics, chlorite-iodine-malonic acid and iodine-clock reactions.
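The threshold behavior described above can be illustrated with a generic damped oscillator pumped at twice its natural frequency, a Mathieu-type toy model rather than the chemical schemes themselves; for this model parametric growth sets in roughly when ε·ω0/4 exceeds the damping γ.

```python
import numpy as np

def simulate(eps, gamma=0.05, omega0=1.0, dt=0.001, T=60.0):
    """Damped oscillator with its stiffness modulated at twice omega0.

    x'' + 2*gamma*x' + omega0**2 * (1 + eps*sin(2*omega0*t)) * x = 0
    Principal parametric resonance: growth when eps*omega0/4 > gamma.
    Returns |x| + |v| at the final time as a crude amplitude measure.
    """
    x, v = 1.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        a = -2 * gamma * v - omega0**2 * (1 + eps * np.sin(2 * omega0 * t)) * x
        # semi-implicit Euler keeps the oscillation energy well-behaved
        v += a * dt
        x += v * dt
    return abs(x) + abs(v)

below = simulate(eps=0.1)   # 0.1/4 < gamma: perturbation decays
above = simulate(eps=0.4)   # 0.4/4 > gamma: sustained parametric growth
print(below, above)
```

In the chemical setting the modulated quantity is a scaling parameter of the reaction kinetics rather than a mechanical stiffness, but the twice-the-output-frequency pumping and the threshold condition are the same phenomenon.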
The LTS timing analysis program
Armstrong, Darrell Jewell; Schwarz, Jens
2013-08-01
The LTS Timing Analysis program described in this report uses signals from the Tempest Lasers, Pulse Forming Lines, and Laser Spark Detectors to carry out calculations to quantify and monitor the performance of the Z-Accelerator's laser-triggered SF6 switches. The program analyzes Z-shots beginning with Z2457, when Laser Spark Detector data became available for all lines.
Washout allometric reference method (WARM) for parametric analysis of [(11)C]PIB in human brains.
Rodell, Anders; Aanerud, Joel; Braendgaard, Hans; Gjedde, Albert
2013-01-01
Rapid clearance and disappearance of a tracer from the circulation challenges the determination of the tracer's binding potentials in brain (BP ND) by positron emission tomography (PET). This is the case for the analysis of the binding of radiolabeled [(11)C]Pittsburgh Compound B ([(11)C]PIB) to amyloid-β (Aβ) plaques in brain of patients with Alzheimer's disease (AD). To resolve the issue of rapid clearance from the circulation, we here introduce the flow-independent Washout Allometric Reference Method (WARM) for the analysis of washout and binding of [(11)C]PIB in two groups of human subjects, healthy aged control subjects (HC), and patients suffering from AD, and we compare the results to the outcome of two conventional analysis methods. We also use the rapid initial clearance to obtain a surrogate measure of the rate of cerebral blood flow (CBF), as well as a method of identifying a suitable reference region directly from the [(11)C]PIB signal. The difference of average absolute CBF values between the AD and HC groups was highly significant (P < 0.003). The CBF measures were not significantly different between the groups when normalized to cerebellar gray matter flow. Thus, when flow differences confound conventional measures of [(11)C]PIB binding, the separate estimates of CBF and BP ND provide additional information about possible AD. The results demonstrate the importance of data-driven estimation of CBF and BP ND, as well as reference region detection from the [(11)C]PIB signal. We conclude that the WARM method yields stable measures of BP ND with relative ease, using only integration for noise reduction and no model regression. The method accounts for relative flow differences in the brain tissue and yields a calibrated measure of absolute CBF directly from the [(11)C]PIB signal. Compared to conventional methods, WARM optimizes the Aβ plaque load discrimination between patients with AD and healthy controls (P = 0.009).
Hybrid optimization for 13C metabolic flux analysis using systems parametrized by compactification
Yang, Tae Hoon; Frick, Oliver; Heinzle, Elmar
2008-01-01
Background The importance and power of isotope-based metabolic flux analysis and its contribution to understanding the metabolic network is increasingly recognized. Its application is, however, still limited partly due to computational inefficiency. 13C metabolic flux analysis aims to compute in vivo metabolic fluxes in terms of metabolite balancing extended by carbon isotopomer balances and involves a nonlinear least-squares problem. To solve the problem more efficiently, improved numerical optimization techniques are necessary. Results For flux computation, we developed a gradient-based hybrid optimization algorithm. Here, independent flux variables were compactified into [0, 1)-ranged variables using a single transformation rule. The compactified parameters could be discriminated between non-identifiable and identifiable variables after model linearization. The developed hybrid algorithm was applied to the central metabolism of Bacillus subtilis with only succinate and glutamate as carbon sources. This creates difficulties caused by symmetry of succinate leading to limited introduction of 13C labeling information into the system. The algorithm was found to be superior to its parent algorithms and to global optimization methods both in accuracy and speed. The hybrid optimization with tolerance adjustment quickly converged to the minimum with close to zero deviation and exactly re-estimated flux variables. In the metabolic network studied, some fluxes were found to be either non-identifiable or nonlinearly correlated. The non-identifiable fluxes could correctly be predicted a priori using the model identification method applied, whereas the nonlinear flux correlation was revealed only by identification runs using different starting values a posteriori. Conclusion This fast, robust and accurate optimization method is useful for high-throughput metabolic flux analysis, a posteriori identification of possible parameter correlations, and also for Monte Carlo
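The compactification step can be illustrated with one plausible transformation rule; the paper does not spell out its exact rule here, so this particular mapping is an assumption: a non-negative flux v in [0, ∞) is mapped to u = v/(1+v) in [0, 1), letting a bounded optimizer search unbounded flux space.

```python
import numpy as np

def compactify(v):
    """Map an unbounded non-negative flux v in [0, inf) to u in [0, 1)."""
    return v / (1.0 + v)

def decompactify(u):
    """Inverse map: recover the flux from its compactified coordinate."""
    return u / (1.0 - u)

# Round trip and bounds check across several orders of magnitude.
v = np.array([0.0, 0.1, 1.0, 10.0, 1000.0])
u = compactify(v)
print(u)                 # all values lie in [0, 1)
print(decompactify(u))   # recovers v

# A gradient-based optimizer can now draw starts from the unit cube:
rng = np.random.default_rng(0)
start_fluxes = decompactify(rng.uniform(0.0, 0.99, size=5))
```

The appeal of such a single transformation rule is that box constraints on the search variables are enforced automatically, and after linearization the [0, 1)-ranged parameters can be screened for identifiability as the abstract describes.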
Computational Parametric Analysis of Mechanical Behaviors of Celotex Implanted with Glue Plates
Gong, C.
2001-02-20
The purpose of this analysis of Celotex implanted with glue plates is two-fold: first, to identify the cause of the initial stress peak in the pseudo engineering stress-strain curve in the dynamic impact test in which the impact is loaded in the orientation parallel to the plane of the glue; second, to derive from the existing static mechanical properties the true constitutive properties of the Celotex under dynamic impact and other environmental conditions, such as warm (250 degrees Fahrenheit), wet (100 percent relative humidity), cold (minus 40 degrees Fahrenheit), and desiccated.
Dynamics of gas and particulate clouds: parametric analysis of cloud motion
NASA Astrophysics Data System (ADS)
Anderson, Mark E.; Larsen, Jeremy C.; Cornelsen, Scott S.; Call, Seth T.; Stokes, Scott T.; Earl, Curtis L.; Hayes, Travis M.; Wilkerson, Thomas D.
2004-09-01
This paper describes a project on automating the interpretation of cloud images recorded during several types of atmospheric observations: (1) dust clouds generated by controlled explosions, (2) chemical releases of infrared-active gases, and (3) lidar measurements of cloud altitude winds. This program began with a basic cloud tracking system for lidar comparisons, which has since been upgraded. We describe automated methods for tracking clouds of relatively constant shape, segmenting time-dependent clouds and plumes from scenic backgrounds, characterizing cloud and plume shapes, and measuring the speed and direction of cloud motion. Dust clouds were created by fireworks, releases of pressurized aerosols and by propane-driven blast tubes. Chemical clouds of organic vapors were created by evaporation or with pressurized balloon releases. Cloud imagery for particle releases was recorded primarily with a pair of visible video cameras. The chemical clouds were imaged with a high framing rate infrared camera in the 2.5 - 3.5 micron region. Current project goals include an end-to-end system for cloud warnings, wind measurement, and dispersion predictions in real time.
NASA Astrophysics Data System (ADS)
Yusha, V. L.; Busarov, S. S.; Vasil'ev, V. K.; Gromov, A. Yu.; Nedovenchanyj, A. V.
2017-08-01
A computational and parametric analysis of the operating-process efficiency in an airless, slow-speed, long-stroke stage of a medium-pressure compressor unit was carried out. The influence of the main structural and mode parameters of the stage on the discharge temperature, the indicator efficiency, and the feed rate is considered. It is shown that cycle time, cylinder diameter and stroke have a significant influence on the economic efficiency of the operating process and the temperature mode of the stage, and can be regarded as optimization parameters when developing a reciprocating stage of this type.
Karathanasis, Nestoras; Tsamardinos, Ioannis
2016-01-01
Background The advance of omics technologies has made it possible to measure several data modalities on a system of interest. In this work, we illustrate how the Non-Parametric Combination methodology, namely NPC, can be used for simultaneously assessing the association of different molecular quantities with an outcome of interest. We argue that NPC methods have several potential applications in integrating heterogeneous omics technologies, as for example identifying genes whose methylation and transcriptional levels are jointly deregulated, or finding proteins whose abundance shows the same trends as the expression of their encoding genes. Results We implemented the NPC methodology within “omicsNPC”, an R function specifically tailored for the characteristics of omics data. We compare omicsNPC against a range of alternative methods on simulated as well as on real data. Comparisons on simulated data point out that omicsNPC produces unbiased / calibrated p-values and performs equally or significantly better than the other methods included in the study; furthermore, the analysis of real data shows that omicsNPC (a) exhibits higher statistical power than other methods, (b) is easily applicable in a number of different scenarios, and (c) its results have improved biological interpretability. Conclusions The omicsNPC function behaves competitively in all comparisons conducted in this study. Taking into account that the method (i) requires minimal assumptions, (ii) can be used on different study designs and (iii) captures the dependences among heterogeneous data modalities, omicsNPC provides a flexible and statistically powerful solution for the integrative analysis of different omics data. PMID:27812137
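A minimal sketch of the NPC idea follows, assuming Fisher's combining function and rank-based partial p-values; the actual omicsNPC implementation (in R) offers several combining functions and study designs, so this Python fragment is only a conceptual illustration.

```python
import numpy as np

def npc_fisher(stat_matrix):
    """Combine per-modality permutation statistics with Fisher's function.

    stat_matrix has shape (n_permutations + 1, n_modalities); row 0 holds
    the statistics for the observed data, the remaining rows those for
    permuted data. Partial p-values are computed within each column,
    combined with T = -2 * sum(log p), and the combined p-value is the
    permutation tail probability of T.
    """
    n_rows = stat_matrix.shape[0]
    partial = np.empty_like(stat_matrix, dtype=float)
    for j in range(stat_matrix.shape[1]):
        col = stat_matrix[:, j]
        # rank-based partial p-values: larger statistic -> smaller p
        partial[:, j] = [(col >= col[i]).mean() for i in range(n_rows)]
    T = -2.0 * np.log(partial).sum(axis=1)
    return (T >= T[0]).mean()

# Two modalities (say, methylation and expression statistics for one gene),
# with observed values clearly above the permutation null in both.
rng = np.random.default_rng(5)
perm_stats = rng.standard_normal((999, 2))
observed = np.array([[3.0, 3.0]])
p = npc_fisher(np.vstack([observed, perm_stats]))
print(p)  # small combined p-value
```

The key property, emphasized in the abstract, is that the same permutations are reused across modalities, so dependence between the data types is captured without modeling it explicitly.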
NASA Astrophysics Data System (ADS)
Chen, Hua; Zhang, Jin; Fan, Zebin; Peng, Jinhui; Ju, Shaohua
2017-09-01
Microwave-assisted heating technology has become a popular alternative to conventional heating technologies because of its many advantages. However, the matching performance of a microwave heating system is of particular concern because it provides an important index of the utilization efficiency of microwave energy. In this work, a new microwave heating system is first designed using the theory of optical resonators. A comprehensive analysis of the mutual coupling of highly sensitive geometrical and material parameters is then carried out for this new microwave heating system at 2.45 GHz. It is demonstrated that the thickness of the materials dramatically influences the microwave energy absorption efficiency and should be carefully considered and perhaps given priority. Moreover, it is shown that matching performance is best at a titanium-concentrate thickness of about 0.075 m.
Parametric analysis of stand-alone residential photovoltaic systems and the SOLSTOR simulation model
NASA Astrophysics Data System (ADS)
Caskey, D. L.; Aronson, E. A.; Murphy, K. D.
Grid-connected residential photovoltaic (PV) systems have been studied in great detail during the past few years. However, stand-alone systems have received considerably less attention. This paper describes the results of an evaluation of the economic feasibility of stand-alone systems. The SOLSTOR simulation program, developed by Sandia, was the primary analysis tool. The results indicate that stand-alone PV systems offer considerable economic advantage over the fossil-fueled generator systems. This is true even with no escalation of fuel prices, with PV array costs of twice the 1986 DOE goal, with present day battery costs, and in the Northeast as well as in the Southwest part of the United States. The on-site generator was generally used less than 1400 hours per year, and in fact can be eliminated in many cases in the Southwest.
Midgley, S. L. W.; Olsen, M. K.; Bradley, A. S.; Pfister, O.
2010-11-15
We examine the feasibility of generating continuous-variable multipartite entanglement in an intracavity concurrent downconversion scheme that has been proposed for the generation of cluster states by Menicucci et al. [Phys. Rev. Lett. 101, 130501 (2008)]. By calculating optimized versions of the van Loock-Furusawa correlations we demonstrate genuine quadripartite entanglement and investigate the degree of entanglement present. Above the oscillation threshold the basic cluster state geometry under consideration suffers from phase diffusion. We alleviate this problem by incorporating a small injected signal into our analysis. Finally, we investigate squeezed joint operators. While the squeezed joint operators approach zero in the undepleted regime, we find that this is not the case when we consider the full interaction Hamiltonian and the presence of a cavity. In fact, we find that the decay of these operators is minimal in a cavity, and even depletion alone inhibits cluster state formation.
Performance analysis and parametric optimum criteria of a regeneration Bose-Otto engine
NASA Astrophysics Data System (ADS)
Wang, Hao; Liu, Sanqiu; Du, Jianqiang
2009-05-01
A general regenerative model of the Otto engine cycle working with an ideal Bose gas is used to discuss the influence of quantum degeneracy, regeneration and finite-rate heat transfer on the performance of the cycle. Based on the model, expressions for some important parameters, such as the power output and efficiency of the Bose-Otto engine cycle, are derived analytically. By means of numerical calculation and illustration, the influences of the compression ratio of the two isochoric processes and of the regenerator effectiveness on the performance of the cycle are discussed and evaluated in detail. Moreover, the general optimal performance characteristics of the cycle are revealed. This analysis could provide a general theoretical tool for the optimal design and operation of real power plants.
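For context, the classical ideal-gas Otto cycle that such quantum-working-substance analyses generalize has efficiency η = 1 − r^(1−γ) for compression ratio r. A quick numerical check (the monatomic value γ = 5/3 is an assumption here, not a parameter taken from the paper):

```python
def otto_efficiency(r, gamma=5.0 / 3.0):
    """Classical Otto-cycle efficiency for compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)

# r = 8 with a monatomic ideal gas: eta = 1 - 8**(-2/3) = 1 - 1/4 = 0.75.
eta = otto_efficiency(8.0)
```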
Single sweep analysis of visual evoked potentials through a model of parametric identification.
Cerutti, S; Baselli, G; Liberati, D; Pavesi, G
1987-01-01
An original method is presented for the single-sweep analysis of visual evoked potentials (VEPs). The algorithm is based on AutoRegressive with eXogenous input (ARX) modeling. A least-squares procedure estimates the coefficients of the model and yields a complete black-box description of the signal-generation mechanism, besides providing a filtered version of the single-sweep potential. The performance of the algorithm is verified on appropriate simulation tests, and the experimental results show a noticeable improvement in signal-to-noise ratio, with a consequently better recognition of the classical peak parameters (latencies and amplitudes). The possibility of measuring these parameters on a single-sweep basis makes it possible to evaluate the dynamics of the central nervous system response during the entire course of the examination. A classification of the estimated evoked potentials into a small number of subsets, on the basis of their morphology, is also possible.
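The ARX least-squares step can be sketched for the simplest first-order case, y[t] = a·y[t−1] + b·u[t] + e[t]; the model order, input signal, and coefficient values below are invented for illustration and are not the paper's:

```python
def fit_arx1(y, u):
    """Least-squares fit of y[t] = a*y[t-1] + b*u[t] via the 2x2 normal equations."""
    s_yy = s_uu = s_yu = s_ty = s_tu = 0.0
    for t in range(1, len(y)):
        s_yy += y[t - 1] * y[t - 1]
        s_uu += u[t] * u[t]
        s_yu += y[t - 1] * u[t]
        s_ty += y[t] * y[t - 1]
        s_tu += y[t] * u[t]
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_ty * s_uu - s_tu * s_yu) / det
    b = (s_tu * s_yy - s_ty * s_yu) / det
    return a, b

# Synthetic sweep driven by a pulse input; true a = 0.8, b = 0.5, noise-free.
u = [1.0 if 5 <= t < 10 else 0.0 for t in range(50)]
y = [0.0]
for t in range(1, 50):
    y.append(0.8 * y[t - 1] + 0.5 * u[t])
a, b = fit_arx1(y, u)
```

With noise-free data the normal equations recover the coefficients exactly; in the single-sweep setting the residual e[t] carries the filtering role described in the abstract.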
Parametric Analysis of PWR Spent Fuel Depletion Parameters for Long-Term-Disposal Criticality Safety
DeHart, M.D.
1999-08-01
Utilization of burnup credit in criticality safety analysis for long-term disposal of spent nuclear fuel allows improved design efficiency and reduced cost due to the large mass of fissile material that will be present in the repository. Burnup-credit calculations are based on depletion calculations that provide a conservative estimate of spent fuel contents (in terms of criticality potential), followed by criticality calculations to assess the value of the effective neutron multiplication factor (k_eff) for a spent fuel cask or a fuel configuration under a variety of probabilistically derived events. In order to ensure that the depletion calculation is conservative, it is necessary to both qualify and quantify assumptions that can be made in depletion models.
Parametric analysis of synthetic aperture radar data for characterization of deciduous forest stands
NASA Technical Reports Server (NTRS)
Wu, Shih-Tseng
1987-01-01
The SAR sensor parameters that affect the estimation of deciduous forest stand characteristics were examined using data sets for the Gulf Coastal Plain region, acquired by the NASA/JPL multipolarization airborne SAR. In the regression analysis, the mean digital-number values of the three polarization data are used as the independent variables to estimate the average tree height (HT), basal area (BA), and total-tree biomass (TBM). The following results were obtained: (1) in the case of simple regression and using 28 plots, vertical-vertical (VV) polarization yielded the largest correlation coefficients (r) in estimating HT, BA, and TBM; (2) in the case of multiple regression, the horizontal-horizontal (HH) and VV polarization combination yielded the largest r value in estimating HT, while the VH and HH polarization combination yielded the largest r values in estimating BA and TBM. With the addition of a third polarization, the increase in r values is insignificant.
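The simple-regression case in the abstract reduces to correlating one polarization channel with a stand characteristic. A minimal sketch with made-up plot values (the study's 28-plot data are not reproduced in the abstract):

```python
import math

def pearson_r(x, y):
    """Correlation coefficient between a radar channel and a stand variable."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical VV digital numbers vs. average tree height (m) for a few plots.
vv = [110, 125, 140, 150, 165, 180]
ht = [12.0, 14.5, 16.0, 17.5, 20.0, 21.5]
r = pearson_r(vv, ht)
```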
Ginsberg, J.H.; Rosenkilde, C.E.
1985-02-01
A program was initiated several years ago with the goal of developing a simulation of the underwater shock response of submarines. This capability was to be employed for systems analysis studies of a variety of tactical parameters, especially the orientation of the shock wave relative to the structure. It was deemed to be acceptable to develop generic analytical models that avoided the details of specific structures, and the number of degrees of freedom was limited by the need for computational efficiency. The present paper shows how these objectives were met by using the modal expansion version of the doubly asymptotic approximation (DAA). This technique is implemented in a modular form that permits progressive enhancements of a basic model. The concept is to partition the system into substructures that are not inertially coupled. Orthogonal assumed modes for the system are readily identified as the modes of free vibration for the isolated substructure. Based on these concepts three variants on a basic model are described; each modification provides the capability to evaluate different aspects of the system. The basic model, which is the original version of CAPS, consists of a rigid cylinder capped by hemispheres which is subjected to a shock wave arriving from an arbitrary heading angle. The REFLECT model extends this fundamental model to treat multiple shock waves associated with reflections from the surface and bottom. This analysis employs a vectorial superposition of the CAPS response for each incident wave. In order to assess the importance of an internal dead load, such as a reactor, the CAPS model was extended. The new version includes a large internal mass suspended from the cylinder by four springs that give equivalent axial, transverse and rotational suspension stiffnesses. The center of mass of this dead load can be located arbitrarily along the axis of symmetry of the cylinder. The TDEF model of structural deformation was the last to be developed.
NASA Technical Reports Server (NTRS)
Turner, R. E.
1977-01-01
For 36 hours during April 1975, an atmospheric variability experiment was conducted. This research effort supported an observational program in which rawinsonde data, radar data, and satellite data were collected from a network of 42 stations east of the Rocky Mountains at intervals of 3 hours. This program presents data with a high degree of time resolution over a spatially and temporally extensive network. Reduction of the experiment data is intended primarily as documentation of the checking and processing of the data and should be useful to prospective users. Various flow diagrams of the data processing procedures are described, and a complete summary of the formulas used in the data processing is provided. A wind computation scheme designed to extract as much detailed wind information as possible from the unique experiment data set is discussed. The accuracy of the thermodynamic and wind data was estimated. Errors in the thermodynamic and wind data are given.
Performance analysis and parametric optimum criteria of an irreversible Bose-Otto engine
NASA Astrophysics Data System (ADS)
Wang, Hao; Liu, Sanqiu; He, Jizhou
2009-04-01
An irreversible cycle model of a Bose-Otto engine is established, in which finite-time thermodynamic processes and the irreversibility resulting from the nonisentropic compression and expansion processes are taken into account. Based on the model, expressions for the power output and efficiency of the Bose-Otto engine are derived. On the basis of the thermodynamic properties of an ideal Bose gas, the effects of the irreversibility and the compression ratio of the two isochoric processes on the performance of the Bose-Otto engine are revealed and some important performance parameters are optimized. Furthermore, some optimal operating regions, including those for the power output, efficiency, and the temperatures of the cyclic working substance at two important state points, are determined and evaluated. Finally, several special cases are discussed in detail.
NASA Astrophysics Data System (ADS)
Little, Duncan A.; Tennyson, Jonathan; Plummer, Martin; Noble, Clifford J.; Sunderland, Andrew G.
2017-06-01
TIMEDELN implements the time-delay method of determining resonance parameters from the characteristic Lorentzian form displayed by the largest eigenvalues of the time-delay matrix. TIMEDELN constructs the time-delay matrix from input K-matrices and analyses its eigenvalues. This new version implements multi-resonance fitting and may be run serially or as a high performance parallel code with three levels of parallelism. TIMEDELN takes K-matrices from a scattering calculation, either read from a file or calculated on a dynamically adjusted grid, and calculates the time-delay matrix. This is then diagonalized, with the largest eigenvalue representing the longest time-delay experienced by the scattering particle. A resonance shows up as a characteristic Lorentzian form in the time-delay: the programme searches the time-delay eigenvalues for maxima and traces resonances when they pass through different eigenvalues, separating overlapping resonances. It also performs the fitting of the calculated data to the Lorentzian form and outputs resonance positions and widths. Any remaining overlapping resonances can be fitted jointly. The branching ratios of decay into the open channels can also be found. The programme may be run serially or in parallel with three levels of parallelism. The parallel code modules are abstracted from the main physics code and can be used independently.
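The Lorentzian search-and-fit stage can be illustrated with a grid-based position/width extraction; TIMEDELN itself performs a proper nonlinear fit (including joint fits of overlapping resonances), and the resonance parameters below are invented:

```python
def lorentzian(e, e_r, gamma):
    """Time-delay profile: peaks at e_r, full width at half maximum = gamma."""
    return (gamma / 2.0) / ((e - e_r) ** 2 + (gamma / 2.0) ** 2)

# Sample the largest time-delay eigenvalue on a fine energy grid.
grid = [i * 0.001 for i in range(2000)]            # 0 .. 2 (arbitrary units)
q = [lorentzian(e, 1.2, 0.05) for e in grid]

peak = max(range(len(q)), key=lambda i: q[i])
e_res = grid[peak]                                 # resonance position

# Width: distance between the two half-maximum crossings around the peak.
half = q[peak] / 2.0
left = next(i for i in range(peak, -1, -1) if q[i] < half)
right = next(i for i in range(peak, len(q)) if q[i] < half)
width = grid[right] - grid[left]                   # approximately gamma
```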
Dimethylsulfide model calibration and parametric sensitivity analysis for the Greenland Sea
NASA Astrophysics Data System (ADS)
Qu, Bo; Gabric, Albert J.; Zeng, Meifang; Xi, Jiaojiao; Jiang, Limei; Zhao, Li
2017-09-01
Sea-to-air fluxes of marine biogenic aerosols have the potential to modify cloud microphysics and regional radiative budgets, and thus moderate Earth's warming. Polar regions play a critical role in the evolution of global climate. In this work, we use a well-established biogeochemical model to simulate the DMS flux from the Greenland Sea (20°W-10°E and 70°N-80°N) for the period 2003-2004. Parameter sensitivity analysis is employed to identify the most sensitive parameters in the model. A genetic algorithm (GA) technique is used for DMS model parameter calibration. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are used to drive the DMS model under 4 × CO2 conditions. DMS flux under quadrupled CO2 levels increases more than 300% compared with late 20th century levels (1 × CO2). Reasons for the increase in DMS flux include changes in the ocean state-namely an increase in sea surface temperature (SST) and loss of sea ice-and an increase in DMS transfer velocity, especially in spring and summer. Such a large increase in DMS flux could slow the rate of warming in the Arctic via radiative budget changes associated with DMS-derived aerosols.
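The GA calibration loop can be sketched generically; the objective function, bounds, and GA settings below are placeholders rather than the DMS model's actual parameters:

```python
import random

def calibrate(objective, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-valued GA: keep the better half, blend-crossover, mutate."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 2]   # lower misfit wins
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]   # blend crossover
            j = rng.randrange(dim)                            # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Toy calibration target: recover parameters (0.3, 0.7) of a quadratic misfit.
best = calibrate(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                 bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the actual calibration the objective would be the misfit between simulated and observed DMS concentrations, evaluated by running the biogeochemical model for each candidate parameter set.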
Data Fusion: A decision analysis tool that quantifies geological and parametric uncertainty
Porter, D.W.
1996-04-01
Engineering projects such as siting waste facilities and performing remediation are often driven by geological and hydrogeological uncertainties. Geological understanding and hydrogeological parameters such as hydraulic conductivity are needed to achieve reliable engineering design. Information from non-invasive and minimally invasive data sets offers potential for reduction in uncertainty, but a single data type does not usually meet all needs. Data Fusion uses Bayesian statistics to update prior knowledge with information from diverse data sets as the data is acquired. Prior knowledge takes the form of first principles models (e.g., groundwater flow) and spatial continuity models for heterogeneous properties. The variability of heterogeneous properties is modeled in a form motivated by statistical physics as a Markov random field. A computer reconstruction of targets of interest is produced within a quantified statistical uncertainty. The computed uncertainty provides a rational basis for identifying data gaps and for assessing data worth to optimize data acquisition. Further, the computed uncertainty provides a way to determine the confidence of achieving adequate safety margins in engineering design. Beyond design, Data Fusion provides the basis for real-time computer monitoring of remediation. Working with the DOE Office of Technology (OTD), the author has developed and patented a Data Fusion Workstation system that has been used on jobs at the Hanford, Savannah River, Pantex and Fernald DOE sites. Further applications include an army depot at Letterkenny, PA and commercial industrial sites.
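The Bayesian updating at the core of Data Fusion can be illustrated with the simplest conjugate case: a Gaussian prior on a scalar property (say, log hydraulic conductivity) fused sequentially with Gaussian-noise measurements from different data types. All numbers are invented:

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Fuse one measurement into a Gaussian prior (precision-weighted average)."""
    w = prior_var / (prior_var + obs_var)          # weight given to the new datum
    mean = prior_mean + w * (obs - prior_mean)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# Prior knowledge, then two measurements of differing quality, fused in turn.
mean, var = -5.0, 4.0                              # prior on log10(K)
for obs, obs_var in [(-4.0, 1.0), (-4.4, 0.5)]:
    mean, var = gaussian_update(mean, var, obs, obs_var)
```

Each update shrinks the posterior variance, which is the quantified uncertainty used to identify data gaps and judge data worth.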
Parametric analysis of a complex flow: a mixing layer- wake interaction
NASA Astrophysics Data System (ADS)
Braud, Caroline; Heitz, Dominique
2004-11-01
The flow studied is a cross-interaction of two canonical flows, a wake and a mixing layer, first realized by Heitz (1999) for a particular configuration (Reynolds number 7,500, aspect ratio 18, shear parameter 0.42). He found that the velocity difference between the two sides of the mixing layer gives rise to a pressure gradient along the cylinder, responsible for a secondary flow from the low- to the high-velocity side in the near wake. As a consequence, the suction coefficient (C_p), the frequency distribution (f) and the vortex formation length (L_f) are locally greatly modified compared with a uniform flow. Nevertheless, the aspect ratio, according to Norberg (1994), and the shear parameter (0.02 in a uniformly sheared wake) are control parameters of great importance for the shedding organization in the near wake. In order to investigate the influence of these control parameters (Reynolds numbers, two aspect ratios and five shear parameters), hot-wire and pressure measurements have been carried out. We mainly found that even if the magnitude of the mean characteristic parameters is locally strongly perturbed along the cylinder, the resulting wake keeps its general behavior.
Parametric analysis of a novel cryogenic CO2 capture system based on Stirling coolers.
Song, Chun Feng; Kitamura, Yutaka; Li, Shu Hong; Jiang, Wei Zhong
2012-11-20
CO2 capture and storage (CCS) is an important option for controlling greenhouse gas (GHG) emissions. In previous work, a novel desublimation CO2 capture process was developed using three free-piston Stirling coolers (SC-1, SC-2, and SC-3, respectively). In the developed system, moisture and CO2 in the flue gas condense and desublimate in the pre-freezing and main-freezing towers, respectively. Meanwhile, the storage column is chilled by SC-3 to preserve the frosted CO2, and permanent gases (such as N2) pass through the system without phase change. The whole process can be implemented at atmospheric pressure, avoiding the energy penalties (e.g., solvent regeneration and pressure drop) of other technologies. In this work, the influence of the process parameters has been investigated in detail. The optimal conditions for the system are as follows: idle operating time of 240 min, flow rate of 5 L/min, vacuum degree of the interlayer of 2.2 × 10^3 Pa, and temperatures of SC-1, -2, and -3 of -30, -120, and -120 °C, respectively. Under these conditions, the energy consumption of the system is around 0.5 MJ (electrical)/kg CO2 with above 90% CO2 recovery.
Parametric Analysis of Life Support Systems for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly S.; Bagdigian, Bob M.
2011-01-01
The National Aeronautics and Space Administration is in the process of evaluating future targets for space exploration. In order to maintain the welfare of a crew during future missions, a suite of life support technologies is responsible for oxygen and water generation, carbon dioxide control, the removal of trace concentrations of organic contaminants, processing and recovery of water, and the storage and reclamation of solid waste. For each particular life support subsystem, a variety of competing technologies either exist or are under aggressive development efforts. Each individual technology has strengths and weaknesses with regard to launch mass, power and cooling requirements, volume of hardware and consumables, and crew time requirements for operation. However, from a system-level perspective, the favorability of each life support architecture is better assessed when the sub-system technologies are analyzed in aggregate. In order to evaluate each specific life support system architecture, the measure of equivalent system mass (ESM) was employed to benchmark system favorability. Moreover, the results discussed herein will be from the context of loop closure with respect to the air, water, and waste sub-systems. Specifically, closure relates to the amount of consumables mass that crosses the boundary of the vehicle over the lifetime of a mission. As will be demonstrated in this manuscript, the optimal level of loop closure is heavily dependent upon mission requirements such as duration and the level of extra-vehicular activity (EVA) performed. Sub-system level trades were also considered as a function of mission duration to assess when increased loop closure is practical. Although many additional factors will likely merit consideration in designing life support systems for future missions, the ESM results described herein provide a context for future architecture design decisions toward a flexible path program.
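ESM folds non-mass resources into a single mass figure through mission-specific equivalency factors, roughly ESM = M + V·Veq + P·Peq + C·Ceq + CT·D·CTeq. A sketch with placeholder factor values (not NASA's published equivalencies):

```python
def esm(mass_kg, volume_m3, power_kw, cooling_kw, crew_hr, years,
        v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.5):
    """Equivalent system mass: fold each resource into kg via equivalency factors."""
    return (mass_kg
            + volume_m3 * v_eq              # kg-equivalent per m^3 of volume
            + power_kw * p_eq               # kg-equivalent per kW of power
            + cooling_kw * c_eq             # kg-equivalent per kW of cooling
            + crew_hr * years * ct_eq)      # kg-equivalent per crew-hour

# One hypothetical water-subsystem architecture over a 2-year mission.
total = esm(mass_kg=500, volume_m3=2.0, power_kw=0.5,
            cooling_kw=0.5, crew_hr=10, years=2)
```

Comparing two architectures then amounts to evaluating `esm` for each and letting consumables mass grow with mission duration, which is how the loop-closure trades in the abstract arise.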
Rowlett, J K; Massey, B W; Kleven, M S; Woolverton, W L
1996-06-01
The present study was designed to investigate parameters and quantitative analysis of cocaine self-administration under a progressive-ratio (PR) schedule of reinforcement, with the goal of enhancing the resolution of PR schedules for measuring reinforcing efficacy. Six rhesus monkeys were prepared with chronic intravenous catheters and trained to self-administer cocaine under a PR schedule. The schedule consisted of five components, each made up of four trials (i.e., 20 trials total). Each trial within a component had the same response requirement. Three initial response requirements were tested: fixed-ratio (FR) 60, FR 120 and FR 240. The response requirements doubled in successive components to a maximum of FR 960, FR 1920 or FR 3840, respectively, in the fifth component. A trial ended with an injection or the expiration of a 12- or 24-min limited hold (LH). The inter-trial interval (ITI) was 15 or 30 min. Four dependent measures were assessed: break point (last FR completed), injections/session, responses/session and response rate (responses/s). For the three initial FRs, the break point, number of injections/session, responses/session and rate increased with dose of cocaine (0.013-0.1 mg/kg per injection) at both ITI/LH values. At the ITI15/LH12, responding decreased at higher doses, i.e., the dose-response functions were biphasic. In contrast, at the ITI30/LH24, responding reached an asymptote at higher doses. In general, cocaine maintained significantly higher break points, injections/session, responses/session and rate at ITI30/LH24 than at ITI15/LH12. However, at both ITI/LHs, as initial FR was increased, injections/session at the higher doses decreased while break point, total responses/session and rate did not change. A ceiling on performance, as assessed by break point, total responses/session and response rate, may have limited the number of cocaine injections an animal could take in a session. The results of this study indicate that optimal conditions
A multiscale approach to InSAR time series analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y. N.; Dicaprio, C.; Rickerby, A.
2008-12-01
We describe a new technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale analysis of InSAR Time Series), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. This approach also permits a consistent treatment of all data independent of the presence of localized holes in any given interferogram. In essence, MInTS allows one to consider all data at the same time (as opposed to one pixel at a time), thereby taking advantage of both spatial and temporal characteristics of the deformation field. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows the temporal parametrization to include a set of general functions (e.g., splines) in order to account for unexpected processes. We allow for various forms of model regularization using a cross-validation approach to select penalty parameters. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate our results by comparing with ground-based observations.
Kwon, Osung; Park, Kwang-Kyoon; Ra, Young-Sik; Kim, Yong-Su; Kim, Yoon-Ho
2013-10-21
Generation of time-bin entangled photon pairs requires the use of the Franson interferometer which consists of two spatially separated unbalanced Mach-Zehnder interferometers through which the signal and idler photons from spontaneous parametric down-conversion (SPDC) are made to transmit individually. There have been two SPDC pumping regimes where the scheme works: the narrowband regime and the double-pulse regime. In the narrowband regime, the SPDC process is pumped by a narrowband cw laser with the coherence length much longer than the path length difference of the Franson interferometer. In the double-pulse regime, the longitudinal separation between the pulse pair is made equal to the path length difference of the Franson interferometer. In this paper, we propose another regime by which the generation of time-bin entanglement is possible and demonstrate the scheme experimentally. In our scheme, differently from the previous approaches, the SPDC process is pumped by a cw multi-mode (i.e., short coherence length) laser and makes use of the coherence revival property of such a laser. The high-visibility two-photon Franson interference demonstrates clearly that high-quality time-bin entanglement source can be developed using inexpensive cw multi-mode diode lasers for various quantum communication applications.
NASA Astrophysics Data System (ADS)
Chen, Chin-Wei; Côté, Patrick; West, Andrew A.; Peng, Eric W.; Ferrarese, Laura
2010-11-01
We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~103 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sérsic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(BT )≈ 0.13 mag for the brightest galaxies, rising to ≈ 0.3 mag for galaxies at the faint end of our sample (BT ≈ 16). The distribution of axial ratios of low-mass ("dwarf") galaxies bears a strong resemblance to the one observed for the higher-mass ("giant") galaxies. The global structural parameters for the full galaxy sample—profile shape, effective radius, and mean surface brightness—are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.
NASA Astrophysics Data System (ADS)
Wu, H.; Pollyea, R.
2016-12-01
Geological Carbon Sequestration (GCS) is considered a key method for mitigating the adverse effects of steadily increasing atmospheric CO2 concentrations. Numerical simulation is one technique for better understanding the injection, migration and leakage of supercritical CO2 (scCO2) during GCS. At the field scale, capillary pressure (Pcap) is an important factor governing the subsurface movement of scCO2. Constitutive models of Pcap as a function of wetting phase saturation (Sw) are essential to field-scale GCS simulations; however, such Pcap models are based on core-scale laboratory measurements. As a result, there exists uncertainty in the application of laboratory-measured Pcap models to field-scale GCS simulations. In this study, a parametric analysis of the commonly used van Genuchten Pcap model is undertaken to quantify the effects of variability in the model parameter space. The study focuses on two parameters: the non-wetting phase entry pressure (P0) and the pore-size distribution index (λ), the latter of which controls the curvature of the Pcap model. A two-dimensional parameter space is selected that covers a wide range of laboratory-scale Pcap measurements in the scCO2-brine system, and scCO2 injection processes are modeled within a homogeneous sandstone reservoir over the complete parameter space. Simulation results demonstrate how changes in the Pcap model parameters influence scCO2 migration within the storage reservoir. Maximum injection pressure is largely insensitive to variability of the Pcap model parameters; however, vertical scCO2 migration is strongly controlled by Pcap model parameter selection. Since vertical scCO2 migration is the key factor in estimating scCO2 leakage risk through the caprock seal, these results illustrate the importance of Pcap model parameter selection in field-scale numerical models of GCS.
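The van Genuchten curve described above can be evaluated and swept directly. The sketch below uses a common TOUGH2-style form of the model; the functional form and all parameter values (P0, λ, residual saturation) are illustrative assumptions, not values taken from this study.

```python
def van_genuchten_pcap(sw, p0, lam, swr=0.2):
    """Capillary pressure [Pa] as a function of wetting-phase saturation.

    Uses a common TOUGH2-style van Genuchten form:
        Pcap = P0 * (Se**(-1/lam) - 1)**(1 - lam),
    where Se is the effective saturation. p0, lam and swr are
    illustrative placeholders, not fitted values.
    """
    se = (sw - swr) / (1.0 - swr)   # effective saturation
    if not 0.0 < se < 1.0:
        raise ValueError("saturation outside the residual/full range")
    return p0 * (se ** (-1.0 / lam) - 1.0) ** (1.0 - lam)

# Sweep a small (P0, lambda) grid, as in a parameter-space sensitivity study
for p0 in (1e3, 1e4):
    for lam in (0.4, 0.6, 0.8):
        pc = van_genuchten_pcap(0.7, p0, lam)
        print(f"P0={p0:8.0f} Pa  lambda={lam}  Pcap={pc:10.1f} Pa")
```

Even this coarse sweep shows how strongly λ shifts Pcap at a fixed saturation, which is the kind of sensitivity the study quantifies at field scale.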
Moran, John L; Solomon, Patricia J
2007-06-01
In Part I, we reviewed graphical display and data summary, followed by a consideration of linear regression models. Generalised linear models, structured in terms of an exponential response distribution and link function, are now introduced, subsuming logistic and Poisson regression. Time-to-event ("survival") analysis is developed from basic principles of hazard rate, and survival, cumulative distribution and density functions. Semi-parametric (Cox) and parametric (accelerated failure time) regression models are contrasted. Time-series analysis is explicated in terms of trend, seasonal, and other cyclical and irregular components, and further illustrated by development of a classical Box-Jenkins ARMA (autoregressive moving average) model for monthly ICU-patient hospital mortality rates recorded over 11 years. Multilevel (random-effects) models and principles of meta-analysis are outlined, and the review concludes with a brief consideration of important statistical aspects of clinical trials: sample size determination, interim analysis and "early stopping".
Waentig, Larissa; Techritz, Sandra; Jakubowski, Norbert; Roos, Peter H
2013-11-07
The paper presents a new multi-parametric protein microarray embracing the multi-analyte capabilities of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The combination of high throughput reverse phase protein microarrays with element tagged antibodies and LA-ICP-MS makes it possible to detect and quantify many proteins or biomarkers in multiple samples simultaneously. A proof of concept experiment is performed for the analysis of cytochromes particularly of cytochrome P450 enzymes, which play an important role in the metabolism of xenobiotics such as toxicants and drugs. With the aid of the LA-ICP-MS based multi-parametric reverse phase protein microarray it was possible to analyse 8 cytochromes in 14 different proteomes in one run. The methodology shows excellent detection limits in the lower amol range and a very good linearity of R(2) ≥ 0.9996 which is a prerequisite for the development of further quantification strategies.
Real-time analysis keratometer
NASA Technical Reports Server (NTRS)
Adachi, Iwao P. (Inventor); Adachi, Yoshifumi (Inventor); Frazer, Robert E. (Inventor)
1987-01-01
A computer assisted keratometer in which a fiducial line pattern reticle illuminated by CW or pulsed laser light is projected on a corneal surface through lenses, a prismoidal beamsplitter quarterwave plate, and objective optics. The reticle surface is curved as a conjugate of an ideal corneal curvature. The fiducial image reflected from the cornea undergoes a polarization shift through the quarterwave plate and beamsplitter whereby the projected and reflected beams are separated and directed orthogonally. The reflected beam fiducial pattern forms a moire pattern with a replica of the first reticle. This moire pattern contains transverse aberration due to differences in curvature between the cornea and the ideal corneal curvature. The moire pattern is analyzed in real time by a computer, which displays either the CW moire pattern or a pulsed mode analysis of the transverse aberration of the cornea under observation, in real time. With the eye focused on a plurality of fixation points in succession, a survey of the entire corneal topography is made and a contour map or three dimensional plot of the cornea can be made as a computer readout in addition to corneal radius and refractive power analysis.
Timing analysis by model checking
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
NASA Astrophysics Data System (ADS)
Yuan, S.; Fuji, N.; Singh, S. C.; Borisov, D.
2016-12-01
We present a novel methodology to invert seismic data locally through a combination of wavefield injection and extrapolation methods. Seismic full waveform inversion has demonstrated promising resolving power in the seismology community over recent decades. However, the computational cost of practical-scale 3D elastic or viscoelastic waveform inversion remains challenging. The cost is even more severe for time-lapse surveys, which require real-time model estimation on a daily or weekly basis. Moreover, structural changes during time-lapse surveys are likely to occur within a small area, such as an oil and gas reservoir or a CO2 injection well. We propose methods that effectively and quantitatively image localized structural changes relatively far from the source and receiver arrays. We thus have to perform both forward modeling and waveform inversion inside a region that contains neither sources nor receivers. First, we look for an equivalent source expression inside the region of interest using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers to an array of virtual receivers using a correlation-type representation theorem. In this paper, we present elastic 2D numerical examples of our methods and quantitatively evaluate the errors of the obtained models in comparison with those from full-model inversions. The results show that the proposed localized waveform inversion is more efficient, accurate and robust, even in the presence of errors in both the initial models and the data.
Advantages and drawbacks of applying periodic time-variant modal analysis to spur gear dynamics
NASA Astrophysics Data System (ADS)
Pedersen, Rune; Santos, Ilmar F.; Hede, Ivan A.
2010-07-01
A simplified torsional model with a reduced number of degrees of freedom is used in order to investigate the potential of the technique. A time-dependent gear mesh stiffness function is introduced and expanded in a Fourier series. The necessary number of Fourier terms is determined in order to ensure sufficient accuracy of the results. The method of time-variant modal analysis is applied, and the changes in the fundamental and the parametric resonance frequencies as a function of the rotational speed of the gears are found. By obtaining the stationary and parametric parts of the time-dependent mode shapes, the importance of the time-varying component relative to the stationary component is investigated and quantified. The method used for calculation and subsequent sorting of the left and right eigenvectors, based on a first-order Taylor expansion, is explained. The advantages and drawbacks of applying the methodology to wind turbine gearboxes are addressed and elucidated.
Kong, Xiangrong; Mas, Valeria; Archer, Kellie J
2008-02-26
With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normally functioning allografts. The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. In the application to the two CAN studies, we identified 309 distinct genes that were differentially expressed in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among the genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist-diagnosed class labels. We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been reported to be relevant to renal diseases. Further study on the
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant
2011-03-01
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on 12 pre-operative in vivo 3-Tesla multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space, which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for Ca
Parametric Time-Dependent Navier-Stokes Computations for a YAV-8B Harrier in Ground Effect
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Pandya, Shishir; Ahmad, Jasim; Murman, Scott; Kwak, Dochan (Technical Monitor)
2002-01-01
The Harrier Jump Jet has the distinction of being the only powered-lift aircraft in the free world to achieve operational status and to have flown in combat. This V/STOL aircraft can take-off and land vertically or utilize very short runways by directing its four exhaust nozzles towards the ground. Transition to forward flight is achieved by rotating these nozzles into a horizontal position. Powered-lift vehicles have certain advantages over conventional strike fighters. Their V/STOL capabilities allow for safer carrier operations, smaller carrier size, and quick reaction time for troop support. Moreover, they are not dependent on vulnerable land-based runways. The AV-8A Harrier first entered service in the British Royal Air Force (RAF) during 1969, and the U.S. Marine Corps (USMC) in 1971. The AV-8B was a redesign to achieve improved payload capacity, range, and accuracy. This modified design first entered service with the USMC and RAF in 1985. The success and unique capabilities of the Harrier have prompted the design of a powered-lift version of the Joint Strike Fighter (JSF). The flowfield for the Harrier near the ground during low-speed or hover flight operations is very complex and time-dependent. A sketch of this flowfield is shown. Warm air from the fan is exhausted from the front nozzles, while a hot air/fuel mixture from the engine is exhausted from the rear nozzles. These jets strike the ground and move out radially forming a ground jet-flow. The ambient freestream, due to low-speed forward flight or a headwind during hover, opposes the jet-flow. This interaction causes the flow to separate and form a ground vortex. The multiple jets also interact with each other near the ground and form an upwash or jet fountain, which strikes the underside of the fuselage. If the aircraft is sufficiently close to the ground, the inlet can ingest ground debris and hot gases from the fountain and ground vortex. This Hot Gas Ingestion (HGI) can cause a sudden loss of
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Anderson, W. Kyle
1988-01-01
An upwind-biased implicit approximate factorization algorithm is applied to several steady and unsteady turbulent flows. The thin layer form of the compressible Navier-Stokes equation is used. Both the flux vector splitting and flux difference splitting methods are used to determine fluxes, and the results are compared. Flux difference splitting predicts results more accurately than flux vector splitting on a given mesh size, but, in its present implementation, is more severely limited by the maximum CFL number for unsteady time accurate flows. Physical aspects of the computations are also examined. An equilibrium turbulent boundary layer model computes generally better steady and unsteady results than a nonequilibrium model when there is little to no boundary layer separation. Conversely, when a significant region of separation exists, the nonequilibrium model performs in better agreement with experiment.
BETHSY ISP-38 parametric analysis and influences of the time step
Mavko, B.; Petelin, S.; Jurkovic, M.
1997-12-01
The BETHSY test 6.9c (also known as ISP OECD-38) includes loss of the residual heat removal system during midloop operation. Two manways, one on the steam generator primary side and one on top of the pressurizer, are opened 1 s after transient initiation. The initial conditions are atmospheric pressure and temperatures of ~100 °C in the primary system. The secondary system is full of air and isolated. Due to the lack of a heat sink, the core starts to heat up, and when the temperature reaches 250 °C at the top of the core, safety injection as gravity feed delivers water into one of the intact cold legs.
Comparison of nonparametric trend analysis according to the types of time series data
NASA Astrophysics Data System (ADS)
Heo, J.; Shin, H.; Kim, T.; Jang, H.; Kim, H.
2013-12-01
In the analysis of hydrological data, determining the existence of an overall trend due to climate change has been a major concern and an important part of the design and management of water resources for the future. The existence of a trend can be identified by plotting a hydrologic time series; however, statistical methods are more accurate and objective tools for performing trend analysis. Statistical methods are divided into parametric and nonparametric methods. In the case of a parametric method, the population must be assumed to be normally distributed. However, most hydrological data tend to follow non-normal distributions, so nonparametric methods are considered more suitable than parametric ones. In this study, simulations were performed with different types of time series data, and four nonparametric methods generally used in trend analysis (the Mann-Kendall test, Spearman's rho test, SEN test, and Hotelling-Pabst test) were applied to assess the power of each trend analysis. The time series data were classified into three types: Trend+Random, Trend+Cycle+Random, and Trend+Non-random. In order to add a change to the data, 11 different slopes were overlapped in each simulation. As a result, the nonparametric methods have almost similar power for the Trend+Random and Trend+Non-random series. On the other hand, the Mann-Kendall and SEN tests have slightly higher power than the Spearman's rho and Hotelling-Pabst tests for the Trend+Cycle+Random series.
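The Mann-Kendall test named above can be sketched in a few lines. This minimal version omits the tie correction, and the "Trend + Random"-style series it is run on is a synthetic stand-in (linear trend plus deterministic jitter), not data from the study.

```python
from math import sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (minimal sketch, no tie correction).

    Returns the S statistic and the approximate normal test statistic Z.
    A positive Z indicates an upward trend; |Z| > 1.96 is significant
    at the 5% level under the no-trend null hypothesis.
    """
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S, no ties
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Synthetic trend-plus-jitter series: slope 0.5 with alternating +/-0.3
series = [0.5 * t + (0.3 if t % 2 else -0.3) for t in range(30)]
s, z = mann_kendall(series)
print(s, round(z, 2))   # large positive S and Z: clear upward trend
```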
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.
1993-01-01
Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have a poor communication-to-computation ratio. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + lambda × shape. In a 2 (or 3) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively, surface area) of that subregion. Shape is a measure of communication overhead, and the parameter lambda permits us to trade off load imbalance against communication overhead. When lambda is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-d. Load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and lambda the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log^3 p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
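The cut-selection rule above (pick the cut minimizing the worse half's load + lambda × shape) can be illustrated on a tiny 2-D work grid. This is a toy sketch, not the paper's algorithm: load is the cell-weight sum, perimeter stands in for the shape/communication term, and the grid, lambda and depth are invented for illustration.

```python
def pbd(grid, lam, depth):
    """Parametric binary dissection of a 2-D work grid (toy sketch).

    grid  : list of rows of per-cell work weights
    lam   : trade-off between load and shape (perimeter here)
    depth : number of recursive bisection levels
    Returns a list of (r0, r1, c0, c1) half-open subregion bounds.
    With lam == 0 this reduces to plain recursive bisection by load.
    """
    def load(r0, r1, c0, c1):
        return sum(grid[r][c] for r in range(r0, r1) for c in range(c0, c1))

    def cost(r0, r1, c0, c1):
        perimeter = 2 * ((r1 - r0) + (c1 - c0))    # shape proxy
        return load(r0, r1, c0, c1) + lam * perimeter

    def split(r0, r1, c0, c1, d):
        if d == 0:
            return [(r0, r1, c0, c1)]
        best = None
        # try every horizontal and vertical cut; keep the cut whose
        # worse half has the smallest load + lam * shape cost
        for r in range(r0 + 1, r1):
            worse = max(cost(r0, r, c0, c1), cost(r, r1, c0, c1))
            if best is None or worse < best[0]:
                best = (worse, (r0, r, c0, c1), (r, r1, c0, c1))
        for c in range(c0 + 1, c1):
            worse = max(cost(r0, r1, c0, c), cost(r0, r1, c, c1))
            if best is None or worse < best[0]:
                best = (worse, (r0, r1, c0, c), (r0, r1, c, c1))
        if best is None:                # region too small to cut
            return [(r0, r1, c0, c1)]
        return split(*best[1], d - 1) + split(*best[2], d - 1)

    return split(0, len(grid), 0, len(grid[0]), depth)

# 4x4 grid with a heavy corner; depth-2 dissection yields 4 subregions
grid = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 4, 4],
        [1, 1, 4, 4]]
regions = pbd(grid, lam=0.5, depth=2)
print(regions)
```

Raising lam biases the cuts toward compact (low-perimeter) subregions at the expense of load balance, which is exactly the trade-off the abstract describes.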
Parametric estimation of ultrasonic phase velocity and attenuation in dispersive media.
Martinsson, Jesper; Carlson, Johan E
2006-12-22
In ultrasonic characterization of liquids, gases, and solids, accurate estimation of frequency-dependent attenuation and phase velocity is of great importance. Non-parametric methods, such as Fourier analysis, suffer from noise sensitivity, and the variance of the estimated quantities is limited by the signal-to-noise ratio. In this paper we present a parametric method for estimating these properties. Pulse-echo experiments in ethane, oxygen and mixtures of the two show that the proposed method can estimate phase velocity and attenuation with up to 50 times lower variance than standard non-parametric methods.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
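The qualitative CER findings above can be encoded as a simple power-law relationship. The sketch below is purely illustrative: the coefficient c0, exponent b and reference year are made-up placeholders, and only the 50%-per-17-years learning curve is taken from the abstract.

```python
def telescope_cost(aperture_m, year, c0=1.0, b=1.6, ref_year=2000):
    """Illustrative single-variable CER of the form
        cost = c0 * D**b * 0.5**((year - ref_year) / 17.0)
    where D is aperture diameter. The 0.5**(dt/17) factor encodes the
    abstract's ~50%-per-17-years technology learning curve; c0, b and
    ref_year are invented placeholders, not fitted values.
    """
    return c0 * aperture_m ** b * 0.5 ** ((year - ref_year) / 17.0)

# Doubling the aperture raises cost by 2**b; since b < 2, the cost per
# square meter of collecting area falls with telescope size.
small = telescope_cost(1.0, 2017)
large = telescope_cost(2.0, 2017)
print(large / small)                  # aperture scaling factor, 2**b
print(telescope_cost(1.0, 2017) / telescope_cost(1.0, 2000))  # 17-yr halving
```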
Zare, Najaf; Nouri, Bijan; Moradi, Fariba; Parvareh, Maryam
2017-01-01
Background: Time to first pregnancy (TTFP) has never been studied in an Iranian setting. Lifestyle, occupational and environmental factors have been suggested to affect the female reproduction. Objective: This study was conducted to measure TTFP in the south of Iran and survey the effects of several similar factors on TTFP by frailty models. Materials and Methods: The data on TTFP were available for 882 women who were randomly selected from the rural population (the south of Iran). Only the first and the planned pregnancies of every woman were included. The data were collected retrospectively by using self-administered questionnaires. Frailty and shared frailty models were used to determine which factors had the highest impact on TTFP. Results: The median TTFP was 6.4 months and several factors were surveyed. However, only the age of marriage, height, maternal education and regularity of menstruation prior to conception were selected in the multivariable models. Conclusion: Among the several factors which were included in the study, the result of frailty model showed that the height, age of marriage and regular menstruation seemed more notable predictors of TTFP. PMID:28280795
Bayesian accelerated failure time analysis with application to veterinary epidemiology.
Bedrick, E J; Christensen, R; Johnson, W O
2000-01-30
Standard methods for analysing survival data with covariates rely on asymptotic inferences. Bayesian methods can be performed using simple computations and are applicable for any sample size. We propose a practical method for making prior specifications and discuss a complete Bayesian analysis for parametric accelerated failure time regression models. We emphasize inferences for the survival curve rather than regression coefficients. A key feature of the Bayesian framework is that model comparisons for various choices of baseline distribution are easily handled by the calculation of Bayes factors. Such comparisons between non-nested models are difficult in the frequentist setting. We illustrate diagnostic tools and examine the sensitivity of the Bayesian methods. Copyright 2000 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ma, Y. B.; Liu, J.; Ma, Y. Q.; Zhao, C.; Ju, L.; Blair, D. G.; Zhu, Z. H.
2017-07-01
Three-mode parametric instability is a threat to attaining design power levels in advanced gravitational wave detectors. The first observation of three-mode parametric instability in a long optical cavity revealed that instabilities could be suppressed by time variation of the mirror radius of curvature. In this paper, we present three dimensional finite element analysis of this thermo-acousto-optics system to determine whether thermal modulation could provide sufficient instability suppression without degrading time averaged optical performance. It is shown that deformations due to the time averaged heating profile on the mirror surface can be compensated by rear surface heating of the test mass. Results show that a heating source with a modulation amplitude of 1 W at 0.01 Hz is sufficient to stabilize an acoustic mode with parametric gain up to 3. The parametric gain suppression factor is linearly proportional to the peak modulation power.
NASA Astrophysics Data System (ADS)
Tieli, Zhang
It is recognized that detecting aero materials with terahertz (THz) waves is a potentially effective method. The possibility of THz wave generation in quasi-phase-matching backward-wave parametric oscillation is discussed in this paper. The quasi-phase-matching (QPM) technique is an important component of nonlinear optical frequency conversion, such as second harmonic generation (SHG), optical parametric oscillation (OPO) and optical parametric generation (OPG). The phase mismatch in QPM OPO can be compensated by the grating vector of periodically poled crystals. In conventional QPM OPO, the vectors of the grating and the interacting waves lie along the same direction to generate near-infrared and mid-infrared light. In this paper, the character of backward-wave parametric oscillation is analyzed systematically for the application of THz wave generation. Single and double backward-wave parametric oscillators are proposed and the practicability of THz wave generation is discussed. The threshold and linewidth characteristics under typical conditions are theoretically analyzed. It can be concluded that single backward-wave parametric oscillators in the THz band can be realized with current technology. Idler wavelength tuning from 100 μm to 1000 μm can be achieved by tuning the period of the PPLN from 12.6 μm to 131.4 μm at 140 °C. Other QPM conditions are hard to achieve with present techniques, as the required periods of periodically poled crystals are at the sub-micron level. The calculation shows that the linewidth of a single backward-wave OPO is lower than that of a forward-wave OPO by one or two orders of magnitude. In particular, the linewidth of a single backward-wave OPO does not increase rapidly near the degenerate point. The threshold pump density of the single backward-wave OPO is about 10^9 W/cm^2 when the length of the PPLN is 50 mm and the pump wavelength is 1.064 μm.
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been extended to a multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew considerable attention from the FFA research community, the basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, a nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining a best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimates of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Siow, Y. K.; Marek, C. J.
2005-01-01
Today's modern aircraft are based on air-breathing jet propulsion systems, which use moving fluids as substances to transform the energy carried by the fluids into power. Throughout aero-vehicle evolution, improvements have been made in engine efficiency and pollutant reduction. This study focuses on a parametric (on-design) cycle analysis of a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). The ITB considered in this paper is a relatively new concept in modern jet engine propulsion: it serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., in the transition duct. The major advantages associated with the addition of an ITB are an increase in thermal efficiency and a reduction in NOx emission; a lower temperature peak in the main combustor results in lower thermal NOx emission and a lower amount of cooling air required. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, and high-pressure turbine inlet temperature, to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB. A Visual Basic program, Microsoft Excel macro code, and Microsoft Excel neuron code are used within Microsoft Excel to plot engine performance versus engine design parameters. This program computes and plots the data sequentially without forcing users to open other plotting programs. A user's manual on how to use the program is also included in this report. Furthermore, this stand-alone program is written in conjunction with an off-design program which is an extension of this study. The computed result of a selected design
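To illustrate the flavor of a parametric (on-design) cycle study, the sketch below sweeps one design parameter, the compressor pressure ratio, and computes ideal Brayton-cycle thermal efficiency. This is a deliberately simplified stand-in, not the paper's full turbofan/ITB model; the function name and parameter values are assumptions for illustration.

```python
# Illustrative parametric (on-design) sweep: ideal Brayton-cycle thermal
# efficiency as a function of compressor pressure ratio. A toy stand-in
# for the full turbofan/ITB cycle analysis described in the abstract.
GAMMA = 1.4  # ratio of specific heats for air

def brayton_thermal_efficiency(pressure_ratio, gamma=GAMMA):
    """Ideal Brayton cycle: eta = 1 - pi_c**(-(gamma-1)/gamma)."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Sweep a design parameter, as a parametric cycle study would,
# then tabulate the resulting performance parameter.
ratios = [5, 10, 20, 30, 40]
efficiencies = [brayton_thermal_efficiency(r) for r in ratios]
```

A full on-design analysis repeats this pattern over all the design parameters listed in the abstract (Mach number, fan pressure ratio, bypass ratio, turbine inlet temperature) and outputs specific thrust and TSFC rather than a single efficiency.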
Mossahebi, Sina; Zhu, Simeng; Chen, Howard; Shmuylovich, Leonid; Ghosh, Erina; Kovács, Sándor J
2014-09-01
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provide a new window for quantitative diastolic function assessment. Echocardiography is the agreed upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns are extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored
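The PDF formalism models filling as a damped oscillator. As a minimal sketch, assume (per unit mass) x'' + c x' + k x = 0 with x(0) = xo and x'(0) = 0, and take the model E-wave to be the transmitral speed |x'(t)|; in the underdamped regime the standard closed form for the speed is (k·xo/ω)·exp(-c·t/2)·|sin(ω·t)| with ω = sqrt(k - c²/4). The parameter names (k, c, xo) follow the abstract, but the numerical values below are illustrative, not patient data.

```python
import math

# Hedged sketch of the PDF (Parametrized Diastolic Filling) kinematic model:
# recoil-initiated suction as a damped oscillator released from xo at rest.
def e_wave_speed(t, k, c, xo):
    """Model E-wave speed |x'(t)| in the underdamped regime (k > c**2/4)."""
    omega = math.sqrt(k - c * c / 4.0)  # damped natural frequency
    return abs(k * xo / omega * math.exp(-c * t / 2.0) * math.sin(omega * t))

# Illustrative parameter set: stiffness k, viscoelasticity c, load xo.
k, c, xo = 225.0, 6.0, 9.0
# Locate the E-wave peak on a fine time grid (seconds).
peak_t = max((i * 1e-3 for i in range(400)), key=lambda t: e_wave_speed(t, k, c, xo))
```

In practice the three parameters are obtained by fitting this waveform to a recorded Doppler E-wave contour rather than being chosen by hand.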
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, N.; Dicaprio, C. J.
2010-12-01
We present a technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale InSAR Time Series analysis), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. As opposed to single-pixel InSAR time series techniques, MInTS takes advantage of both spatial and temporal characteristics of the deformation field. We use a weighting scheme which accounts for the presence of localized holes due to decorrelation or unwrapping errors in any given interferogram. We represent time-dependent deformation using a dictionary of general basis functions, capable of detecting both steady and transient processes. The estimation is regularized using model-resolution-based smoothing so as to capture rapid deformation where there are temporally dense radar acquisitions and to avoid oscillations during time periods devoid of acquisitions. MInTS also has the flexibility to explicitly parametrize known time-dependent processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). We use cross-validation to choose the regularization penalty parameter in the inversion for the time-dependent deformation field. We demonstrate MInTS using a set of 63 ERS-1/2 and 29 Envisat interferograms for Long Valley Caldera.
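The temporal part of this approach, representing a deformation time series with a dictionary of basis functions, can be sketched as an ordinary least-squares fit. The dictionary below (secular rate, annual oscillation, a co-seismic step) and the synthetic data are illustrative assumptions, not the authors' actual parametrization or data.

```python
import numpy as np

# Minimal sketch: fit a deformation time series with a small dictionary of
# basis functions, as in the temporal representation described above.
t = np.linspace(0.0, 5.0, 120)                    # years of acquisitions
step = (t > 2.5).astype(float)                    # hypothetical co-seismic step
G = np.column_stack([t,                           # secular rate
                     np.sin(2 * np.pi * t),       # annual oscillation (sine)
                     np.cos(2 * np.pi * t),       # annual oscillation (cosine)
                     step])                       # step function

true_m = np.array([3.0, 1.0, -0.5, 8.0])          # mm/yr, mm, mm, mm (made up)
d = G @ true_m + 0.1 * np.random.default_rng(0).standard_normal(t.size)

m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)     # recovered coefficients
```

The full method additionally weights the data for holes, regularizes the inversion, and chooses the penalty by cross-validation; none of that is shown here.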
Parametrically defined differential equations
NASA Astrophysics Data System (ADS)
Polyanin, A. D.; Zhurov, A. I.
2017-01-01
The paper deals with nonlinear ordinary differential equations defined parametrically by two relations. It proposes techniques to reduce such equations, of the first or second order, to standard systems of ordinary differential equations. It obtains the general solution to some classes of nonlinear parametrically defined ODEs dependent on arbitrary functions. It outlines procedures for the numerical solution of the Cauchy problem for parametrically defined differential equations.
Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham
2016-01-01
A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model in Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge, and labelled, purified RNAs were hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an ‘early’ FOS-associated response to a ‘late’ predominantly type I interferon-driven response, with a coincident reduction in expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype, e.g. CD163. The animals in the study were of different lineages, and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons
NASA Astrophysics Data System (ADS)
Acomi, Nicoleta; Ancuţa, Cristian; Andrei, Cristian; Boştinǎ, Alina; Boştinǎ, Aurel
2016-12-01
Ships are built mainly to sail and transport cargo at sea. Environmental conditions and the state of the sea are communicated to vessels through periodic weather forecasts. Although officers are aware of the sea state, their sea-time experience is a decisive factor when the vessel encounters severe environmental conditions. Another important factor is the loading condition of the vessel, which triggers different behaviour in similar marine environmental conditions. This paper aims to analyse the behaviour of a container vessel in severe environmental conditions and to estimate the potential conditions for parametric roll resonance. Octopus software is employed to simulate vessel motions under given sea conditions, making it possible to analyse the behaviour of ships and the impact of high waves due to specific wave-encounter situations. The study should be regarded as a supporting tool during the decision-making process.
Ebrahimzadeh, M
2003-12-15
Since its invention more than 40 years ago, the laser has become an indispensable optical tool, capable of transforming light from its naturally incoherent state to a highly coherent state in space and time. Yet, due to fundamental limitations, operation of the laser remains confined to restricted spectral and temporal regions. Nonlinear optics can overcome this limitation by allowing access to new spectral and temporal regimes through the exploitation of suitable dielectric materials in combination with the laser. In particular, optical parametric oscillators are versatile coherent light sources with unique flexibility that can provide optical radiation across an entire spectral range from the ultraviolet to the far-infrared and over all temporal scales from continuous wave to the ultrafast femtosecond domain.
Time averaging, ageing and delay analysis of financial time series
NASA Astrophysics Data System (ADS)
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
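The central observable here, the time averaged MSD, can be sketched directly: for a log-price series x(t), the TAMSD at lag Δ is the average of the squared increments (x(t+Δ) − x(t))². The simulation below uses a seeded geometric Brownian motion with made-up drift and volatility, not the historical Dow Jones data analysed in the paper.

```python
import numpy as np

# Sketch: time averaged MSD (TAMSD) of a simulated geometric Brownian motion
# log-price series, the process underlying the Black-Scholes-Merton model.
rng = np.random.default_rng(1)
n, dt, mu, sigma = 5000, 1.0, 0.0, 0.02        # illustrative parameters
log_price = np.cumsum((mu - sigma ** 2 / 2) * dt
                      + sigma * np.sqrt(dt) * rng.standard_normal(n))

def tamsd(x, lag):
    """Time averaged MSD at a given lag: mean of squared increments."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

lags = [1, 2, 4, 8, 16]
msd = [tamsd(log_price, L) for L in lags]       # grows ~ sigma**2 * lag
```

For Brownian dynamics the TAMSD grows linearly in the lag; the ageing and delay-time variants restrict or shift the averaging window over the series.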
Peterson, James T.
1999-12-01
Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical response data. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes 4 analytical techniques: generalized logit modeling; binary classification tree; extended K-nearest neighbor classification; and modular neural network.
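One of the listed techniques, K-nearest neighbor classification of a categorical response, can be sketched in a few lines. This is the plain majority-vote version (CATDAT's "extended" KNN has additional machinery), and the habitat-class data below are made up for illustration.

```python
import numpy as np

# Minimal K-nearest neighbor classifier for a categorical response.
def knn_predict(X_train, y_train, x_new, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    dist = np.linalg.norm(X_train - x_new, axis=1)
    nearest = y_train[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Two well-separated classes of (illustrative) environmental measurements.
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [2.0, 2.1], [2.2, 2.0], [2.1, 1.9]])
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, np.array([2.0, 2.0]), k=3)
```

A generalized logit model or classification tree would be fit to the same (X, y) layout, which is what makes packaging the four techniques together convenient.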
1980-08-01
Three buildings were simulated for five climatological regions centered at Washington, DC; Charleston, SC; Los Angeles, CA; Columbia, MO; and Fort... [...] ...consumption in various climatic regions is not known, making it difficult to do a cost or energy analysis of a facility. Although the energy consumed by a
Stability analysis of a time-periodic 2-dof MEMS structure
NASA Astrophysics Data System (ADS)
Kniffka, Till Jochen; Welte, Johannes; Ecker, Horst
2012-11-01
Microelectromechanical systems (MEMS) are becoming important for all kinds of industrial applications. Among them are filters in communication devices, due to the growing demand for efficient and accurate filtering of signals. In recent developments, single-degree-of-freedom (1-dof) oscillators operated at a parametric resonance are employed for such tasks. Vibration damping is typically low in such MEM systems. While parametric excitation (PE) has so far been used to take advantage of a parametric resonance, this contribution suggests also exploiting parametric anti-resonances in order to improve the damping behavior of such systems. Modeling aspects of a 2-dof MEM system and first results of the analysis of the non-linear and the linearized system are the focus of this paper. In principle, the investigated system is an oscillating mechanical system with two degrees of freedom x = [x1 x2]^T that can be described by M x'' + C x' + K1 x + K3(x^2) x + Fes(x, V(t)) = 0. The system is inherently non-linear because of the cubic mechanical stiffness K3 of the structure, but also because of electrostatic forces (1 + cos(ωt)) Fes(x) that act on the system. Electrostatic forces are generated by comb drives and are proportional to the applied time-periodic voltage V(t). These drives also provide the means to introduce time-periodic coefficients, i.e. parametric excitation (1 + cos(ωt)) with frequency ω. For a realistic MEM system, the coefficients of the non-linear set of differential equations need to be scaled for efficient numerical treatment. The final mathematical model is a set of four non-linear time-periodic homogeneous differential equations of first order. Numerical results are obtained from two different methods. The linearized time-periodic (LTP) system is studied by calculating the monodromy matrix of the system; the eigenvalues of this matrix determine the stability of the LTP system. To study the unabridged non-linear system, the bifurcation software ManLab is employed
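The monodromy-matrix stability test for an LTP system can be sketched on a simple stand-in: a damped Mathieu equation x'' + c x' + (δ + ε cos ωt) x = 0 in place of the paper's linearized MEMS model. The fundamental matrix is integrated over one excitation period; the system is stable if all Floquet multipliers (eigenvalues of the monodromy matrix) lie inside the unit circle. All coefficient values are illustrative, not taken from the device in the paper.

```python
import numpy as np

# Floquet stability of x'' + c x' + (delta + eps*cos(w t)) x = 0
# via the monodromy matrix, integrated with classical RK4.
def monodromy(c, delta, eps, w, steps=2000):
    """Fundamental matrix of the LTP system over one period T = 2*pi/w."""
    T = 2.0 * np.pi / w
    h = T / steps

    def f(t, Y):  # Y is the 2x2 fundamental matrix, Y' = A(t) Y
        A = np.array([[0.0, 1.0],
                      [-(delta + eps * np.cos(w * t)), -c]])
        return A @ Y

    Y = np.eye(2)
    for i in range(steps):
        t = i * h
        k1 = f(t, Y)
        k2 = f(t + h / 2, Y + h / 2 * k1)
        k3 = f(t + h / 2, Y + h / 2 * k2)
        k4 = f(t + h, Y + h * k3)
        Y = Y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

# Away from the parametric resonances and with damping: expect stability.
mults = np.linalg.eigvals(monodromy(c=0.1, delta=1.0, eps=0.2, w=3.0))
stable = bool(np.all(np.abs(mults) < 1.0))
```

Driving the same system at the principal parametric resonance (ω ≈ 2√δ) without damping pushes a multiplier outside the unit circle, which is exactly the instability a parametric resonance exploits.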
Parametric Resonance Revisited
NASA Astrophysics Data System (ADS)
van den Broeck, C.; Bena, I.
The phenomenon of parametric resonance is revisited. Several physical examples are reviewed and an exactly solvable model is discussed. A mean field theory is presented for globally coupled parametric oscillators with randomly distributed phases. A new type of collective instability appears, which is similar in nature to that of noise induced phase transitions.
1980-06-01
[Garbled OCR of a FORTRAN program listing; the recoverable fragments refer to reading a number of delta points and printing statistics on parametrized rows and columns for each delta value.]
Predicting analysis time in event-driven clinical trials with event-reporting lag.
Wang, Jianming; Ke, Chunlei; Jiang, Qi; Zhang, Charlie; Snapinn, Steven
2012-04-30
For a clinical trial with a time-to-event primary endpoint, the rate of accrual of the event of interest determines the timing of the analysis, upon which significant resources and strategic planning depend. It is important to be able to predict the analysis time early and accurately. Currently available methods use either parametric or nonparametric models to predict the analysis time based on accumulating information about enrollment, event, and study withdrawal rates and implicitly assume that the available data are completely reported at the time of performing the prediction. This assumption, however, may not be true when it takes a certain amount of time (i.e., event-reporting lag) for an event to be reported, in which case, the data are incomplete for prediction. Ignoring the event-reporting lag could substantially impact the accuracy of the prediction. In this paper, we describe a general parametric model to incorporate event-reporting lag into analysis time prediction. We develop a prediction procedure using a Bayesian method and provide detailed implementations for exponential distributions. Some simulations were performed to evaluate the performance of the proposed method. An application to an on-going clinical trial is also described. Copyright © 2012 John Wiley & Sons, Ltd.
Time Analysis for Probabilistic Workflows
Czejdo, Bogdan; Ferragut, Erik M
2012-01-01
There are many theoretical and practical results in the area of workflow modeling, especially when the more formal workflows are used. In this paper we focus on probabilistic workflows. We show algorithms for time computations in probabilistic workflows. With time of activities more precisely modeled, we can achieve improvement in the work cooperation and analyses of cooperation including simulation and visualization.
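One such time computation can be sketched as a recursion over an acyclic probabilistic workflow: each activity has a duration, and control flow branches to successors with given probabilities, so the expected remaining time is the activity's duration plus the probability-weighted expected times of its successors. The workflow below is a made-up example, not one from the paper.

```python
# Expected completion time of a small probabilistic workflow (acyclic).
# Each activity: a duration plus probabilistic successors.
workflow = {
    "review":  {"duration": 2.0, "next": [("fix", 0.3), ("approve", 0.7)]},
    "fix":     {"duration": 4.0, "next": [("approve", 1.0)]},
    "approve": {"duration": 1.0, "next": []},
}

def expected_time(activity):
    """Expected remaining time from an activity onward."""
    node = workflow[activity]
    return node["duration"] + sum(p * expected_time(nxt)
                                  for nxt, p in node["next"])

total = expected_time("review")  # 2 + 0.3*(4 + 1) + 0.7*1 = 4.2
```

Cyclic workflows require solving a linear system instead of a plain recursion, but the probability-weighted structure is the same.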
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y.; Dicaprio, C. J.
2009-12-01
We describe progress in the development of MInTS (Multiscale analysis of InSAR Time Series), an approach to constructing self-consistent time-dependent deformation observations from repeated satellite-based InSAR images of a given region. MInTS relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. In essence, MInTS allows one to consider all data at the same time as opposed to one pixel at a time, thereby taking advantage of both spatial and temporal characteristics of the deformation field. This approach also permits a consistent treatment of all data independent of the presence of localized holes due to unwrapping issues in any given interferogram. Specifically, the presence of holes is accounted for through a weighting scheme that accounts for the extent of actual data versus the area of holes associated with any given wavelet. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows for the temporal parametrization to include a set of general functions in order to account for unexpected processes. We allow for various forms of model regularization using a cross-validation approach to select penalty parameters. We also experiment with the use of sparsity-inducing regularization as a way to select from a large dictionary of time functions. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate
Real Time Data Analysis Techniques
NASA Astrophysics Data System (ADS)
Silberberg, George G.
1983-03-01
By the early 1970s, classical photo-optical range instrumentation technology (as a means of gathering weapons' system performance data) had become a costly and inefficient process. Film costs were increasing due to soaring silver prices. Time required to process, read, and produce optical data was becoming unacceptable as a means of supporting weapon system development programs. NWC investigated the feasibility of utilizing Closed Circuit Television (CCTV) technology as an alternative solution for providing optical data. In 1978 a program entitled Metric Video (measurements from video images) was formulated at the Naval Weapons Center, China Lake, California. The purpose of this program was to provide timely data, to reduce the number of operating personnel, and to lower data acquisition costs. Some of the task elements for this program included a near real-time vector miss-distance system, a weapons scoring system, a velocity measuring system, a time-space position system, and a system to replace film cameras for gathering real-time engineering sequential data. These task elements and the development of special hardware and techniques to achieve real-time data will be discussed briefly in this paper.
Colak, Ertugrul; Mutlu, Fezan; Bal, Cengiz; Oner, Setenay; Ozdamar, Kazim; Gok, Bulent; Cavusoglu, Yuksel
2012-01-01
We aimed to compare the performance of three different individual ROC methods (one from each of the broad categories of parametric, nonparametric and semiparametric analysis) for assessing continuous diagnostic tests: the binormal method as a parametric method, an empirical approach as a nonparametric method, and a semiparametric method using generalized linear models (GLM). We performed a simulation study with various sample sizes under normal, skewed, and monotone distributions. In the simulations, we used estimates of the ROC curve parameters a and b, estimates of the area under the curve (AUC), the standard errors and root mean square errors (RMSEs) of these estimates, and the 95% AUC confidence intervals for comparison. The three methodologies were also applied to an acute coronary syndrome dataset in which serum myoglobin levels were used as a biomarker for detecting acute coronary syndrome. The simulation and application studies suggest that the semiparametric ROC analysis using GLM is a reliable method when the distributions of the diagnostic test results are skewed and that it provides a smooth ROC curve for obtaining a unique cutoff value. A sample size of 50 is sufficient for applying the semiparametric ROC method. PMID:22844346
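Two of the three estimator families can be contrasted in a few lines: the empirical (nonparametric) AUC is the Mann-Whitney probability that a diseased value exceeds a non-diseased one, while the binormal (parametric) AUC is Φ(a/√(1+b²)) with a = (μ₁−μ₀)/σ₁ and b = σ₀/σ₁. The simulated test results below are illustrative, not the myoglobin data from the paper, and the GLM-based semiparametric method is not shown.

```python
import numpy as np
from math import erf, sqrt

# Simulated continuous diagnostic-test results (illustrative parameters).
rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 500)      # non-diseased
diseased = rng.normal(1.5, 1.0, 500)     # diseased

# Nonparametric: empirical AUC = P(diseased > healthy), Mann-Whitney style
# (ties ignored for brevity; the data here are continuous).
emp_auc = np.mean(diseased[:, None] > healthy[None, :])

# Parametric: binormal AUC = Phi(a / sqrt(1 + b**2)).
a = (diseased.mean() - healthy.mean()) / diseased.std(ddof=1)
b = healthy.std(ddof=1) / diseased.std(ddof=1)
binormal_auc = 0.5 * (1.0 + erf(a / sqrt(1.0 + b * b) / sqrt(2.0)))
```

On well-behaved normal data the two agree closely; the paper's point is that their behavior diverges under skewed distributions, where the smooth semiparametric curve helps.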
Model reduction techniques for fast blood flow simulation in parametrized geometries.
Manzoni, Andrea; Quarteroni, Alfio; Rozza, Gianluigi
2012-01-01
In this paper, we propose a new model reduction technique aimed at real-time blood flow simulations on a given family of geometrical shapes of arterial vessels. Our approach is based on the combination of a low-dimensional shape parametrization of the computational domain and the reduced basis method to solve the associated parametrized flow equations. We propose a preliminary analysis carried out on a set of arterial vessel geometries, described by means of a radial basis functions parametrization. In order to account for patient-specific arterial configurations, we reconstruct the latter by solving a suitable parameter identification problem. Real-time simulations of blood flow are thus performed on each reconstructed parametrized geometry, by means of the reduced basis method. We focus on a family of parametrized carotid artery bifurcations, by modelling blood flows using Navier-Stokes equations and measuring distributed outputs such as viscous energy dissipation or vorticity. The latter are indexes that might be correlated with the assessment of pathological risks. The approach advocated here can be applied to a broad variety of (different) flow problems related with geometry/shape variation, for instance related with shape sensitivity analysis, parametric exploration and shape design. Copyright © 2011 John Wiley & Sons, Ltd.
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes the nonlinearity of a medium and is known as a super-directivity loudspeaker. The parametric loudspeaker is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural public-address sound systems in museums, stations, streets, etc. In this paper, we discuss characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon; the inner and outer diameters were 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. Subjective test results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers, almost as with the ordinary dynamic loudspeakers, except for the case of sinusoidal waves with ITD. It was determined that the parametric loudspeaker could eliminate the contradiction between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppressed the crosstalk components.
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1984-01-01
The important algorithmic issues necessary to achieve a real-time flutter monitoring system are addressed, namely: guidelines for choosing appropriate model forms, reduction of the parameter-convergence transient, handling of multiple modes, the effect of over-parameterization, and estimate-accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter-parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation, and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.
A parametric approach for the estimation of the instantaneous speed of rotating machinery
NASA Astrophysics Data System (ADS)
Rodopoulos, Konstantinos; Yiakopoulos, Christos; Antoniadis, Ioannis
2014-02-01
A parametric method is proposed for the estimation of the instantaneous speed of rotating machines. The method belongs to the class of eigenvalue-based parametric signal processing methods. The major advantage of parametric methods over frequency-domain or time-frequency-domain based methods is their increased resolution and reduced computational cost. Moreover, advantages of eigenvalue-based methods over other parametric methods include their robustness to noise. A sensitivity analysis for the key parameters of the proposed method is performed, including the sampling frequency, the signal length and the robustness to noise. The effectiveness of the method is demonstrated on vibration measurements from a test rig during start-up and run-down, as well as during speed variations of a motorcycle engine. Compared to the Hilbert Transform and to the Discrete Energy Separation Algorithm (DESA), the proposed approach exhibits better behavior, while simultaneously offering computational simplicity, being able to be implemented analytically, even online.
Bonofiglio, Federico; Beyersmann, Jan; Schumacher, Martin; Koller, Michael; Schwarzer, Guido
2016-09-01
Meta-analysis of a survival endpoint is typically based on the pooling of hazard ratios (HRs). If competing risks occur, the HRs may lose translation into changes of survival probability. The cumulative incidence functions (CIFs), the expected proportion of cause-specific events over time, re-connect the cause-specific hazards (CSHs) to the probability of each event type. We use CIF ratios to measure treatment effect on each event type. To retrieve information on aggregated, typically poorly reported, competing risks data, we assume constant CSHs. Next, we develop methods to pool CIF ratios across studies. The procedure computes pooled HRs alongside and checks the influence of follow-up time on the analysis. We apply the method to a medical example, showing that follow-up duration is relevant both for pooled cause-specific HRs and CIF ratios. Moreover, if all-cause hazard and follow-up time are large enough, CIF ratios may reveal additional information about the effect of treatment on the cumulative probability of each event type. Finally, to improve the usefulness of such analysis, better reporting of competing risks data is needed. Copyright © 2015 John Wiley & Sons, Ltd.
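Under the constant cause-specific hazards assumption, the CIFs have a closed form: with hazards h1 and h2, CIF_k(t) = h_k/(h1+h2) · (1 − exp(−(h1+h2)·t)), and the two CIFs plus all-cause survival sum to one. The hazard values below are illustrative, not from the paper's medical example.

```python
import math

# CIFs under constant cause-specific hazards (two competing event types).
def cif(h_k, h_all, t):
    """Cumulative incidence of cause k by time t under constant CSHs."""
    return h_k / h_all * (1.0 - math.exp(-h_all * t))

h1, h2, t = 0.04, 0.01, 10.0      # events per time unit; follow-up length
cif1 = cif(h1, h1 + h2, t)
cif2 = cif(h2, h1 + h2, t)
cif_ratio = cif1 / cif2           # between causes this reduces to h1/h2
```

In the paper the ratios of interest compare treatment arms rather than causes; there the common time factor no longer cancels, which is why follow-up duration matters for the pooled CIF ratios.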
Experience with parametric binary dissection
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Parametric Binary Dissection (PBD) is a new algorithm that can be used for partitioning graphs embedded in 2- or 3-dimensional space. It partitions explicitly on the basis of nodes + λ × (edges cut), where λ is the ratio of the time to communicate over an edge to the time to compute at a node. The new algorithm is faster than the original binary dissection algorithm and attempts to obtain better partitions than the older algorithm, which only takes nodes into account. The performance of parametric dissection was compared with plain binary dissection on 3 large unstructured 3-d meshes obtained from computational fluid dynamics and on 2 random graphs. It was shown that the new algorithm can usually yield partitions that are substantially superior, but that its performance is heavily dependent on the input data.
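The objective PBD optimizes, nodes + λ × (edges cut) per part, is easy to evaluate for any candidate partition. The sketch below scores a given two-way partition of a toy 4-cycle graph; it shows the cost function only, not the dissection algorithm itself, and the graph and λ value are made up.

```python
# Evaluate the PBD objective for a given partition: the cost of a part is
# |nodes in part| + lam * (edges crossing the part boundary), and the
# partition is scored by its worst (maximum) part.
def pbd_cost(part_nodes, edges, assignment, lam):
    costs = {}
    for p in set(assignment.values()):
        nodes = [v for v in part_nodes if assignment[v] == p]
        cut = sum(1 for u, v in edges
                  if (assignment[u] == p) != (assignment[v] == p))
        costs[p] = len(nodes) + lam * cut
    return max(costs.values())

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]       # a 4-cycle
assign = {0: "A", 1: "A", 2: "B", 3: "B"}       # cut edges: (1,2) and (0,3)
cost = pbd_cost(nodes, edges, assign, lam=0.5)  # each part: 2 + 0.5*2 = 3
```

Setting λ = 0 recovers the plain binary dissection objective (node counts only), which is exactly the older algorithm's limitation noted above.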
PHAZE. Parametric Hazard Function Estimation
Atwood, C.L.
1990-09-01
PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
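The simplest of the three models, the exponential (constant) hazard, has a one-line maximum likelihood estimator: for a repairable system observed over a window of length T with n recorded failures, the constant-intensity MLE is n/T. The failure times and window below are made up; the linear and Weibull models require numerical maximization instead.

```python
# Constant-hazard (exponential model) MLE for a repairable component:
# lambda_hat = number of failures / total observation time.
failure_times = [12.0, 31.0, 55.0, 60.5, 88.0]   # hours since start (made up)
T = 100.0                                         # total observation window, hours

lam_hat = len(failure_times) / T                  # estimated failures per hour
```

Confidence regions for this estimate follow from the Poisson distribution of the failure count, which is the kind of inference the package automates.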
A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale.
Mircioiu, Constantin; Atkinson, Jeffrey
2017-05-10
A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple "yes" or "no" but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with "large" (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 "very important" and 4 "essential"), a "score analysis" based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand, as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restricting the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses.
Wide-band profile domain pulsar timing analysis
NASA Astrophysics Data System (ADS)
Lentati, L.; Kerr, M.; Dai, S.; Hobson, M. P.; Shannon, R. M.; Hobbs, G.; Bailes, M.; Bhat, N. D. Ramesh; Burke-Spolaor, S.; Coles, W.; Dempsey, J.; Lasky, P. D.; Levin, Y.; Manchester, R. N.; Osłowski, S.; Ravi, V.; Reardon, D. J.; Rosado, P. A.; Spiewak, R.; van Straten, W.; Toomey, L.; Wang, J.; Wen, L.; You, X.; Zhu, X.
2017-04-01
We extend profile domain pulsar timing to incorporate wide-band effects such as frequency-dependent profile evolution and broad-band shape variation in the pulse profile. We also incorporate models for temporal variations in both pulse width and in the separation in phase of the main pulse and interpulse. We perform the analysis with both nested sampling and Hamiltonian Monte Carlo methods. In the latter case, we introduce a new parametrization of the posterior that is extremely efficient in the low signal-to-noise regime and can be readily applied to a wide range of scientific problems. We apply this methodology to a series of simulations, and to between seven and nine years of observations for PSRs J1713+0747, J1744-1134 and J1909-3744 with frequency coverage that spans 700-3600 MHz. We use a smooth model for profile evolution across the full frequency range, and compare smooth and piecewise models for the temporal variations in dispersion measure (DM). We find that the profile domain framework consistently results in improved timing precision compared to the standard analysis paradigm by as much as 40 per cent for timing parameters. Incorporating smoothness in the DM variations into the model further improves timing precision by as much as 30 per cent. For PSR J1713+0747, we also detect pulse shape variation uncorrelated between epochs, which we attribute to variation intrinsic to the pulsar at a level consistent with previously published analyses. Not accounting for this shape variation biases the measured arrival times at the level of ~30 ns, the same order of magnitude as the expected shift due to gravitational waves in the pulsar timing band.
Spectral analysis of multiple time series
NASA Technical Reports Server (NTRS)
Dubman, M. R.
1972-01-01
Application of spectral analysis for mathematically determining relationship of random vibrations in structures and concurrent events in electric circuits, physiology, economics, and seismograms is discussed. Computer program for performing spectral analysis of multiple time series is described.
Frequency domain optical parametric amplification
Schmidt, Bruno E.; Thiré, Nicolas; Boivin, Maxime; Laramée, Antoine; Poitras, François; Lebrun, Guy; Ozaki, Tsuneyuki; Ibrahim, Heide; Légaré, François
2014-01-01
Today’s ultrafast lasers operate at the physical limits of optical materials to reach extreme performances. Amplification of single-cycle laser pulses with their corresponding octave-spanning spectra still remains a formidable challenge since the universal dilemma of gain narrowing sets limits for both real level pumped amplifiers as well as parametric amplifiers. We demonstrate that employing parametric amplification in the frequency domain rather than in time domain opens up new design opportunities for ultrafast laser science, with the potential to generate single-cycle multi-terawatt pulses. Fundamental restrictions arising from phase mismatch and damage threshold of nonlinear laser crystals are not only circumvented but also exploited to produce a synergy between increased seed spectrum and increased pump energy. This concept was successfully demonstrated by generating carrier envelope phase stable, 1.43 mJ two-cycle pulses at 1.8 μm wavelength. PMID:24805968
Mousavi, S M; Reihani, S N Seyed; Anvari, G; Anvari, M; Alinezhad, H G; Tabar, M Reza Rahimi
2017-07-06
We propose a nonlinear method for the analysis of the time series of the spatial position of a bead trapped in optical tweezers, which enables us to reconstruct its dynamical equation of motion. The main advantage of the method is that all the functions and parameters of the dynamics are determined directly (non-parametrically) from the measured series. It also allows us to determine, for the first time to our knowledge, the spatial dependence of the diffusion coefficient of a bead in an optical trap, and to demonstrate that it is not in general constant. This is in contrast with the main assumption of the popularly used power spectrum calibration method. The proposed method is validated via synthetic time series for the bead position with spatially varying diffusion coefficients. Our detailed analysis of the measured time series reveals that the power spectrum analysis considerably overestimates the force constant.
Parametric spatiotemporal oscillation in reaction-diffusion systems.
Ghosh, Shyamolina; Ray, Deb Shankar
2016-03-01
We consider a reaction-diffusion system in a homogeneous stable steady state. On perturbation by a time-dependent sinusoidal forcing of a suitable scaling parameter the system exhibits parametric spatiotemporal instability beyond a critical threshold frequency. We have formulated a general scheme to calculate the threshold condition for oscillation and the range of unstable spatial modes lying within a V-shaped region reminiscent of Arnold's tongue. Full numerical simulations show that depending on the specificity of nonlinearity of the models, the instability may result in time-periodic stationary patterns in the form of standing clusters or spatially localized breathing patterns with characteristic wavelengths. Our theoretical analysis of the parametric oscillation in reaction-diffusion system is corroborated by full numerical simulation of two well-known chemical dynamical models: chlorite-iodine-malonic acid and Briggs-Rauscher reactions.
NASA Astrophysics Data System (ADS)
Tramutoli, Valerio; Coviello, Irina; Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Lisi, Mariano; Makris, John P.; Paciello, Rossana; Pergola, Nicola; Satriano, Valeria; Vallianatos, Filippos
2015-04-01
Real-time integration of multi-parametric observations is expected to significantly contribute to the development of operational systems for time-Dependent Assessment of Seismic Hazard (t-DASH) and earthquake short-term (from days to weeks) forecasting. However, a very preliminary step in this direction is the identification of those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation of major earthquakes. In this paper one of these parameters (the Earth's emitted radiation in the Thermal Infra-Red spectral region) is considered for its possible correlation with M≥4 earthquakes that occurred in Greece between 2004 and 2013. The RST (Robust Satellite Technique) data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used first to define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. Taking into account physical models proposed to justify the existence of a correlation between TIR anomalies and earthquake occurrence, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability - CSEP - Project) were defined to drive the correlation analysis process. The analysis shows that more than 93% of all identified SSTAs occur in the pre-fixed space-time window around the time and location of occurrence of (M≥4) earthquakes, with a false positive rate smaller than 7%. The achieved results, and particularly the very low rate of false positives registered over such a long testing period, seem already sufficient (at least) to qualify TIR anomalies (identified by the RST approach and RETIRA index) among the parameters to be considered in the framework of a multi-parametric approach to time-Dependent Assessment of
Casellas, J; Brito, L C
2017-10-01
This technical note presents the program PaGELL v.1.5 (Parametric Genetic Evaluation of Lifespan in Livestock), a flexible software program to analyze (right-censored) longevity data in livestock populations, with a special emphasis on the genetic evaluation of the breeding stock. This software relies on a parametric generalization of the proportional hazard model; more specifically, the baseline hazard function follows a Weibull process and flexibility is gained by including an additional time-dependent effect with the number of change points defined by the user. The program can accommodate 3 different sources of variation (i.e., systematic, permanent environmental, and additive genetic effects) and both fixed and time-dependent patterns (only for systematic and permanent environmental effects). Analyses are performed within a Bayesian context by sampling from the joint posterior distribution of the model, and model fit can be easily determined by the calculation of the deviance information criterion. Although this software has already been used on field data sets, its performance has been double-checked on a simulated data set, and results are presented in this technical note. PaGELL v.1.5 was written in Fortran 95 language and, after compiling with the GNU Fortran Compiler v.4.7 or later, it has been tested in Windows, Linux, and MacOS operating systems (both 32- and 64-bit platforms). This program is available at http://www.casellas.info/files/pageII.zip. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mouzourides, P.; Kyprianou, A.; Neophytou, M. K.-A.
2013-12-01
Urban morphology characterization is crucial for the parametrization of boundary-layer development over urban areas. One complexity in such a characterization is the three-dimensional variation of the urban canopies and textures, which are customarily reduced to and represented by one-dimensional varying parametrization such as the aerodynamic roughness length and zero-plane displacement. The scope of the paper is to provide novel means for a scale-adaptive spatially-varying parametrization of the boundary layer by addressing this 3-D variation. Specifically, the 3-D variation of urban geometries often poses questions in the multi-scale modelling of air pollution dispersion and other climate or weather-related modelling applications that have not been addressed yet, such as: (a) how we represent urban attributes (parameters) appropriately for the multi-scale nature and multi-resolution basis of weather numerical models, (b) how we quantify the uniqueness of an urban database in the context of modelling urban effects in large-scale weather numerical models, and (c) how we derive the impact and influence of a particular building in pre-specified sub-domain areas of the urban database. We illustrate how multi-resolution analysis (MRA) addresses and answers the afore-mentioned questions by taking as an example the Central Business District of Oklahoma City. The selection of MRA is motivated by its capacity for multi-scale sampling; in the MRA the "urban" signal depicting a city is decomposed into an approximation, a representation at a higher scale, and a detail, the part removed at lower scales to yield the approximation. Different levels of approximations were deduced for the building height and planar packing density. A spatially-varying characterization with a scale-adaptive capacity is obtained for the boundary-layer parameters (aerodynamic roughness length and zero-plane displacement) using the MRA-deduced results for the building height and the planar packing
NASA Astrophysics Data System (ADS)
Dhote, Yogesh; Thombre, Shashikant
2016-10-01
This paper presents the thermal performance of the proposed double-flow natural-convection solar air heater with in-built liquid (oil) sensible heat storage. Unused engine oil was used as the thermal energy storage medium due to its good heat-retaining capacity even at high temperatures without evaporation. The performance evaluation was carried out for a day in March under the climatic conditions of Nagpur (India). A self-reliant computational model was developed in C++; the program computes the performance parameters for any day of the year and can be used for major cities in India. The effect of changes in storage oil quantity and inclination (tilt angle) on the overall efficiency of the solar air heater was studied. The performance was tested initially at storage oil quantities of 25, 50, 75 and 100 l for a plate spacing of 0.04 m at an inclination of 36°. It was found that the solar air heater gives the best performance at a storage oil quantity of 50 l. The performance of the proposed solar air heater was further tested for various combinations of storage oil quantity (50, 75 and 100 l) and inclination (0°, 15°, 30°, 45°, 60°, 75°, 90°). It was found that the proposed solar air heater with in-built oil storage shows its best performance for the combination of 50 l storage oil quantity and 60° inclination. Finally, the results of a parametric study, carried out for a fixed storage oil quantity of 25 l, a plate spacing of 0.03 m and an inclination of 36°, are presented in the form of graphs to show the behaviour of the various heat transfer and fluid flow parameters of the solar air heater.
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Jin, Rui-Bo; Sasaki, Masahide
2015-04-01
In spontaneous parametric down conversion (SPDC) based quantum information processing (QIP) experiments, there is a tradeoff between the coincidence count rate (i.e., the pumping power of the SPDC), which limits the rate of the protocol, and the visibility of the quantum interference, which limits the quality of the protocol. This tradeoff is mainly caused by the multi-photon pair emissions from the SPDCs. In theory, the problem is how to model the experiments without truncating these multi-photon emissions while including practical imperfections. In this paper, we establish a method to theoretically simulate SPDC-based QIPs which fully incorporates the effect of multi-photon emissions and various practical imperfections. The key ingredient in our method is the application of the characteristic function formalism which has been used in continuous variable QIPs. We apply our method to three examples: the Hong-Ou-Mandel interference and the Einstein-Podolsky-Rosen interference experiments, and the concatenated entanglement swapping protocol. For the first two examples, we show that our theoretical results quantitatively agree with the recent experimental results. We also provide closed expressions for these interference visibilities with the full multi-photon components and various imperfections. For the last example, we provide the general theoretical form of the concatenated entanglement swapping protocol in our method and show the numerical results up to five concatenations. Our method requires only a small computational resource (a few minutes on a commercially available computer), which was not possible in the previous theoretical approach. Our method will have applications in a wide range of SPDC-based QIP protocols with high accuracy and a reasonable computational resource.
NASA Astrophysics Data System (ADS)
Rezaee, Mousa; Jahangiri, Reza
2015-05-01
In this study, in the presence of supersonic aerodynamic loading, the nonlinear and chaotic vibrations and stability of a simply supported Functionally Graded Piezoelectric (FGP) rectangular plate with a bonded piezoelectric layer have been investigated. It is assumed that the plate is simultaneously exposed to the effects of harmonic uniaxial in-plane force, transverse piezoelectric excitations and aerodynamic loading. It is considered that the potential distribution varies linearly through the piezoelectric layer thickness, and the aerodynamic load is modeled by first-order piston theory. The von-Karman nonlinear strain-displacement relations are used to account for the geometrical nonlinearity. Based on the Classical Plate Theory (CPT) and applying Hamilton's principle, the nonlinear coupled partial differential equations of motion are derived. The Galerkin procedure is used to reduce the equations of motion to nonlinear ordinary differential Mathieu equations. The validity of the formulation for analyzing the Limit Cycle Oscillation (LCO) and aero-elastic stability boundaries is established by comparing the results with those of the literature, and a convergence study of the FGP plate is performed. By applying the Multiple Scales Method, the case of 1:2 internal resonance and primary parametric resonance is taken into account and the corresponding averaged equations are derived and analyzed numerically. The results are provided to investigate the effects of the forcing/piezoelectric detuning parameter, the amplitude of forcing/piezoelectric excitation and the dynamic pressure on the nonlinear dynamics and chaotic behavior of the FGP plate. It is revealed that under certain conditions, due to the existence of a bi-stable region of non-trivial solutions, the system shows hysteretic behavior. Moreover, in the absence of airflow, it is observed that variation of the control parameters leads to multi-periodic and chaotic motions.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to a smaller number of parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but very little in tropical continental regions.
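The Latin hypercube design used to explore the parameter space can be sketched in a few lines; the dimensions below echo the aerosol ensemble (256 samples, 16 parameters), but the code is a generic illustration and has nothing CAM5-specific in it:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0,1]^d: each axis is cut into n equal strata and each
    stratum is hit exactly once, with the stratum order shuffled per axis."""
    strata = np.tile(np.arange(n), (d, 1))      # d rows of 0..n-1
    strata = rng.permuted(strata, axis=1).T     # independent shuffle per axis
    return (strata + rng.random((n, d))) / n    # jitter within each stratum

rng = np.random.default_rng(7)
samples = latin_hypercube(n=256, d=16, rng=rng)
```

Each column is then rescaled to the physical range of its parameter; the stratification guarantees every marginal range is covered even with few samples.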
Stochastic time series analysis of fetal heart-rate variability
NASA Astrophysics Data System (ADS)
Shariati, M. A.; Dripps, J. H.
1990-06-01
Fetal Heart Rate (FHR) is one of the important features of fetal biophysical activity, and its long-term monitoring is used for the antepartum (the period of pregnancy before labour) assessment of fetal well-being. But as yet no successful method has been proposed to quantitatively represent the variety of random non-white patterns seen in FHR. The objective of this paper is to address this issue. In this study the Box-Jenkins method of model identification and diagnostic checking was used on phonocardiographically derived (averaged) FHR time series. Models remained exclusively autoregressive (AR). Kalman filtering in conjunction with a maximum likelihood estimation technique forms the parametric estimator. Diagnostics performed on the residuals indicated that a second-order model may be adequate in capturing the type of variability observed in 1 to 2 min data windows of FHR. The scheme may be viewed as a means of data reduction of a highly redundant information source. This allows much more efficient transmission of FHR information from remote locations to places with the facilities and expertise for closer analysis. The extracted parameters are intended to reflect numerically the important FHR features that are normally picked up visually by experts for their assessments. As a result, long-term FHR recordings made during the antepartum period could then be screened quantitatively for the detection of patterns considered normal or abnormal.
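The AR modelling step can be illustrated with ordinary least squares standing in for the paper's Kalman-filter/maximum-likelihood estimator; the series and coefficients below are simulated, not FHR data:

```python
import numpy as np

# Simulate an AR(2) series x[t] = a1*x[t-1] + a2*x[t-2] + noise
rng = np.random.default_rng(0)
a1, a2, n = 0.5, -0.3, 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.standard_normal()

# Least-squares identification: regress x[t] on its two lags
X = np.column_stack([x[1:-1], x[:-2]])   # lags 1 and 2
y = x[2:]
(a1_hat, a2_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a1_hat, a2_hat)
```

The two fitted coefficients are the "extracted parameters" in the abstract's sense: a second-order model compresses a long window of samples into two numbers plus a residual variance.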
A flexible time recording and time correlation analysis system
NASA Astrophysics Data System (ADS)
Shenhav, Nathan J.; Leiferman, Gabriel; Segal, Yitzhak; Notea, Amos
1983-02-01
A system was developed to digitize and record the time intervals between detection event pulses fed to its input channels from a detection device. The accumulated data is transferred continuously in real time to a disc through a PDP 11/34 minicomputer. Even though the system was designed for a specific purpose, i.e., the comparative study of passive neutron nondestructive assay methods, it can be characterized by its features as a general-purpose time-series recorder. The time correlation analysis is performed by software after completion of the data accumulation. The digitizing clock period is selectable, and any value larger than a minimum of 100 ns may be chosen. Bursts of up to 128 events with a frequency of up to 10 MHz may be recorded. With the present recorder-minicomputer combination, the maximal average recording frequency is 40 kHz.
Tan, Qihua; Zhao, J H; Iachine, I; Hjelmborg, J; Vach, W; Vaupel, J W; Christensen, K; Kruse, T A
2004-04-01
This report investigates the power issue in applying the non-parametric linkage analysis of affected sib-pairs (ASP) [Kruglyak and Lander, 1995: Am J Hum Genet 57:439-454] to localize genes that contribute to human longevity using long-lived sib-pairs. Data were simulated by introducing a recently developed statistical model for measuring marker-longevity associations [Yashin et al., 1999: Am J Hum Genet 65:1178-1193], enabling direct power comparison between linkage and association approaches. The non-parametric linkage (NPL) scores estimated in the region harboring the causal allele are evaluated to assess the statistical power for different genetic (allele frequency and risk) and heterogeneity parameters under various sampling schemes (age-cut and sample size). Based on the genotype-specific survival function, we derived a heritability calculation as an overall measurement for the effect of causal genes with different parameter settings so that the power can be compared for different modes (dominant, recessive) of inheritance. Our results show that the ASP approach is a powerful tool in mapping very strong-effect genes, both dominant and recessive. To map a rare dominant genetic variation that reduces hazard of death by half, a large sample (above 600 pairs) with at least one extremely long-lived (over age 99) sib in each pair is needed. Again, with large sample size and high age cut-off, the method is able to localize recessive genes with relatively small effects, but the power is very limited in the case of a dominant effect. Although the power issue may depend heavily on the true genetic nature in maintaining survival, our study suggests that results from small-scale sib-pair investigations should be interpreted with caution, given the complexity of human longevity. Copyright 2004 Wiley-Liss, Inc.
Nonlinear Analysis of Surface EMG Time Series
NASA Astrophysics Data System (ADS)
Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-04-01
Applications of nonlinear analysis of surface electromyography time series of patients with and without low back pain are presented. Limitations of the standard methods based on the power spectrum are discussed.
Hyun, Y; Lee, J S; Rha, J H; Lee, I K; Ha, C K; Lee, D S
2001-02-01
The purpose of this study was to investigate the differences between technetium-99m ethyl cysteinate dimer (99mTc-ECD) and technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) uptake in the same brains by means of statistical parametric mapping (SPM) analysis. We examined 20 patients (9 male, 11 female, mean age 62+/-12 years) using 99mTc-ECD and 99mTc-HMPAO single-photon emission tomography (SPET) and magnetic resonance imaging (MRI) of the brain less than 7 days after onset of stroke. MRI showed no cortical infarctions. Infarctions in the pons (6 patients) and medulla (1), ischaemic periventricular white matter lesions (13) and lacunar infarction (7) were found on MRI. Split-dose and sequential SPET techniques were used for 99mTc-ECD and 99mTc-HMPAO brain SPET, without repositioning of the patient. All of the SPET images were spatially transformed to standard space, smoothed and globally normalized. The differences between the 99mTc-ECD and 99mTc-HMPAO SPET images were statistically analysed using statistical parametric mapping (SPM) 96 software. The difference between two groups was considered significant at a threshold of uncorrected P values less than 0.01. Visual analysis showed no hypoperfused areas on either 99mTc-ECD or 99mTc-HMPAO SPET images. SPM analysis revealed significantly different uptake of 99mTc-ECD and 99mTc-HMPAO in the same brains. On the 99mTc-ECD SPET images, relatively higher uptake was observed in the frontal, parietal and occipital lobes, in the left superior temporal lobe and in the superior region of the cerebellum. On the 99mTc-HMPAO SPET images, relatively higher uptake was observed in the medial temporal lobes, thalami, periventricular white matter and brain stem. These differences in uptake of the two tracers in the same brains on SPM analysis suggest that interpretation of cerebral perfusion is possible using SPET with 99mTc-ECD and 99mTc-HMPAO.
Parametric Differentiation and Integration
ERIC Educational Resources Information Center
Chen, Hongwei
2009-01-01
Parametric differentiation and integration under the integral sign constitutes a powerful technique for calculating integrals. However, this topic is generally not included in the undergraduate mathematics curriculum. In this note, we give a comprehensive review of this approach, and show how it can be systematically used to evaluate most of the…
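A textbook instance of the technique (chosen here for illustration; the note's own examples are not reproduced) is the integral whose integrand has no elementary antiderivative in x but becomes trivial after differentiating with respect to the parameter:

```latex
F(a) = \int_0^1 \frac{x^a - 1}{\ln x}\,dx, \qquad a > -1.
% Differentiating under the integral sign removes the awkward \ln x:
F'(a) = \int_0^1 \frac{\partial}{\partial a}\,\frac{x^a - 1}{\ln x}\,dx
      = \int_0^1 x^a\,dx = \frac{1}{a+1}.
% Since F(0) = 0, integrating back in a gives
F(a) = \ln(a + 1).
```

The pattern is always the same: embed the integral in a parametric family, differentiate in the parameter, evaluate the easier integral, then integrate back using a known boundary value.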
Jansat, J M; Lastra, C F; Mariño, E L
1998-06-01
The influence of different weighting methods in non-linear regression analysis was evaluated in the pharmacokinetics of carebastine after a single intravenous dose of 10 mg in 8 healthy volunteers. Plasma concentrations were measured by HPLC using an on-line solid-phase extraction method and automated injection. The analytical method was fully validated and the function of the analytical error subsequently determined. The parametric approach was performed using different weighting methods, including the homoscedastic method (W = 1) and heteroscedastic methods using weights of 1/C, 1/C2, and the inverse of the concentration variance calculated through the analytical error function (1/V), and the results were statistically evaluated according to the normal distribution. Statistically significant differences were observed in the representative parameters of the disposition kinetics of carebastine. The use of a multiple comparison test for statistical analysis of all differences among group means indicated that differences were generated between the homoscedastic method (W = 1) and the heteroscedastic methods (1/C, 1/C2, and 1/V). The results obtained in the present study confirmed the utility of the analytical error function as a weighting method in non-linear regression analysis and reinforced the importance of the correct choice of weights to avoid the estimation of imprecise or erroneous pharmacokinetic parameters.
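The weighting schemes compared in the study map directly onto the `sigma` argument of scipy's `curve_fit` (weights of 1/C² correspond to `sigma` proportional to C). The sketch below fits a mono-exponential disposition model to invented concentrations with proportional error; the carebastine data themselves are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0.25, 12, 24)                     # sampling times (h)
C0_true, k_true = 100.0, 0.4
conc = C0_true * np.exp(-k_true * t) \
       * (1 + 0.05 * rng.standard_normal(t.size))  # proportional error

def model(t, c0, k):
    return c0 * np.exp(-k * t)

# Homoscedastic fit: W = 1 (unweighted)
p_homo, _ = curve_fit(model, t, conc, p0=(50, 0.1))

# Heteroscedastic fit: W = 1/C**2, expressed as sigma proportional to C
p_w, _ = curve_fit(model, t, conc, p0=(50, 0.1), sigma=conc)
print(p_homo, p_w)
```

With proportional error, the unweighted fit lets the large early concentrations dominate; the 1/C² weighting rebalances the residuals, which is the paper's point about choosing weights from the analytical error function.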
Circulant Matrices and Time-Series Analysis
ERIC Educational Resources Information Center
Pollock, D. S. G.
2002-01-01
This paper sets forth some salient results in the algebra of circulant matrices which can be used in time-series analysis. It provides easy derivations of some results that are central to the analysis of statistical periodograms and empirical spectral density functions. A statistical test for the stationarity or homogeneity of empirical processes…
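The central algebraic fact behind these derivations, that a circulant matrix is diagonalized by the discrete Fourier transform, so its eigenvalues are simply the DFT of its first column, can be checked numerically on a small invented example:

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first column is c: entry (i, j) = c[(i-j) mod n]."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([4.0, 1.0, 0.0, 1.0])   # symmetric first column -> real spectrum
C = circulant(c)

eig_fft = np.fft.fft(c)              # eigenvalues via the DFT, no eigensolver
eig_num = np.linalg.eigvals(C)       # eigenvalues the expensive way
print(np.sort(eig_fft.real), np.sort(eig_num.real))
```

This is why circulant approximations make periodogram and spectral-density computations cheap: the eigendecomposition costs one FFT instead of a full eigensolve.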
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus that monitor nonlinear data to automatically detect differences between similar but different states in a nonlinear process are disclosed. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
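The steps in the patent abstract can be sketched generically. The code below is not the patented method; it uses one common nonlinear measure from chaotic time-series analysis (the correlation sum over a time-delay embedding) to separate two invented "states", a structured signal and white noise:

```python
import numpy as np

def correlation_sum(x, m=3, tau=1, r=0.2):
    """Fraction of delay-embedded point pairs closer than radius r —
    a simple nonlinear measure from chaotic time-series analysis."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < r)

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 400)
state_a = np.sin(t) + 0.1 * rng.standard_normal(t.size)  # structured state
state_b = rng.standard_normal(t.size)                    # noise-like state

ca, cb = correlation_sum(state_a), correlation_sum(state_b)
print(ca, cb)
```

The structured state produces many more close pairs in embedding space than the noise; trending such a measure over serial windows, and comparing the trends, is the "determining by comparison" step.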
Analysis of soybean flowering-time genes
USDA-ARS?s Scientific Manuscript database
Control of soybean flowering time is important for geographic adaptation and for maximizing yield. RT-PCR analysis was performed using primers synthesized for a number of putative flowering-time genes based on homology of soybean EST and genomic sequences to Arabidopsis genes. RNA for cDNA synthesis ...
Fractal analysis of time varying data
Vo-Dinh, Tuan; Sadana, Ajit
2002-01-01
Characteristics of time varying data, such as an electrical signal, are analyzed by converting the data from a temporal domain into a spatial domain pattern. Fractal analysis is performed on the spatial domain pattern, thereby producing a fractal dimension D_F. The fractal dimension indicates the regularity of the time varying data.
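The temporal-to-spatial conversion followed by fractal analysis can be sketched as follows. The patent does not specify the estimator, so this sketch assumes the signal's graph is rasterized into the unit square and box counting estimates D_F (slope of log N(s) versus log s); a regular signal should give D_F near 1, an irregular one a larger value.

```python
import math
import random

def box_count_dimension(signal, sizes=(4, 8, 16, 32, 64)):
    """Estimate the fractal dimension D_F of a signal's graph by box counting.
    The temporal data are mapped into the unit square (a simple spatial-domain
    pattern), occupied grid boxes are counted at several resolutions s, and
    D_F is the least-squares slope of log N(s) versus log s."""
    n = len(signal)
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    pts = [(t / (n - 1), (x - lo) / span) for t, x in enumerate(signal)]
    logs = []
    for s in sizes:
        occupied = {(min(int(px * s), s - 1), min(int(py * s), s - 1))
                    for px, py in pts}
        logs.append((math.log(s), math.log(len(occupied))))
    mx = sum(x for x, _ in logs) / len(logs)
    my = sum(y for _, y in logs) / len(logs)
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

random.seed(0)
ramp = [t / 499 for t in range(500)]                      # regular: D_F ~ 1
noise = [random.uniform(0.0, 1.0) for _ in range(500)]    # irregular: D_F > 1
d_ramp = box_count_dimension(ramp)
d_noise = box_count_dimension(noise)
```

The ramp occupies only the diagonal boxes at every resolution, so its estimated dimension is 1; the noisy signal fills the plane more densely as the grid refines, raising the slope.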
Multiscale InSAR Time Series (MInTS) analysis of surface deformation
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, Y. N.; Agram, P. S.; DiCaprio, C. J.
2011-12-01
We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, such that coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least-squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.
Multiscale InSAR Time Series (MInTS) analysis of surface deformation
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Musé, P.; Simons, M.; Lin, Y. N.; Agram, P. S.; Dicaprio, C. J.
2012-02-01
We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.
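The damped, regularized least-squares inversion at the core of MInTS can be illustrated in miniature. The time-function basis and damping value below are invented for illustration, and MInTS itself inverts wavelet-coefficient time series with model-resolution-based damping rather than a raw scalar series; this is only a sketch of the Tikhonov step.

```python
import math

def ridge_fit(A, d, lam):
    """Tikhonov-damped least squares: solve (A^T A + lam*I) m = A^T d by
    Gaussian elimination with partial pivoting. A stand-in for the damped
    inversion used by MInTS (heavier lam ~ sparser acquisitions)."""
    rows, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(rows)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(A[k][i] * d[k] for k in range(rows)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= fct * M[col][c]
            b[r] -= fct * b[col]
    m = [0.0] * n
    for i in range(n - 1, -1, -1):
        m[i] = (b[i] - sum(M[i][j] * m[j] for j in range(i + 1, n))) / M[i][i]
    return m

# Illustrative time-function basis: offset + secular rate + log transient.
A = [[1.0, float(t), math.log(1.0 + t / 5.0)] for t in range(20)]
true_m = [2.0, 0.3, -1.5]
d = [sum(a * m for a, m in zip(row, true_m)) for row in A]
m_hat = ridge_fit(A, d, lam=1e-8)
```

On noiseless synthetic data the coefficients are recovered almost exactly; in practice lam would be chosen by cross-validation, as the abstract describes.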
Short-time Lyapunov exponent analysis
NASA Technical Reports Server (NTRS)
Vastano, J. A.
1990-01-01
A new technique for analyzing complicated fluid flows in numerical simulations has been successfully tested. The analysis uses short time Lyapunov exponent contributions and the associated Lyapunov perturbation fields. A direct simulation of the Taylor-Couette flow just past the onset of chaos demonstrated that this new technique marks important times during the system evolution and identifies the important flow features at those times. This new technique will now be applied to a 'minimal' turbulent channel.
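The idea of short-time (finite-time) Lyapunov exponents, computed over successive windows rather than as a single long-time limit, can be sketched for a 1-D chaotic map in place of a fluid simulation. The example below uses the logistic map at r = 4, whose long-time exponent is ln 2; the window-to-window values are the "contributions" that mark different phases of the evolution.

```python
import math

def short_time_lyapunov(f, df, x0, window, n_windows):
    """Short-time Lyapunov exponents: the average log-stretching rate of a
    tangent-space perturbation over each successive window of iterations
    (a 1-D map analogue of the flow analysis in the abstract)."""
    x, exps = x0, []
    for _ in range(n_windows):
        acc = 0.0
        for _ in range(window):
            acc += math.log(abs(df(x)))   # local stretching of a perturbation
            x = f(x)
        exps.append(acc / window)         # exponent contribution of this window
    return exps

r = 4.0                                   # fully chaotic logistic map
f = lambda x: r * x * (1.0 - x)
df = lambda x: r * (1.0 - 2.0 * x)
exps = short_time_lyapunov(f, df, 0.3, window=50, n_windows=40)
mean_exp = sum(exps) / len(exps)          # long-time exponent is ln 2
```

Individual windows fluctuate around ln 2; in the flow setting, such fluctuations are what identify the important times and flow features.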
Heat Transfer Parametric System Identification
1993-06-01
Thesis by Gregory K. Parker, Lieutenant, United States Navy. [Only report-documentation-page and table-of-contents fragments survive extraction; recoverable headings: Modeling Concept; Lumped Parameter Approach; Parametric System Identification; Basic Modeling. Approved for public release; distribution is unlimited.]
Optimization of noncollinear optical parametric amplification
NASA Astrophysics Data System (ADS)
Schimpf, D. N.; Rothardt, J.; Limpert, J.; Tünnermann, A.
2007-02-01
Noncollinearly phase-matched optical parametric amplifiers (NOPAs), pumped with the green light of a frequency-doubled Yb-doped fiber-amplifier system [1, 2], permit convenient generation of ultrashort pulses in the visible (VIS) and near infrared (NIR) [3]. The broad bandwidth of the parametric gain in the noncollinear pump configuration allows amplification of few-cycle optical pulses when seeded with a spectrally flat, re-compressible signal. Short pulses tunable over a wide region of the visible make it possible to push frontiers in physics and the life sciences: the resulting high temporal resolution is significant for many spectroscopic techniques, and the high peak powers of the produced pulses enable research in high-field physics. To understand the demands of noncollinear optical parametric amplification using a fiber pump source, it is important to investigate this configuration in detail [4]. Such an analysis provides not only insight into the parametric process but also an optimal choice of experimental parameters for the objective; here, the intention is to design a configuration that yields the shortest possible pulse. As a consequence of this analysis, the experimental setup could be optimized. A number of aspects of optical parametric amplifier performance have been treated analytically and computationally [5], but these treatments do not fully cover the situation under consideration here.
Nian, H.L.; Kuzay, T.M.; Sheng, I.C.
1996-09-01
A thermal buckling analysis is presented for the diamond disk in the commissioning window assembly designed for x-ray beamlines at the Advanced Photon Source. The analytical solution, together with the associated numerical analysis, helps predict the critical temperature of the diamond disk before thermal buckling occurs. © 1996 American Institute of Physics.
Entropic Analysis of Electromyography Time Series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Sung, Paul
2005-03-01
We are assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. In a typical EMG measurement, the voltage is sampled every millisecond; observing back muscle fatigue over one minute therefore yields a time series with 60,000 entries. We characterize the complexity of a time series by computing the time dependence of its Shannon entropy. Analysis of time series from the relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the variability of back muscle activity is much larger for healthy individuals than for individuals with LBP. In general, the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
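The windowed Shannon-entropy computation described in this abstract can be sketched as follows. The equal-width amplitude binning and the window length are assumptions (the abstract does not specify them), and the signal below is a synthetic stand-in for EMG data, not a recording.

```python
import math
import random
from collections import Counter

def shannon_entropy(window, n_bins=16):
    """Shannon entropy (bits) of one window of samples, after binning
    the amplitudes into n_bins equal-width bins."""
    lo, hi = min(window), max(window)
    width = (hi - lo) or 1.0
    counts = Counter(min(int((s - lo) / width * n_bins), n_bins - 1)
                     for s in window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_time_course(signal, window=1000, step=1000):
    """Entropy as a function of time: one value per successive window,
    giving the time dependence the abstract analyzes."""
    return [shannon_entropy(signal[i:i + window])
            for i in range(0, len(signal) - window + 1, step)]

random.seed(1)
emg_like = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # synthetic stand-in
course = entropy_time_course(emg_like)
```

Higher values along the course indicate more variable muscle activity; a perfectly stereotyped (constant) window scores zero entropy, so healthy-versus-LBP differences show up as a level shift in the course.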
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe, how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
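The combination of Monte Carlo sampling with n-factor combinatorial exploration can be sketched greedily for the 2-factor (pairwise) case: candidate test vectors are drawn at random, and each round keeps the candidate covering the most still-uncovered value pairs. This is only a sketch of the idea; it is not NASA's PT tool, and the parameter names below are invented for illustration.

```python
import itertools
import random

def pairwise_suite(params, seed=0, n_candidates=50):
    """Greedy pairwise (2-factor) test-suite generation with Monte Carlo
    candidates: repeat until every pair of parameter values appears in
    some test, far fewer tests than the full Cartesian product."""
    rng = random.Random(seed)
    names = list(params)
    uncovered = {(a, va, b, vb)
                 for a, b in itertools.combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(n_candidates):                  # Monte Carlo candidates
            cand = {n: rng.choice(params[n]) for n in names}
            gain = sum(1 for a, va, b, vb in uncovered
                       if cand[a] == va and cand[b] == vb)
            if gain > best_gain:
                best, best_gain = cand, gain
        if best_gain == 0:                             # guarantee progress
            a, va, b, vb = next(iter(uncovered))
            best = {n: rng.choice(params[n]) for n in names}
            best[a], best[b] = va, vb
        suite.append(best)
        uncovered = {(a, va, b, vb) for a, va, b, vb in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return suite

params = {"thrust": [0.8, 1.0, 1.2],          # hypothetical failure-injection
          "valve": ["open", "stuck"],         # parameters, for illustration
          "sensor": ["ok", "bias", "dropout"]}
suite = pairwise_suite(params)
```

For these three parameters the full Cartesian product has 18 combinations, but every value pair is already covered by a smaller greedy suite, which is the point of the n-factor approach for high-dimensional simulations.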
Delay differential analysis of time series.
Lainscsek, Claudia; Sejnowski, Terrence J
2015-03-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same…
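The delay and derivative embeddings this abstract contrasts can be constructed directly. A minimal sketch, in which the specific delays and the finite-difference scheme are illustrative choices, not the paper's:

```python
import math

def delay_embed(x, delays):
    """Nonuniform delay embedding of a scalar series: time t maps to the
    vector (x[t - d] for d in delays). A uniform embedding is the special
    case delays = (0, tau, 2*tau, ...)."""
    d_max = max(delays)
    return [[x[t - d] for d in delays] for t in range(d_max, len(x))]

def derivative_embed(x, dt=1.0):
    """Derivative embedding (x, x', x'') via central finite differences."""
    return [[x[t],
             (x[t + 1] - x[t - 1]) / (2.0 * dt),
             (x[t + 1] - 2.0 * x[t] + x[t - 1]) / dt ** 2]
            for t in range(1, len(x) - 1)]

x = [math.sin(0.1 * t) for t in range(200)]
E = delay_embed(x, delays=(0, 5, 13))   # nonuniform delays, chosen arbitrarily
D = derivative_embed(x)
```

Functional embeddings, as used in delay differential analysis, combine both constructions: derivatives of the signal are modeled as functions of its nonuniformly delayed values.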