NASA Astrophysics Data System (ADS)
Fu, H. X.; Qian, Y. H.
In this paper, a modification of the homotopy analysis method (HAM) is applied to study the two-degree-of-freedom coupled Duffing system. Firstly, the process of calculating the two-degree-of-freedom coupled Duffing system is presented. Secondly, the single periodic solutions and double periodic solutions are obtained by solving the constructed nonlinear algebraic equations. Finally, by comparing the periodic solutions obtained by the multi-frequency homotopy analysis method (MFHAM) with those of the fourth-order Runge-Kutta method, it is found that the approximate solution agrees well with the numerical solution.
Computer simulation of magnetic resonance spectra employing homotopy.
Gates, K E; Griffin, M; Hanson, G R; Burrage, K
1998-11-01
Multidimensional homotopy provides an efficient method for accurately tracing energy levels and hence transitions in the presence of energy level anticrossings and looping transitions. Herein we describe the application and implementation of homotopy to the analysis of continuous wave electron paramagnetic resonance spectra. The method can also be applied to electron nuclear double resonance, electron spin echo envelope modulation, solid-state nuclear magnetic resonance, and nuclear quadrupole resonance spectra. Copyright 1998 Academic Press.
Global Study of the Simple Pendulum by the Homotopy Analysis Method
ERIC Educational Resources Information Center
Bel, A.; Reartes, W.; Torresi, A.
2012-01-01
Techniques are developed to find all periodic solutions in the simple pendulum by means of the homotopy analysis method (HAM). This involves the solution of the equations of motion in two different coordinate representations. Expressions are obtained for the cycles and periods of oscillations with a high degree of accuracy in the whole range of…
On the complexity of a combined homotopy interior method for convex programming
NASA Astrophysics Data System (ADS)
Yu, Bo; Xu, Qing; Feng, Guochen
2007-03-01
In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211.], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is established for convex nonlinear programming.
A homotopy analysis method for the nonlinear partial differential equations arising in engineering
NASA Astrophysics Data System (ADS)
Hariharan, G.
2017-05-01
In this article, we have established the homotopy analysis method (HAM) for solving a few partial differential equations arising in engineering. This technique provides the solutions in rapid convergence series with computable terms for the problems with high degree of nonlinear terms appearing in the governing differential equations. The convergence analysis of the proposed method is also discussed. Finally, we have given some illustrative examples to demonstrate the validity and applicability of the proposed method.
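As a toy illustration of how the HAM deformation equations generate series terms (a minimal sketch, not one of the article's engineering examples), consider the linear test problem y' + y = 0, y(0) = 1, with linear operator L = d/dt and convergence-control parameter h; choosing h = -1 recovers the Taylor series of e^(-t):

```python
def deriv(c):
    # derivative of a polynomial stored as coefficients c[i] of t**i
    return [i * c[i] for i in range(1, len(c))] or [0.0]

def integ(c):
    # integral from 0 to t
    return [0.0] + [c[i] / (i + 1) for i in range(len(c))]

def padd(a, b):
    n = max(len(a), len(b))
    a, b = a + [0.0] * (n - len(a)), b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def ham_terms(h, n_terms):
    # higher-order deformation: y_m = chi_m * y_{m-1} + h * integral(N[y_{m-1}]),
    # with nonlinear operator N[y] = y' + y and chi_1 = 0, chi_m = 1 for m >= 2
    terms = [[1.0]]                 # y_0 = 1 satisfies L[y_0] = 0 with y_0(0) = 1
    for m in range(1, n_terms):
        prev = terms[-1]
        residual = padd(deriv(prev), prev)
        chi = 0.0 if m == 1 else 1.0
        terms.append(padd([chi * x for x in prev],
                          [h * x for x in integ(residual)]))
    return terms

def eval_partial_sum(terms, t):
    return sum(ci * t**i for c in terms for i, ci in enumerate(c))
```

With h = -1 the m-th term works out to (-t)^m / m!, so the partial sums converge to e^(-t); other h-values deform the series, which is exactly the freedom the convergence analysis of such methods exploits.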
Homotopy method for optimization of variable-specific-impulse low-thrust trajectories
NASA Astrophysics Data System (ADS)
Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng
2017-11-01
The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.
Ghanbari, Behzad
2014-01-01
We aim to study the convergence of the homotopy analysis method (HAM in short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.
On accelerated flow of MHD Powell-Eyring fluid via homotopy analysis method
NASA Astrophysics Data System (ADS)
Salah, Faisal; Viswanathan, K. K.; Aziz, Zainal Abdul
2017-09-01
The aim of this article is to obtain the approximate analytical solution for incompressible magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid induced by an accelerated plate. Both constant and variable acceleration cases are investigated. The approximate analytical solution in each case is obtained by using the Homotopy Analysis Method (HAM). The resulting nonlinear analysis is carried out to generate the series solution. Finally, graphical outcomes for different values of the material constant parameters on the velocity field are discussed and analyzed.
NASA Astrophysics Data System (ADS)
Sharma, Dinkar; Singh, Prince; Chauhan, Shubha
2017-06-01
In this paper, a combined form of the Laplace transform method with the homotopy perturbation method is applied to solve nonlinear fifth order Korteweg de Vries (KdV) equations. The method is known as homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further the results are compared with Homotopy perturbation method (HPM).
A homotopy analysis method for the option pricing PDE in illiquid markets
NASA Astrophysics Data System (ADS)
El-Khatib, Youssef
2012-09-01
One of the shortcomings of the Black and Scholes model of option pricing is the assumption that trading the underlying asset does not affect its price. This assumption can hold in perfectly liquid markets, but it is evidently not viable in markets with imperfect liquidity (illiquid markets). It is well known that markets with imperfect liquidity are more realistic. Thus, accounting for price impact when studying options is very important. This paper investigates a solution of the option pricing PDE in illiquid markets using the homotopy analysis method.
A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.
Doroudi, Alireza
2012-01-01
In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for calculation of axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in a rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only octopole superposition the resulting non-linear equation is symmetric; however, in the presence of hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used for solving the resulting non-linear equations. As a result, the ion secular frequencies as a function of non-linear field parameters are obtained. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only hexapole superposition, the results of this paper and the homotopy perturbation method are the same, and with hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.
NASA Astrophysics Data System (ADS)
Jia, Xiaofei
2018-06-01
Starting from the basic equations describing the evolution of the carriers and photons inside a semiconductor optical amplifier (SOA), the equation governing pulse propagation in the SOA is derived. By employing homotopy analysis method (HAM), a series solution for the output pulse by the SOA is obtained, which can effectively characterize the temporal features of the nonlinear process during the pulse propagation inside the SOA. Moreover, the analytical solution is compared with numerical simulations with a good agreement. The theoretical results will benefit the future analysis of other problems related to the pulse propagation in the SOA.
A monolithic homotopy continuation algorithm with application to computational fluid dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.; Zingg, David W.
2016-09-01
A new class of homotopy continuation methods is developed suitable for globalizing quasi-Newton methods for large sparse nonlinear systems of equations. The new continuation methods, described as monolithic homotopy continuation, differ from the classical predictor-corrector algorithm in that the predictor and corrector phases are replaced with a single phase which includes both a predictor and corrector component. Conditional convergence and stability are proved analytically. Using a Laplacian-like operator to construct the homotopy, the new algorithm is shown to be more efficient than the predictor-corrector homotopy continuation algorithm as well as an implementation of the widely-used pseudo-transient continuation algorithm for some inviscid and turbulent, subsonic and transonic external aerodynamic flows over the ONERA M6 wing and the NACA 0012 airfoil using a parallel implicit Newton-Krylov finite-difference flow solver.
Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Abuasad, Salah; Hashim, Ishak
2018-04-01
In this paper, we present the homotopy decomposition method with a modified definition of beta fractional derivative for the first time to find exact solution of one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained by using fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
Experiments with conjugate gradient algorithms for homotopy curve tracking
NASA Technical Reports Server (NTRS)
Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.
1991-01-01
There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves in order to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure to apply the Padé method to find approximate solutions for nonlinear differential equations. Moreover, we present some case studies showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool to obtain a power series solution for post-treatment with the Padé approximant.
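To make the Padé construction concrete (an illustrative sketch, not the authors' implementation), the [L/M] approximant p(x)/q(x) can be computed from the first L+M+1 series coefficients: the denominator coefficients follow from a small linear system expressing that the series of q·c - p vanishes through order L+M, and the numerator from a convolution:

```python
from fractions import Fraction

def solve_linear(A, b):
    # Gauss-Jordan elimination with exact rational arithmetic
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, L, M):
    """[L/M] Pade approximant p/q from series coefficients c[0..L+M], q[0] = 1."""
    get = lambda k: c[k] if 0 <= k < len(c) else Fraction(0)
    # denominator: sum_{j=0}^{M} q[j] * c[L+k-j] = 0 for k = 1..M
    A = [[get(L + k - j) for j in range(1, M + 1)] for k in range(1, M + 1)]
    b = [-get(L + k) for k in range(1, M + 1)]
    q = [Fraction(1)] + solve_linear(A, b)
    # numerator by convolution: p[k] = sum_{j=0}^{min(k,M)} q[j] * c[k-j]
    p = [sum(q[j] * get(k - j) for j in range(min(k, M) + 1)) for k in range(L + 1)]
    return p, q
```

For exp(x) with c_k = 1/k!, the [2/2] approximant comes out as (1 + x/2 + x²/12)/(1 - x/2 + x²/12), the classical result.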
Series Expansion of Functions with He's Homotopy Perturbation Method
ERIC Educational Resources Information Center
Khattri, Sanjay Kumar
2012-01-01
Finding a series expansion, such as Taylor series, of functions is an important mathematical concept with many applications. Homotopy perturbation method (HPM) is a new, easy to use and effective tool for solving a variety of mathematical problems. In this study, we present how to apply HPM to obtain a series expansion of functions. Consequently,…
Application of the Homotopy Perturbation Method to the Nonlinear Pendulum
ERIC Educational Resources Information Center
Belendez, A.; Hernandez, A.; Belendez, T.; Marquez, A.
2007-01-01
The homotopy perturbation method is used to solve the nonlinear differential equation that governs the nonlinear oscillations of a simple pendulum, and an approximate expression for its period is obtained. Only one iteration leads to high accuracy of the solutions and the relative error for the approximate period is less than 2% for amplitudes as…
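The paper's HPM period expression is not reproduced here, but the exact period that such approximations are measured against has a compact form via the arithmetic-geometric mean, T(A) = T0 / agm(1, cos(A/2)) with T0 = 2π√(L/g), which makes relative-error claims easy to check numerically (a sketch, with assumed values L = 1 m and g = 9.81 m/s²):

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean, quadratically convergent
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def pendulum_period(amplitude, L=1.0, g=9.81):
    """Exact period of a simple pendulum; amplitude in radians (0 < A < pi)."""
    T0 = 2.0 * math.pi * math.sqrt(L / g)   # small-angle period
    return T0 / agm(1.0, math.cos(amplitude / 2.0))
```

For example, at an amplitude of 90 degrees the exact period exceeds the small-angle period by about 18%, which is the scale of error any amplitude-independent approximation must contend with.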
Saranya, K; Mohan, V; Kizek, R; Fernandez, C; Rajendran, L
2018-02-01
The theory of glucose-responsive composite membranes for the planar diffusion and reaction process is extended to a microsphere membrane. The theoretical model of glucose oxidation and hydrogen peroxide production in the chitosan-alginate microsphere is discussed in this manuscript for the first time. We report an analytically derived methodology utilizing homotopy perturbation to perform the numerical simulation. The influence and sensitivity analysis of various parameters on the concentrations of gluconic acid and hydrogen peroxide are also discussed. The theoretical results make it possible to predict and optimize the performance of the enzyme kinetics.
NASA Astrophysics Data System (ADS)
Zolfaghari, M.; Ghaderi, R.; Sheikhol Eslami, A.; Ranjbar, A.; Hosseinnia, S. H.; Momani, S.; Sadati, J.
2009-10-01
The enhanced homotopy perturbation method (EHPM) is applied for finding improved approximate solutions of the well-known Bagley-Torvik equation for three different cases. The main characteristic of the EHPM is using a stabilized linear part, which guarantees the stability and convergence of the overall solution. The results are finally compared with the Adams-Bashforth-Moulton numerical method, the Adomian decomposition method (ADM) and the fractional differential transform method (FDTM) to verify the performance of the EHPM.
Discrete homotopy analysis for optimal trading execution with nonlinear transient market impact
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Gatheral, Jim; Lillo, Fabrizio
2016-10-01
Optimal execution in financial markets is the problem of how to trade a large quantity of shares incrementally in time in order to minimize the expected cost. In this paper, we study the problem of optimal execution in the presence of nonlinear transient market impact. Mathematically, this problem is equivalent to solving a strongly nonlinear integral equation, which in our model is a weakly singular Urysohn equation of the first kind. We propose an approach based on the Homotopy Analysis Method (HAM), whereby a well-behaved initial trading strategy is continuously deformed to lower the expected execution cost. Specifically, we propose a discrete version of the HAM, i.e. the DHAM approach, in order to use the method when the integrals to compute have no closed form solution. We find that the optimal solution is front loaded for concave instantaneous impact even when the investor is risk neutral. More importantly, we find that the expected cost of the DHAM strategy is significantly smaller than the cost of conventional strategies.
High Order Accurate Algorithms for Shocks, Rapidly Changing Solutions and Multiscale Problems
2014-11-13
for front propagation with obstacles, and homotopy method for steady states. Applications include high order simulations for 3D gaseous detonations and a sound generation study via…
Analytical solution of the nonlinear diffusion equation
NASA Astrophysics Data System (ADS)
Shanker Dubey, Ravi; Goswami, Pranay
2018-05-01
In the present paper, we derive the solution of nonlinear fractional partial differential equations using an efficient approach based on the q-homotopy analysis transform method (q-HATM). The fractional derivatives in the diffusion equations are considered in the Caputo sense. The derived results are also demonstrated graphically.
Linear homotopy solution of nonlinear systems of equations in geodesy
NASA Astrophysics Data System (ADS)
Paláncz, Béla; Awange, Joseph L.; Zaletnyik, Piroska; Lewis, Robert H.
2010-01-01
A fundamental task in geodesy is solving systems of equations. Many geodetic problems are represented as systems of multivariate polynomials. A common problem in solving such systems is improper initial starting values for iterative methods, leading to convergence to solutions with no physical meaning, or to convergence that requires global methods. Though symbolic methods such as Groebner bases or resultants have been shown to be very efficient, e.g., providing solutions for determined systems such as the 3-point problem of the 3D affine transformation, the symbolic algebra can be very time consuming, even with special Computer Algebra Systems (CAS). This study proposes the Linear Homotopy method, which can be implemented easily in high-level computer languages like C++ and Fortran that are faster than CAS by at least two orders of magnitude. Using Mathematica, the power of Homotopy is demonstrated in solving three nonlinear geodetic problems: resection, GPS positioning, and affine transformation. The method, which enlarges the domain of convergence, is found to be efficient, less sensitive to rounding errors, and of lower complexity compared to other local methods like Newton-Raphson.
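A minimal sketch of the idea (illustrative only, not the authors' Linear Homotopy implementation): a Newton homotopy H(x, t) = F(x) - (1 - t)F(x0) connects an arbitrary starting point x0 at t = 0 to the target system at t = 1, and the solution is tracked by stepping t and correcting with Newton iterations, here for a toy 2x2 polynomial system:

```python
def F(v):
    # toy target system: x^2 + y^2 = 4, x*y = 1
    x, y = v
    return [x * x + y * y - 4.0, x * y - 1.0]

def J(v):
    # Jacobian of F
    x, y = v
    return [[2 * x, 2 * y], [y, x]]

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def newton_homotopy(x0, steps=50, newton_iters=5):
    # H(x, t) = F(x) - (1 - t) * F(x0); H(x0, 0) = 0 and H(x, 1) = F(x)
    f0 = F(x0)
    x = list(x0)
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            r = [fi - (1.0 - t) * f0i for fi, f0i in zip(F(x), f0)]
            dx = solve2(J(x), r)
            x = [xi - di for xi, di in zip(x, dx)]
    return x
```

Starting from x0 = (2, 1), which solves nothing, the tracked path ends at the nearby true root (√(2+√3), 1/√(2+√3)) ≈ (1.9319, 0.5176); the point of the homotopy is that no good initial guess for F itself was needed.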
Reck, Kasper; Thomsen, Erik V; Hansen, Ole
2011-01-31
The scalar wave equation, or Helmholtz equation, describes within a certain approximation the electromagnetic field distribution in a given system. In this paper we show how to solve the Helmholtz equation in complex geometries using conformal mapping and the homotopy perturbation method. The solution of the mapped Helmholtz equation is found by solving an infinite series of Poisson equations using two dimensional Fourier series. The solution is entirely based on analytical expressions and is not mesh dependent. The analytical results are compared to a numerical (finite element method) solution.
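The elementary building block of such a scheme, a single Poisson solve on the unit square by a double Fourier sine series, can be sketched as follows (an illustrative reconstruction, not the authors' code; the sine coefficients are approximated by the midpoint rule):

```python
import math

def sine_coeffs(f, M, N=64):
    """2D Fourier sine coefficients f_mn of f on the unit square (midpoint rule)."""
    xs = [(j + 0.5) / N for j in range(N)]
    c = {}
    for m in range(1, M + 1):
        for n in range(1, M + 1):
            s = sum(f(x, y) * math.sin(m * math.pi * x) * math.sin(n * math.pi * y)
                    for x in xs for y in xs)
            c[(m, n)] = 4.0 * s / (N * N)
    return c

def poisson_solve(f, M=8):
    """Truncated series solution u of -laplacian(u) = f, u = 0 on the boundary."""
    c = sine_coeffs(f, M)
    def u(x, y):
        return sum(c[(m, n)] / (math.pi ** 2 * (m * m + n * n))
                   * math.sin(m * math.pi * x) * math.sin(n * math.pi * y)
                   for m in range(1, M + 1) for n in range(1, M + 1))
    return u
```

Each term sin(mπx)sin(nπy) is an eigenfunction of -∇² with eigenvalue π²(m² + n²), so dividing the forcing coefficients by the eigenvalues yields the solution term by term, which is what makes the series-of-Poisson-equations approach tractable.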
Piecewise-homotopy analysis method (P-HAM) for first order nonlinear ODE
NASA Astrophysics Data System (ADS)
Chin, F. Y.; Lem, K. H.; Chong, F. S.
2013-09-01
In the homotopy analysis method (HAM), the determination of the value of the auxiliary parameter h is based on the valid region of the h-curve, in which the horizontal segment of the h-curve decides the valid h-region. Any h-value taken from the valid region, provided that the order of deformation is large enough, will in principle yield an approximation series that converges to the exact solution. However, it is found that an h-value chosen within this valid region does not always promise a good approximation at finite order. This paper suggests an improved method called Piecewise-HAM (P-HAM). Instead of a single h-value, this method suggests using many h-values. Each h-value comes from an individual h-curve, while each h-curve is plotted by fixing the time t at a different value. Each h-value is claimed to produce a good approximation only in a neighborhood centered at the corresponding t on which the h-curve is based. The segments of these good approximations are then joined to form the approximation curve. In this way, the convergence region is further enlarged. The P-HAM is illustrated and supported by examples.
NASA Astrophysics Data System (ADS)
Roul, Pradip; Warbhe, Ujwal
2017-08-01
The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
A pertinent approach to solve nonlinear fuzzy integro-differential equations.
Narayanamoorthy, S; Sathiyapriya, S P
2016-01-01
Fuzzy integro-differential equations are one of the important parts of fuzzy analysis theory that hold theoretical as well as applicable value in analytical dynamics, and so an appropriate computational algorithm to solve them is essential. In this article, we use parametric forms of fuzzy numbers and suggest an applicable approach for solving nonlinear fuzzy integro-differential equations using the homotopy perturbation method. A clear and detailed description of the proposed method is provided. Our main objective is to illustrate that the construction of an appropriate convex homotopy in a proper way leads to highly accurate solutions with less computational work. The efficiency of the approximation technique is expressed via stability and convergence analysis so as to guarantee the efficiency and performance of the methodology. Numerical examples are demonstrated to verify the convergence and reveal the validity of the presented numerical technique. Numerical results are tabulated and examined by comparing the obtained approximate solutions with the known exact solutions. Graphical representations of the exact and acquired approximate fuzzy solutions clarify the accuracy of the approach.
NASA Astrophysics Data System (ADS)
Pandey, Rishi Kumar; Mishra, Hradyesh Kumar
2017-11-01
In this paper, a semi-analytic numerical technique is applied to the solution of the time-space fractional telegraph equation. This numerical technique is based on the coupling of the homotopy analysis method and the Sumudu transform. It shows a clear advantage over mesh methods such as the finite difference method, and also over polynomial methods such as perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain and interprets the results consistently.
BFV-Complex and Higher Homotopy Structures
NASA Astrophysics Data System (ADS)
Schätz, Florian
2009-03-01
We present a connection between the BFV-complex (abbreviation for Batalin-Fradkin-Vilkovisky complex) and the strong homotopy Lie algebroid associated to a coisotropic submanifold of a Poisson manifold. We prove that the latter structure can be derived from the BFV-complex by means of homotopy transfer along contractions. Consequently the BFV-complex and the strong homotopy Lie algebroid structure are L∞ quasi-isomorphic and control the same formal deformation problem. However there is a gap between the non-formal information encoded in the BFV-complex and in the strong homotopy Lie algebroid respectively. We prove that there is a one-to-one correspondence between coisotropic submanifolds given by graphs of sections and equivalence classes of normalized Maurer-Cartan elements of the BFV-complex. This does not hold if one uses the strong homotopy Lie algebroid instead.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As case studies we will solve four ordinary differential equations, and we will show that the proposed solutions have good accuracy; in one case we will even obtain an exact solution. In the sequel, we will see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
NASA Astrophysics Data System (ADS)
Sravanthi, C. S.; Gorla, R. S. R.
2018-02-01
The aim of this paper is to study the effects of chemical reaction and heat source/sink on a steady MHD (magnetohydrodynamic) two-dimensional mixed convective boundary layer flow of a Maxwell nanofluid over a porous exponentially stretching sheet in the presence of suction/blowing. Convective boundary conditions of temperature and nanoparticle concentration are employed in the formulation. Similarity transformations are used to convert the governing partial differential equations into non-linear ordinary differential equations. The resulting non-linear system has been solved analytically using an efficient technique, namely: the homotopy analysis method (HAM). Expressions for velocity, temperature and nanoparticle concentration fields are developed in series form. Convergence of the constructed solution is verified. A comparison is made with the available results in the literature and our results are in very good agreement with the known results. The obtained results are presented through graphs for several sets of values of the parameters and salient features of the solutions are analyzed. Numerical values of the local skin-friction, Nusselt number and nanoparticle Sherwood number are computed and analyzed.
NASA Astrophysics Data System (ADS)
Shi, Yu; Wang, Yue; Xu, Shijie
2018-04-01
The motion of a massless particle in the gravity of a binary asteroid system, referred to as the restricted full three-body problem (RF3BP), is fundamental, not only for the evolution of the binary system, but also for the design of relevant space missions. In this paper, equilibrium points and associated periodic orbit families in the gravity of a binary system are investigated, with the binary (66391) 1999 KW4 as an example. The polyhedron shape model is used to describe the irregular shapes and corresponding gravity fields of the primary and secondary of (66391) 1999 KW4, which is more accurate than the ellipsoid shape model used in previous studies and provides a high-fidelity representation of the gravitational environment. Both the synchronous and non-synchronous states of the binary system are considered. For the synchronous binary system, the equilibrium points and their stability are determined, and periodic orbit families emanating from each equilibrium point are generated by using the shooting (multiple shooting) method and the homotopy method, where the homotopy function connects the circular restricted three-body problem and the RF3BP. In the non-synchronous binary system, trajectories of equivalent equilibrium points are calculated, and the associated periodic orbits are obtained by using the homotopy method, where the homotopy function connects the synchronous and non-synchronous systems. Although only the binary (66391) 1999 KW4 is considered, our methods are also well applicable to other binary systems with polyhedron shape data. Our results on equilibrium points and associated periodic orbits provide general insights into the dynamical environment and orbital behaviors in proximity of small binary asteroids and enable trajectory design and mission operations in future binary system explorations.
Calculation of the neutron diffusion equation by using Homotopy Perturbation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koklu, H., E-mail: koklu@gantep.edu.tr; Ozer, O.; Ersoy, A.
The distribution of neutrons in a nuclear fuel element in the reactor core can be calculated by neutron diffusion theory, the basic and simplest approximation for the neutron flux function in the reactor core. In this study, the neutron flux function is obtained by the Homotopy Perturbation Method (HPM), a new and convenient method of recent years. The one-group time-independent neutron diffusion equation is examined for the most commonly treated reactor core geometries (spherical, cubic and cylindrical) in the frame of the HPM. It is observed that the HPM produces excellent results consistent with the existing literature.
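The spherically symmetric case admits a compact illustration. Writing phi(r) = u(r)/r turns the one-group diffusion equation into u'' + B^2 u = 0, and the HPM recursion rebuilds the classical flux shape sin(Br)/(Br). A minimal sketch (the substitution and the truncation order are our own illustrative choices, not the paper's):

```python
import sympy as sp

r, B = sp.symbols('r B', positive=True)

# HPM for u'' + B**2*u = 0, u(0)=0, u'(0)=1, where u = r*phi reduces
# the spherically symmetric one-group diffusion equation to this form.
# The embedding (1-p)*u'' + p*(u'' + B**2*u) = 0, with u expanded in
# powers of p, yields u_{k+1} = -B**2 * (double integral of u_k).
u = r                      # zeroth-order term from the initial conditions
series = u
for _ in range(5):
    u = -B**2 * sp.integrate(sp.integrate(u, (r, 0, r)), (r, 0, r))
    series += u

# The partial sums rebuild the Taylor series of sin(B*r)/B, so the flux
# phi = series/r approaches the classical shape sin(B*r)/(B*r).
exact = sp.series(sp.sin(B * r) / B, r, 0, 12).removeO()
print(sp.simplify(series - exact))  # -> 0
```

With five homotopy terms the series already matches sin(Br)/B through order r^11, which is why the method reproduces the textbook flux shapes for these geometries.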
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
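The fixed-point idea can be illustrated in one dimension: deform an easy convex function into a hard one and integrate the Davidenko equation so that the tracked point stays a stationary point of the interpolated objective. The functions below are illustrative stand-ins, not the maximum-entropy functionals of the paper:

```python
# Homotopy-flow optimization in one dimension: deform an easy convex
# objective f0 into the hard one f1 and follow the minimizer, which is
# a fixed point of the interpolated gradient flow.
df0 = lambda x: 2.0 * (x - 1.0)               # f0(x) = (x-1)**2
d2f0 = lambda x: 2.0
df1 = lambda x: 4.0 * x**3 - 4.0 * x + 0.3    # f1(x) = (x**2-1)**2 + 0.3*x
d2f1 = lambda x: 12.0 * x**2 - 4.0

x, steps = 1.0, 1000                          # start at the minimum of f0
for k in range(steps):
    t = k / steps
    # Davidenko equation: keep grad F(x(t), t) = 0 along the path,
    # where F = (1-t)*f0 + t*f1.
    curv = (1.0 - t) * d2f0(x) + t * d2f1(x)
    x -= (df1(x) - df0(x)) / curv / steps
for _ in range(5):                            # Newton polish at t = 1
    x -= df1(x) / d2f1(x)
print(round(x, 3))  # -> 0.96, the stationary point continued from x = 1
```

The endpoint is a stationary point of f1 with positive curvature, continued smoothly from the unique minimizer of f0; in the paper's setting the prior update plays the role of the continuation parameter.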
Minimizing Higgs potentials via numerical polynomial homotopy continuation
NASA Astrophysics Data System (ADS)
Maniatis, M.; Mehta, D.
2012-08-01
The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like non-linearity. The detection of all stationary points reveals the structure of the potential: maxima, metastable minima and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
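The mechanics of NPHC can be shown on a toy potential. For the one-dimensional double well V(x) = x^4/4 - x^2/2, all stationary points are the roots of V'(x) = x^3 - x, and a total-degree start system with the standard gamma trick tracks each start root to a distinct stationary point. A sketch (step counts and gamma are arbitrary illustrative choices):

```python
import numpy as np

# Toy numerical polynomial homotopy continuation (NPHC): find ALL
# stationary points of V(x) = x**4/4 - x**2/2, i.e. the roots of
# g(x) = x**3 - x.  The start system f(x) = x**3 - 1 has known roots
# exp(2*pi*i*k/3); H(x,t) = (1-t)*gamma*f(x) + t*g(x) is tracked
# from t = 0 to t = 1 with an Euler predictor and Newton corrector.
gamma = np.exp(0.7j)                 # random phase ("gamma trick")
f, df = lambda x: x**3 - 1, lambda x: 3 * x**2
g, dg = lambda x: x**3 - x, lambda x: 3 * x**2 - 1

def track(x, steps=200):
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        Ht = g(x) - gamma * f(x)                       # dH/dt
        Hx = (1 - t0) * gamma * df(x) + t0 * dg(x)     # dH/dx
        x = x - Ht / Hx * (t1 - t0)                    # Euler predictor
        for _ in range(3):                             # Newton corrector
            H = (1 - t1) * gamma * f(x) + t1 * g(x)
            Hx = (1 - t1) * gamma * df(x) + t1 * dg(x)
            x = x - H / Hx
    return x

starts = [np.exp(2j * np.pi * k / 3) for k in range(3)]
roots = sorted(round(track(s).real, 6) + 0.0 for s in starts)
print(roots)  # all three stationary points: [-1.0, 0.0, 1.0]
```

The same structure, with polynomial systems in many variables and professional path trackers, is what guarantees that no metastable minimum or saddle point of the Higgs potential is missed.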
Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers
NASA Astrophysics Data System (ADS)
Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu
2017-10-01
Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.
Narayanamoorthy, S; Sathiyapriya, S P
2016-01-01
In this article, we focus on linear and nonlinear fuzzy Volterra integral equations of the second kind and propose a numerical scheme using the homotopy perturbation method (HPM) to obtain fuzzy approximate solutions. An algorithmic form of the HPM is also designed for this purpose. To illustrate the potential of the approach, two test problems are offered, and the obtained numerical results are compared with the existing exact solutions and depicted in plots to reveal the precision and reliability of the method.
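For the crisp (non-fuzzy) special case, the HPM reduces to a simple recursion of repeated integrals that is easy to sketch. For u(x) = 1 + integral of u from 0 to x, whose exact solution is exp(x), each homotopy term is the integral of the previous one (this illustrates only the scheme's skeleton; the paper's fuzzy version carries interval-valued data):

```python
import sympy as sp

# HPM for the crisp Volterra equation u(x) = 1 + Integral(u(t), (t, 0, x)),
# exact solution exp(x).  The homotopy terms obey
# u_{k+1}(x) = Integral(u_k(t), (t, 0, x)), so partial sums reproduce
# the Taylor series of exp(x).
x, t = sp.symbols('x t')
u = sp.Integer(1)          # u0 from the forcing term
approx = u
for _ in range(8):
    u = sp.integrate(u.subs(x, t), (t, 0, x))
    approx += u

exact = sp.series(sp.exp(x), x, 0, 9).removeO()
print(sp.simplify(approx - exact))  # -> 0
```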
A new approach to exact optical soliton solutions for the nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Morales-Delgado, V. F.; Gómez-Aguilar, J. F.; Baleanu, Dumitru
2018-05-01
By using the modified homotopy analysis transform method, we construct the analytical solutions of the space-time generalized nonlinear Schrödinger equation involving a new fractional conformable derivative in the Liouville-Caputo sense and the fractional-order derivative with the Mittag-Leffler law. Employing theoretical parameters, we present some numerical simulations and compare the solutions obtained.
Approximating a nonlinear advanced-delayed equation from acoustics
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-10-01
We approximate the solution of a particular non-linear mixed-type functional differential equation from physiology, the mucosal wave model of vocal oscillation during phonation. The equation models a superficial wave propagating through the tissues. The numerical scheme is adapted from the work presented in [1, 2, 3], using the homotopy analysis method (HAM) to solve the non-linear mixed-type equation under study.
Mabood, Fazle; Khan, Waqar A; Ismail, Ahmad Izani Md
2013-01-01
In this article, an approximate analytical solution for the flow and heat transfer of a viscoelastic fluid in an axisymmetric channel with a porous wall is presented. The solution is obtained using the Optimal Homotopy Asymptotic Method (OHAM). We obtain approximate analytical solutions for the dimensionless velocity and temperature for various parameters, and present graphically the influence of different parameters on the dimensionless velocity, temperature, friction factor and rate of heat transfer. We also compare our solution with those obtained by other methods and find that the OHAM solution is better than the others considered. This shows that OHAM is reliable for solving strongly nonlinear problems in heat transfer.
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, in which both the convergence-control parameter and the auxiliary linear operator are optimally selected to minimize the residual error of the approximation. For the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we obtain solutions with extremely small residual errors after relatively few iterations (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
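The convergence-control optimization can be sketched on a textbook problem. For u' + u^2 = 0, u(0) = 1 (exact solution 1/(1+x)), the high-order deformation equations with auxiliary operator L = d/dx generate residual-driven terms, and the convergence-control parameter h is chosen by minimizing the squared residual over the domain (the model equation and the grid scan are our illustrative choices, not the paper's Hasegawa-Mima setup):

```python
import sympy as sp

x, h, q = sp.symbols('x h q')

# Optimal HAM for the model problem u' + u**2 = 0, u(0) = 1
# (exact solution 1/(1+x)), with auxiliary linear operator L = d/dx.
# High-order deformation: L[u_k - chi_k*u_{k-1}] = h*R_k, where R_k is
# the coefficient of q**(k-1) in N[sum(u_m*q**m)].
N = lambda u: sp.diff(u, x) + u**2

terms = [sp.Integer(1)]                       # u0 from the initial condition
for k in range(1, 6):
    series = sum(u * q**m for m, u in enumerate(terms))
    Rk = sp.expand(N(series)).coeff(q, k - 1)
    chi = 0 if k == 1 else 1
    terms.append(sp.expand(chi * terms[-1] + h * sp.integrate(Rk, (x, 0, x))))

U = sp.expand(sum(terms))                     # 5th-order HAM approximation
E = sp.integrate(sp.expand(N(U)**2), (x, 0, 1))   # squared residual on [0,1]

# Stage two: pick the convergence-control parameter h minimizing E
# (a crude grid scan stands in for a proper optimizer).
grid = [-0.5 - 0.01 * i for i in range(101)]
best = min(grid, key=lambda v: float(E.subs(h, v)))
print(best)  # optimal h for this truncation; h = -1 recovers the Taylor series
```

The paper's second stage additionally optimizes over the family of auxiliary linear operators, but the principle is the same: the residual is an explicit function of the free parameters and can be minimized directly.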
Higher Inductive Types as Homotopy-Initial Algebras
2016-08-01
Kristina Sojakova. CMU-CS-16-125, August 2016. School of Computer Science, Carnegie Mellon University.
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various sparse continuous signal recovery applications, such as source localization, radar imaging and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in practical use. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
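The cheap per-coordinate updates that DCD-style homotopy solvers rely on can be illustrated with plain cyclic coordinate descent for the LASSO; this shows only the generic idea, not the paper's HR-DCD/Hlog-DCD/Hlp-DCD algorithms or their non-convex regularizers:

```python
import numpy as np

# Cyclic coordinate descent for the LASSO
#   min_x 0.5*||y - A x||_2**2 + lam*||x||_1,
# illustrating the cheap per-coordinate updates underlying DCD-style
# homotopy solvers.
def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(A, y, lam, sweeps=200):
    x = np.zeros(A.shape[1])
    col_sq = (A**2).sum(axis=0)
    r = y - A @ x
    for _ in range(sweeps):
        for j in range(A.shape[1]):
            r += A[:, j] * x[j]                    # drop coordinate j
            x[j] = soft(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]                    # restore with new value
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]
x_hat = lasso_cd(A, A @ x_true, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # recovers the support {3, 11}
```

Each inner update touches a single column of A, which is what makes such iterations attractive when real-time implementation is required.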
Topological quantum computation of the Dold-Thom functor
NASA Astrophysics Data System (ADS)
Ospina, Juan
2014-05-01
A possible topological quantum computation of the Dold-Thom functor is presented. The method is as follows: a) certain 1+1-dimensional topological quantum field theories valued in symmetric bimonoidal categories are converted into stable homotopical data, using machinery recently introduced by Elmendorf and Mandell; b) we exploit, in this framework, two recent independent results on refinements of Khovanov homology: our refinement into a module over the connective k-theory spectrum, and a stronger result by Lipshitz and Sarkar refining Khovanov homology into a stable homotopy type; c) starting from the Khovanov homotopy type, the Dold-Thom functor is constructed; d) the full construction is formulated as a topological quantum algorithm. It is conjectured that the Jones polynomial can be described as the analytical index of a certain Dirac operator defined in the context of the Khovanov homotopy type using the Dold-Thom functor. A line for future research is to study the corresponding supersymmetric model, for which the Khovanov-Dirac operator plays the role of a supercharge.
Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis
NASA Technical Reports Server (NTRS)
Whorton, M.; Buschek, H.; Calise, A. J.
1994-01-01
A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
NASA Astrophysics Data System (ADS)
Zoka, Yoshifumi; Yorino, Naoto; Kawano, Koki; Suenari, Hiroyasu
This paper proposes a fast computation method for Available Transfer Capability (ATC) with respect to thermal and voltage magnitude limits. ATC is formulated as an optimization problem. To achieve efficiency in the N-1 outage contingency calculations, linear sensitivity methods are applied to screen and rank all contingencies with respect to the thermal and voltage magnitude limit margins and so identify the severest case. In addition, homotopy functions are used for the generator QV constraints to reduce the maximum error of the linear estimation. The Primal-Dual Interior Point Method (PDIPM) is then used to solve the optimization problem for the severest case only, so that the ATC solution can be obtained efficiently. The effectiveness of the proposed method is demonstrated on the IEEE 30-, 57- and 118-bus systems.
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtain the exact solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtain the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduce the equation. In the second part, we construct abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain approximate analytic solutions based on four kinds of initial conditions.
Tripathi, Rajnee; Mishra, Hradyesh Kumar
2016-01-01
In this communication, we describe the Homotopy Perturbation Method with Laplace Transform (LT-HPM), which is used to solve Lane-Emden type differential equations; such equations are very difficult to solve numerically. We implement the method for two linear homogeneous, two linear nonhomogeneous, and four nonlinear homogeneous Lane-Emden type differential equations, and make appropriate comparisons with exact solutions. For the examples considered, the power-series results of LT-HPM are closer to the exact solutions than those of other existing methods. The Laplace transform is used to accelerate the convergence of the power series, and the results, shown in tables and graphs, are in good agreement with the other methods in the literature. The results show that LT-HPM is very effective and easy to implement.
NASA Astrophysics Data System (ADS)
Arslanturk, Cihat
2011-02-01
Although tapered fins transfer heat at a higher rate per unit volume, they are not found in every practical application because of the difficulty of manufacturing and fabrication. There is therefore scope to modify the geometry of a constant-thickness fin, in view of the lower manufacturing difficulty as well as the improvement of the heat transfer rate per unit volume of fin material. For better utilization of fin material, a modified fin geometry with a step change in thickness (SF) has been proposed in the literature. In the present paper, the homotopy perturbation method is used to evaluate the temperature distribution within straight radiating fins with a step change in thickness and variable thermal conductivity. The temperature profile has an abrupt change in gradient where the step change in thickness occurs, and the parameter describing the variation of thermal conductivity plays an important role in the temperature profile and the heat transfer rate. The optimum geometry which maximizes the heat transfer rate for a given fin volume has been found. The derived condition of optimality gives an open choice to the designer.
NASA Astrophysics Data System (ADS)
Fatahi-Vajari, A.; Azimzadeh, Z.
2018-05-01
This paper investigates the nonlinear axial vibration of single-walled carbon nanotubes (SWCNTs) based on the homotopy perturbation method (HPM). A second-order partial differential equation that governs the nonlinear axial vibration of such nanotubes is derived using doublet mechanics (DM) theory. To obtain the nonlinear natural frequency in the axial vibration mode, this nonlinear equation is solved using HPM. The influences of commonly used boundary conditions, the amplitude of vibration, changes in vibration modes and variations of the nanotube geometrical parameters on the nonlinear axial vibration characteristics of SWCNTs are discussed. It is shown that, unlike the linear case, the nonlinear natural frequency depends on the maximum vibration amplitude: increasing the maximum vibration amplitude decreases the natural frequency compared to the predictions of linear models. However, with increasing tube length, the effect of the amplitude on the natural frequency decreases. The amplitude dependence of the nonlinear natural frequency is more apparent in higher vibration modes and for clamped-clamped boundary conditions. To show the accuracy and capability of this method, the results obtained herein are compared with fourth-order Runge-Kutta numerical results, and good agreement is observed. The results are new and can serve as a benchmark for future work.
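The amplitude dependence reported here is the hallmark of such HPM analyses and is easy to reproduce on a model oscillator. For u'' + u + eps*u^3 = 0, the first-order HPM (Lindstedt-type) frequency is omega(A) = sqrt(1 + 3*eps*A^2/4), which can be checked against a fourth-order Runge-Kutta integration (the model equation is an illustrative stand-in for the paper's DM-derived equation):

```python
import numpy as np

# Amplitude-dependent nonlinear frequency for the model oscillator
#   u'' + u + eps*u**3 = 0,  u(0) = A, u'(0) = 0.
# First-order HPM (Lindstedt-type) estimate: omega = sqrt(1 + 0.75*eps*A**2),
# checked against a fourth-order Runge-Kutta measurement of the period.
def rk4_period(A, eps, dt=1e-4):
    f = lambda s: np.array([s[1], -s[0] - eps * s[0]**3])
    s, t = np.array([A, 0.0]), 0.0
    while True:
        k1 = f(s); k2 = f(s + dt/2*k1); k3 = f(s + dt/2*k2); k4 = f(s + dt*k3)
        s_new = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
        if s[1] < 0 <= s_new[1]:      # u' first crosses upward at half period
            return 2 * t
        s = s_new

A, eps = 0.5, 0.4
omega_hpm = np.sqrt(1 + 0.75 * eps * A**2)
omega_rk4 = 2 * np.pi / rk4_period(A, eps)
print(omega_hpm, omega_rk4)  # agree to about 1e-4 for this amplitude
```

Note that for this hardening cubic the frequency rises with amplitude, whereas the SWCNT model of the paper shows the opposite (softening) trend; the point of the sketch is only the amplitude-frequency coupling and its RK4 verification.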
Open-Closed Homotopy Algebras and Strong Homotopy Leibniz Pairs Through Koszul Operad Theory
NASA Astrophysics Data System (ADS)
Hoefel, Eduardo; Livernet, Muriel
2012-08-01
Open-closed homotopy algebras (OCHA) and strong homotopy Leibniz pairs (SHLP) were introduced by Kajiura and Stasheff in 2004. In an appendix to their paper, Markl observed that an SHLP is equivalent to an algebra over the minimal model of a certain operad, without showing that the operad is Koszul. In the present paper, we show that both OCHA and SHLP are algebras over the minimal model of the zeroth homology of two versions of the Swiss-cheese operad and prove that these two operads are Koszul. As an application, we show that the OCHA operad is non-formal as a 2-colored operad but is formal as an algebra in the category of 2-collections.
A Nonlinear differential equation model of Asthma effect of environmental pollution using LHAM
NASA Astrophysics Data System (ADS)
Joseph, G. Arul; Balamuralitharan, S.
2018-04-01
In this paper, we investigate a nonlinear differential equation model for the spread of asthma caused by environmental pollutants from industry and, mainly, by tobacco smoke from smokers, across different types of population. Smoking is the main driver of asthma spread in the environment. Numerical simulation is also discussed. Finally, using Liao’s Homotopy Analysis Method (LHAM), we obtain an approximate analytical solution of the asthma model in the polluted environment.
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters, which are estimated by transforming the NODE into an equivalent global error minimization problem using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. Comparison of the numerical results with the exact solutions and with solutions obtained by traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM) and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
Magnetic Soliton, Homotopy and Higgs Theory,
1986-04-24
MAGNETIC SOLITON, HOMOTOPY AND HIGGS THEORY, by Li Yuanjie and Lei Shizu. English translation, Foreign Technology Division, Wright-Patterson AFB, OH. Source: Huazhong Gongxueyuan Xuebao, Vol. 11, Nr. 6, 1983, pp. 65-70. Approved for public release; distribution unlimited.
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard solution methods exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations, which characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require the initializing reduced-order controller to be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
NASA Astrophysics Data System (ADS)
Melikhov, Sergey A.
2009-06-01
Steenrod homotopy theory is a natural framework for doing algebraic topology on general spaces in terms of the algebraic topology of polyhedra; or, from a different viewpoint, it studies the topology of the \lim^1 functor (for inverse sequences of groups). This paper is primarily concerned with the case of compacta, in which Steenrod homotopy coincides with strong shape. An attempt is made to simplify the foundations of the theory and to clarify and improve some of its major results. With geometric tools such as Milnor's telescope compactification, comanifolds (=mock bundles), and the Pontryagin-Thom construction, new simple proofs are obtained for results by Barratt-Milnor, Geoghegan-Krasinkiewicz, Dydak, Dydak-Segal, Krasinkiewicz-Minc, Cathey, Mittag-Leffler-Bourbaki, Fox, Eda-Kawamura, Edwards-Geoghegan, Jussila, and for three unpublished results by Shchepin. An error in Lisitsa's proof of the 'Hurewicz theorem in Steenrod homotopy' is corrected. It is shown that over compacta, R.H. Fox's overlayings are equivalent to I.M. James' uniform covering maps. Other results include: (i) a morphism between inverse sequences of countable (possibly non-Abelian) groups that induces isomorphisms on \lim and \lim^1 is invertible in the pro-category; this implies the 'Whitehead theorem in Steenrod homotopy', thereby answering two questions of Koyama; (ii) if X is an LC_{n-1}-compactum, n\ge 1, then its n-dimensional Steenrod homotopy classes are representable by maps S^n\to
Galois groups of Schubert problems via homotopy computation
NASA Astrophysics Data System (ADS)
Leykin, Anton; Sottile, Frank
2009-09-01
Numerical homotopy continuation of solutions to polynomial equations is the foundation for numerical algebraic geometry, whose development has been driven by applications of mathematics. We use numerical homotopy continuation to investigate the problem in pure mathematics of determining Galois groups in the Schubert calculus. For example, we show by direct computation that the Galois group of the Schubert problem of 3-planes in \mathbb{C}^8 meeting 15 fixed 5-planes non-trivially is the full symmetric group S_{6006}.
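The idea of computing Galois/monodromy groups numerically can be shown in miniature: track the roots of z^2 = t while t runs around the branch point t = 0; the loop permutes the two roots, exhibiting the full symmetric group S_2 (a toy analogue of the Schubert computation):

```python
import numpy as np

# Monodromy in miniature: the Galois/monodromy group of z**2 = t acts on
# the two roots.  Tracking them numerically while t traverses a loop
# around the branch point t = 0 swaps them, so the group is S_2.
def newton_root(z, t, its=10):
    for _ in range(its):
        z -= (z**2 - t) / (2 * z)
    return z

roots = np.array([1.0 + 0j, -1.0 + 0j])        # the roots at t = 1
for th in np.linspace(0, 2 * np.pi, 400)[1:]:  # t = exp(i*theta) around 0
    roots = np.array([newton_root(z, np.exp(1j * th)) for z in roots])

# Which tracked root returned to each start root?
perm = [int(np.argmin(np.abs(roots - r0))) for r0 in (1, -1)]
print(perm)  # -> [1, 0]: the loop exchanges the two roots
```

In the Schubert setting the same recipe is run on systems with thousands of solutions: loops in the parameter space generate permutations whose closure is the Galois group.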
Explicit solutions of a gravity-induced film flow along a convectively heated vertical wall.
Raees, Ammarah; Xu, Hang
2013-01-01
The gravity-driven film flow has been analyzed along a vertical wall subjected to a convective boundary condition. The Boussinesq approximation is applied to simplify the buoyancy term, and similarity transformations are used on the mathematical model of the problem under consideration, to obtain a set of coupled ordinary differential equations. Then the reduced equations are solved explicitly by using homotopy analysis method (HAM). The resulting solutions are investigated for heat transfer effects on velocity and temperature profiles.
Three dimensional radiative flow of magnetite-nanofluid with homogeneous-heterogeneous reactions
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed
2018-03-01
The present communication deals with the effects of homogeneous-heterogeneous reactions in the flow of a nanofluid over a non-linearly stretching sheet. A water-based nanofluid containing magnetite nanoparticles is considered. Non-linear radiation and non-uniform heat sink/source effects are examined. The non-linear differential systems are computed by the optimal homotopy analysis method (OHAM); convergent solutions of the nonlinear systems are established, and optimal values of the auxiliary variables are obtained. The impact of several non-dimensional parameters on the velocity components, temperature and concentration fields is examined. Graphs are plotted for analysis of the surface drag force and heat transfer rate.
Isovariant extensors and the characterization of equivariant homotopy equivalences
NASA Astrophysics Data System (ADS)
Ageev, Sergei M.
2012-10-01
We extend the well-known theorem of James-Segal to the case of an arbitrary family F of conjugacy classes of closed subgroups of a compact Lie group G: a G-map f\colon X\to Y of metric Equiv_F-ANE-spaces is a G-homotopy equivalence if and only if it is a weak G-F-homotopy equivalence. The proof is based on the theory of isovariant extensors, which is developed in this paper and enables us to endow F-classifying G-spaces with an additional structure.
Wheeled Pro(p)file of Batalin-Vilkovisky Formalism
NASA Astrophysics Data System (ADS)
Merkulov, S. A.
2010-05-01
Using a technique of wheeled props we establish a correspondence between the homotopy theory of unimodular Lie 1-bialgebras and the famous Batalin-Vilkovisky formalism. Solutions of the so-called quantum master equation satisfying certain boundary conditions are proven to be in 1-1 correspondence with representations of a wheeled dg prop which, on the one hand, is isomorphic to the cobar construction of the prop of unimodular Lie 1-bialgebras and, on the other hand, is quasi-isomorphic to the dg wheeled prop of unimodular Poisson structures. These results allow us to apply properadic methods for computing formulae for a homotopy transfer of a unimodular Lie 1-bialgebra structure on an arbitrary complex to the associated quantum master function on its cohomology. It is proven that in the category of quantum BV manifolds associated with the homotopy theory of unimodular Lie 1-bialgebras quasi-isomorphisms are equivalence relations. It is shown that Losev-Mnev’s BF theory for unimodular Lie algebras can be naturally extended to the case of unimodular Lie 1-bialgebras (and, eventually, to the case of unimodular Poisson structures). Using a finite-dimensional version of the Batalin-Vilkovisky quantization formalism it is rigorously proven that the Feynman integrals computing the effective action of this new BF theory describe precisely homotopy transfer formulae obtained within the wheeled properadic approach to the quantum master equation. Quantum corrections (which are present in our BF model to all orders of the Planck constant) correspond precisely to what are often called “higher Massey products” in the homological algebra.
NASA Astrophysics Data System (ADS)
Abou-zeid, Mohamed Y.; Mohamed, Mona A. A.
2017-09-01
This article is an analytic discussion of the motion of a power-law nanofluid with heat transfer under the effects of viscous dissipation, radiation, and internal heat generation. The governing equations are treated under the assumptions of long wavelength and low Reynolds number. The solutions for the temperature and nanoparticle profiles are obtained by using the homotopy perturbation method. Results for the behaviours of the axial velocity, temperature, and nanoparticle concentration, as well as the skin friction coefficient, reduced Nusselt number, and Sherwood number, are obtained graphically and analytically against the other physical parameters. It is found that as the power-law exponent increases, both the axial velocity and temperature increase, whereas the nanoparticle concentration decreases. These results may be of applicable importance in research on nanofluid flow in channels with small diameters under different temperature distributions.
Two phase modeling of nanofluid flow in existence of melting heat transfer by means of HAM
NASA Astrophysics Data System (ADS)
Sheikholeslami, M.; Jafaryar, M.; Bateni, K.; Ganji, D. D.
2018-02-01
In this article, the Buongiorno model is applied to investigate nanofluid flow over a stretching plate in the presence of a magnetic field. Radiation and melting heat transfer are taken into account. The homotopy analysis method (HAM) is selected to solve the ODEs obtained from the similarity transformation. The roles of Brownian motion, the thermophoretic parameter, the Hartmann number, the porosity parameter, the melting parameter and the Eckert number are presented graphically. Results indicate that the nanofluid velocity and concentration increase with a rise in the melting parameter. The Nusselt number decreases with an increase in the porosity and melting parameters.
Constructing analytic solutions on the Tricomi equation
NASA Astrophysics Data System (ADS)
Ghiasi, Emran Khoshrouye; Saleh, Reza
2018-04-01
In this paper, the homotopy analysis method (HAM) and the variational iteration method (VIM) are utilized to derive approximate solutions of the Tricomi equation. Afterwards, the HAM is optimized to accelerate the convergence of the series solution by minimizing its square residual error at any order of the approximation. It is found that the effect of the optimal values of the auxiliary parameter on the convergence of the series solution is not negligible. Furthermore, the present results are found to agree well with those obtained through a closed-form equation available in the literature. To conclude, both methods are effective for solving the partial differential equations.
On the homotopy equivalence of simple AI-algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristov, O Yu
1999-02-28
Let A and B be simple unital AI-algebras (an AI-algebra is an inductive limit of C*-algebras of the form ⊕_{i=1}^{k} C([0,1], M_{N_i})). It is proved that two arbitrary unital homomorphisms from A into B such that the corresponding maps K_0(A) → K_0(B) coincide are homotopic. Necessary and sufficient conditions on the Elliott invariant for A and B to be homotopy equivalent are indicated. Moreover, two algebras in the above class having the same K-theory but not homotopy equivalent are constructed. The proof uses a theorem on the homotopy of approximately unitarily equivalent homomorphisms between AI-algebras, which is deduced in turn from a generalization to the case of AI-algebras of a theorem of Manuilov stating that a unitary matrix almost commuting with a self-adjoint matrix h can be joined to 1 by a continuous path consisting of unitary matrices almost commuting with h.
Combined control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.
1989-01-01
An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.
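The homotopic strategy described above can be sketched numerically: sweep a homotopy parameter that deforms the control cost and warm-start each design from the previous one. The sketch below is a minimal illustration, not the authors' formulation; the system matrices and weights are invented for the example, and SciPy's algebraic Riccati solver stands in for the LQR design step.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state, 1-input system (illustrative values only).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

Q0 = np.eye(2)             # starting state weight
Q1 = np.diag([10.0, 0.1])  # target state weight

# Follow the optimal gain K(lam) along the homotopy Q(lam) = (1-lam)*Q0 + lam*Q1.
gains = []
for lam in np.linspace(0.0, 1.0, 11):
    Q = (1 - lam) * Q0 + lam * Q1
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati solution
    K = np.linalg.solve(R, B.T @ P)        # LQR gain K = R^{-1} B^T P
    gains.append(K)

print(gains[0], gains[-1])
```

Each step inherits a good starting design from its neighbor, giving a continuous family of designs for early trade studies, in the spirit of the homotopy parameter lambda above.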
Model-based morphological segmentation and labeling of coronary angiograms.
Haris, K; Efstratiadis, S N; Maglaveras, N; Pappas, C; Gourassas, J; Louridas, G
1999-10-01
A method for extraction and labeling of the coronary arterial tree (CAT) using minimal user supervision in single-view angiograms is proposed. The CAT structural description (skeleton and borders) is produced, along with quantitative information for the artery dimensions and assignment of coded labels, based on a given coronary artery model represented by a graph. The stages of the method are: 1) CAT tracking and detection; 2) artery skeleton and border estimation; 3) feature graph creation; and 4) artery labeling by graph matching. The approximate CAT centerline and borders are extracted by recursive tracking based on circular template analysis. The accurate skeleton and borders of each CAT segment are computed, based on morphological homotopy modification and watershed transform. The approximate centerline and borders are used for constructing the artery segment enclosing area (ASEA), where the defined skeleton and border curves are considered as markers. Using the marked ASEA, an artery gradient image is constructed where all the ASEA pixels (except the skeleton ones) are assigned the gradient magnitude of the original image. The artery gradient image markers are imposed as its unique regional minima by the homotopy modification method, the watershed transform is used for extracting the artery segment borders, and the feature graph is updated. Finally, given the created feature graph and the known model graph, a graph matching algorithm assigns the appropriate labels to the extracted CAT using weighted maximal cliques on the association graph corresponding to the two given graphs. Experimental results using clinical digitized coronary angiograms are presented.
Soley, Micheline B; Markmann, Andreas; Batista, Victor S
2018-06-12
We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.
Solutions of the epidemic of EIAV infection by HPM
NASA Astrophysics Data System (ADS)
Balamuralitharan, S.; Geethamalini, S.
2018-04-01
In this article, the Homotopy Perturbation Method (HPM) is used to approximate the solutions of a model for Equine Infectious Anemia Virus (EIAV) infection. This technique allows a direct scheme for solving the problem. MATLAB is used to carry out the computations. Graphical results are displayed and discussed quantitatively, illustrating the simplicity of the method.
NASA Technical Reports Server (NTRS)
Smith, R. L.; Huang, C.
1986-01-01
A recent mathematical technique for solving systems of equations is applied in a very general way to the orbit determination problem. The study of this technique, the homotopy continuation method, was motivated by the possible need to perform early orbit determination with the Tracking and Data Relay Satellite System (TDRSS), using range and Doppler tracking alone. Basically, a set of six tracking observations is continuously transformed from a set with known solution to the given set of observations with unknown solutions, and the corresponding orbit state vector is followed from the a priori estimate to the solutions. A numerical algorithm for following the state vector is developed and described in detail. Numerical examples using both real and simulated TDRSS tracking are given. A prototype early orbit determination algorithm for possible use in TDRSS orbit operations was extensively tested, and the results are described. Preliminary studies of two extensions of the method are discussed: generalization to a least-squares formulation and generalization to an exhaustive global method.
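The continuation idea, deforming a problem with a known solution into the given one while following the solution, can be illustrated with a generic Newton homotopy on a toy nonlinear system. This is a sketch of the general technique only, not of the TDRSS orbit-determination algorithm; the two-equation system below stands in for the six observation equations.

```python
import numpy as np

def newton_homotopy(F, J, x0, steps=100):
    """Track the solution of H(x,t) = F(x) - (1-t)*F(x0) = 0 from t=0 to t=1.

    At t=0 the start point x0 is an exact solution; at t=1 we have F(x) = 0.
    A stepwise Newton corrector follows the path, a toy analogue of following
    the state vector as the observation set is transformed from a solved set
    to the given set.
    """
    x = np.array(x0, dtype=float)
    r0 = F(x)                           # residual at the start point
    for k in range(steps):
        t = (k + 1) / steps
        for _ in range(5):              # Newton corrector on H(x, t) = 0
            H = F(x) - (1 - t) * r0
            x = x - np.linalg.solve(J(x), H)
    return x

# Toy 2-D system standing in for the observation equations.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
sol = newton_homotopy(F, J, x0=[1.0, 0.5])
print(sol)   # close to (sqrt(2), sqrt(2))
```

The a priori estimate (1.0, 0.5) is far from the root, yet the path-following converges because each intermediate problem is only a small perturbation of the previous one.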
NASA Astrophysics Data System (ADS)
Mohmand, Muhammad Ismail; Mamat, Mustafa Bin; Shah, Qayyum
2017-07-01
This article deals with the time-dependent analysis of a thermally conducting, magnetohydrodynamic (MHD) liquid film flow of a fourth-order fluid past a vertical, vibrating plate. The analysis is developed for higher-order fluids of complex nature. The governing equations have been modeled as nonlinear partial differential equations with the corresponding physical boundary conditions. Two different analytical approaches, the Adomian decomposition method (ADM) and the optimal homotopy asymptotic method (OHAM), have been used to find series solutions of the problems. Solutions obtained via the two methods have been compared using graphs and tables and show excellent agreement. Variations of the embedded flow parameters in the solution have been analysed through graphical diagrams.
A Choice Reaction Time Index of Callosal Anatomical Homotopy
ERIC Educational Resources Information Center
Desjardins, Samuel; Braun, Claude M. J.; Achim, Andre; Roberge, Carl
2009-01-01
Tachistoscopically presented bilateral stimulus pairs not parallel to the meridian produced significantly longer RTs on a task requiring discrimination of shapes (Go/no-Go) than pairs emplaced symmetrically on each side of the meridian in Desjardins and Braun [Desjardins, S., & Braun, C. M. J. (2006). Homotopy and heterotopy and the bilateral…
NASA Astrophysics Data System (ADS)
Sheikholeslami, M.; Ganji, D. D.
2017-12-01
In this paper, a semi-analytical approach is applied to investigate nanofluid Marangoni convection in the presence of a magnetic field. The Koo-Kleinstreuer-Li model is used to simulate the nanofluid properties. The homotopy analysis method is utilized to solve the final ordinary differential equations obtained from the similarity transformation. The roles of the Hartmann number and the nanofluid volume fraction are presented graphically. Results show that the temperature increases with a rise in the nanofluid volume fraction. The impact of the nanofluid volume fraction on the normal velocity is greater than on the tangential velocity. The temperature gradient increases with a rise in the magnetic number.
NASA Astrophysics Data System (ADS)
Hayat, T.; Ahmad, Salman; Khan, M. Ijaz; Alsaedi, A.
2018-05-01
This article addresses the flow of a third-grade nanofluid due to a stretchable rotating disk. Mass and heat transport are analyzed through thermophoresis and Brownian movement effects. The effects of heat generation and chemical reaction are also accounted for. The obtained ODEs are tackled computationally by means of the homotopy analysis method. Graphical outcomes are analyzed for the effects of different variables. The obtained results show that the velocity decreases with the Reynolds number and the material parameters. Temperature and concentration increase with Brownian motion and decrease with the Reynolds number.
GPU-Accelerated Hybrid Algorithm for 3D Localization of Fluorescent Emitters in Dense Clusters
NASA Astrophysics Data System (ADS)
Jung, Yoon; Barsic, Anthony; Piestun, Rafael; Fakhri, Nikta
In stochastic switching-based super-resolution imaging, a random subset of fluorescent emitters is imaged and localized in each frame to construct a single high-resolution image. However, the condition of non-overlapping point spread functions (PSFs) imposes constraints on experimental parameters. Recent developments in post-processing methods, such as dictionary-based sparse support recovery using compressive sensing, have shown up to an order of magnitude higher recall rate than single-emitter fitting methods. However, the computational complexity of this approach scales poorly with the grid size and requires long runtimes. Here, we introduce a fast and accurate compressive sensing algorithm for localizing fluorescent emitters at high density in 3D, namely sparse support recovery using the Orthogonal Matching Pursuit (OMP) and L1-Homotopy algorithms for reconstructing STORM images (SOLAR STORM). SOLAR STORM combines OMP with L1-Homotopy to reduce computational complexity, which is further accelerated by parallel implementation on GPUs. This method can be used in a variety of experimental conditions for both in vitro and live-cell fluorescence imaging.
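As a rough illustration of the sparse-recovery step, here is a minimal Orthogonal Matching Pursuit in NumPy. The dictionary and sparse signal are synthetic stand-ins for the PSF dictionary and emitter positions; the L1-Homotopy refinement and GPU batching of SOLAR STORM are not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y = A x.

    A minimal sketch of the sparse-support-recovery idea only.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the dictionary column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[7, 42, 91]] = [1.5, -2.0, 0.8]  # synthetic 3-sparse "emitter" signal
x_hat = omp(A, A @ x_true, k=3)
```

Greedy pursuit keeps the per-frame cost low; in the paper's setting the columns of A would encode candidate 3D PSFs on a grid.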
Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z
2016-01-01
A modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011) and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and the others. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For rational solutions the absolute error decreases very fast as the number of collocation points increases.
Development of a CAD Model Simplification Framework for Finite Element Analysis
2012-01-01
Stable homotopical algebra and Γ-spaces
NASA Astrophysics Data System (ADS)
Schwede, Stefan
1999-03-01
In this paper we advertise the category of Γ-spaces as a convenient framework for doing ‘algebra’ over ‘rings’ in stable homotopy theory. Γ-spaces were introduced by Segal [Se], who showed that they give rise to a homotopy category equivalent to the usual homotopy category of connective (i.e. (−1)-connected) spectra. Bousfield and Friedlander [BF] later provided model category structures for Γ-spaces. The study of ‘rings, modules and algebras’ based on Γ-spaces became possible when Lydakis [Ly] introduced a symmetric monoidal smash product with good homotopical properties. Here we develop model category structures for modules and algebras, set up (derived) smash products and associated spectral sequences, and compare simplicial modules and algebras to their Eilenberg-MacLane spectra counterparts.
NASA Astrophysics Data System (ADS)
Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.
2018-03-01
In this paper, we investigate multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with time-dependent variable coefficients, using the simplified bilinear method based on a transformation combined with Hirota's bilinear sense. In addition, we present an analysis of some parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons are made with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed method can solve many physical problems.
Khan, Muhammad Altaf; Siddiqui, Nasir; Ullah, Murad; Shah, Qayyum
2018-01-01
Wire coating is a continuous extrusion process for primary insulation of conducting wires with molten polymers, providing mechanical strength and protection in aggressive environments. In the present study, a radiative molten polymer satisfying the third-grade fluid model is used for the wire coating process. The effects of the magnetic parameter, the thermal radiation parameter and temperature-dependent viscosity on the wire coating analysis have been investigated. The Reynolds and Vogel models have been incorporated for variable viscosity. The governing equations characterizing the flow and heat transfer phenomena are solved analytically by the homotopy analysis method (HAM). The computed results are also verified by the ND-Solve method (a numerical technique) and the Adomian Decomposition Method (ADM). The effect of the pertinent parameters is shown graphically. In addition, the instability of the flow at the wall of the extrusion die is well marked in the case of the Vogel model, as pointed out by Nhan-Phan-Thien. PMID:29596448
Hayat, Tasawar; Asad, Sadia; Mustafa, Meraj; Alsaedi, Ahmed
2014-01-01
This study investigates the unsteady flow of Powell-Eyring fluid past an inclined stretching sheet. Unsteadiness in the flow is due to the time-dependence of the stretching velocity and wall temperature. Mathematical analysis is performed in the presence of thermal radiation and non-uniform heat source/sink. The relevant boundary layer equations are reduced into self-similar forms by suitable transformations. The analytic solutions are constructed in a series form by homotopy analysis method (HAM). The convergence interval of the auxiliary parameter is obtained. Graphical results displaying the influence of interesting parameters are given. Numerical values of skin friction coefficient and local Nusselt number are computed and analyzed.
NASA Astrophysics Data System (ADS)
Ramzan, M.; Bilal, M.; Kanwal, Shamsa; Chung, Jae Dong
2017-06-01
The present analysis discusses the boundary layer flow of an Eyring-Powell nanofluid past a constantly moving surface under the influence of nonlinear thermal radiation. Heat and mass transfer mechanisms are examined under a physically suitable convective boundary condition. The effects of variable thermal conductivity and chemical reaction are also considered. Series solutions of all involved distributions are obtained using the homotopy analysis method (HAM). The impacts of the dominant embedded flow parameters are discussed through graphical illustrations. It is observed that the temperature profile increases with the thermal radiation parameter, whereas the concentration distribution decreases with the chemical reaction parameter. Supported by the World Class 300 Project (No. S2367878) of the SMBA (Korea).
Numerical Polynomial Homotopy Continuation Method and String Vacua
Mehta, Dhagash
2011-01-01
Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated model of a compactified M theory, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
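The core of the NPHC method can be sketched in the univariate case: start from a system whose roots are all known, deform it into the target polynomial, and track every root along the way. The example below is a toy (a cubic with roots 1, 2, 3), not one of the string-vacua potentials; the "gamma trick" constant is an arbitrary-phase choice that keeps the paths away from singularities.

```python
import numpy as np

# Deform the start system g(x) = x^n - 1 (all n roots known) into the target
# p(x) along H(x,t) = (1-t)*gamma*g(x) + t*p(x), tracking every root.
p = np.array([1.0, -6.0, 11.0, -6.0])   # p(x) = x^3 - 6x^2 + 11x - 6
n = len(p) - 1
gamma = np.exp(1j * 0.7)                # generic complex phase ("gamma trick")

def H(x, t):
    return (1 - t) * gamma * (x**n - 1) + t * np.polyval(p, x)

def dH(x, t):
    return (1 - t) * gamma * n * x**(n - 1) + t * np.polyval(np.polyder(p), x)

roots = np.exp(2j * np.pi * np.arange(n) / n)   # roots of the start system
for k in range(200):
    t = (k + 1) / 200
    for _ in range(5):                           # Newton corrector at each step
        roots = roots - H(roots, t) / dH(roots, t)

print(np.sort(roots.real))   # the tracked paths end at the roots of p
```

Because every start root is tracked, the method finds all isolated solutions by construction, and each path can be followed on a separate processor, which is the "embarrassingly parallelizable" property noted above.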
Bioconvection in Second Grade Nanofluid Flow Containing Nanoparticles and Gyrotactic Microorganisms
NASA Astrophysics Data System (ADS)
Khan, Noor Saeed
2018-04-01
The bioconvection in steady second grade nanofluid thin film flow containing nanoparticles and gyrotactic microorganisms is considered using passively controlled nanofluid model boundary conditions. A real-life system evolves under the flow and various taxis. The study is initially proposed in the context of gyrotactic system, which is used as a key element for the description of complex bioconvection patterns and dynamics in such systems. The governing partial differential equations are transformed into a system of ordinary ones through the similarity variables and solved analytically via homotopy analysis method (HAM). The solution is expressed through graphs and illustrated which show the influences of all the parameters.
Metachronal wave analysis for non-Newtonian fluid under thermophoresis and Brownian motion effects
NASA Astrophysics Data System (ADS)
Shaheen, A.; Nadeem, S.
This paper analyses the mathematical model of ciliary motion in an annulus. The effects of convective heat transfer and nanoparticles are taken into account. The governing equations of the Jeffrey six-constant fluid, along with the heat and nanoparticle equations, are modelled and then simplified using the long wavelength and low Reynolds number assumptions. The reduced equations are solved with the help of the homotopy perturbation method. The obtained expressions for the velocity, temperature and nanoparticle concentration profiles are plotted, and the impact of various physical parameters is investigated for different peristaltic waves. Streamlines have also been plotted in the last part of the paper.
Flow of nanofluid by nonlinear stretching velocity
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
The main objective of this article is to model and analyze the nanofluid flow induced by a curved surface with nonlinear stretching velocity. The nanofluid comprises water and silver. The governing problem is solved by using the homotopy analysis method (HAM). The induced magnetic field is neglected for low magnetic Reynolds number. Convergent series solutions for the velocity and skin friction coefficient are successfully developed. Pressure cannot be ignored in the boundary layer flow over a curved stretching surface. It is found that the pressure distribution increases with the magnitude of the power-law index parameter. The pressure field decreases with the magnitude of the radius of curvature, while the opposite trend is observed for the velocity.
Constraint-based semi-autonomy for unmanned ground vehicles using local sensing
NASA Astrophysics Data System (ADS)
Anderson, Sterling J.; Karumanchi, Sisir B.; Johnson, Bryan; Perlin, Victor; Rohde, Mitchell; Iagnemma, Karl
2012-06-01
Teleoperated vehicles are playing an increasingly important role in a variety of military functions. While advantageous in many respects over their manned counterparts, these vehicles also pose unique challenges when it comes to safely avoiding obstacles. Not only must operators cope with difficulties inherent to the manned driving task, but they must also perform many of the same functions with a restricted field of view, limited depth perception, potentially disorienting camera viewpoints, and significant time delays. In this work, a constraint-based method for enhancing operator performance by seamlessly coordinating human and controller commands is presented. This method uses onboard LIDAR sensing to identify environmental hazards, designs a collision-free path homotopy traversing that environment, and coordinates the control commands of a driver and an onboard controller to ensure that the vehicle trajectory remains within a safe homotopy. This system's performance is demonstrated via off-road teleoperation of a Kawasaki Mule in an open field among obstacles. In these tests, the system safely avoids collisions and maintains vehicle stability even in the presence of "routine" operator error, loss of operator attention, and complete loss of communications.
NASA Astrophysics Data System (ADS)
Rusyaman, E.; Parmikanti, K.; Chaerani, D.; Asefan; Irianingsih, I.
2018-03-01
One of the applications of fractional ordinary differential equations relates to viscoelasticity, i.e., a correlation between the viscosity of fluids and the elasticity of solids. If the solution function develops into a function of two or more variables, then its differential equation must be changed into a fractional partial differential equation. As a preliminary study for the two-variable viscoelasticity problem, this paper discusses the convergence of the sequence of functions solving the homogeneous fractional partial differential equation. The method used to solve the problem is the Homotopy Analysis Method. The results show that if two real number sequences (αn) and (βn) converge to α and β respectively, then the sequence of solution functions of the fractional partial differential equation with order (αn, βn) also converges to the solution function of the fractional partial differential equation with order (α, β).
ER = EPR and non-perturbative action integrals for quantum gravity
NASA Astrophysics Data System (ADS)
Alsaleh, Salwa; Alasfar, Lina
In this paper, we construct and calculate non-perturbative path integrals in a multiply-connected spacetime. This is done by summing over homotopy classes of paths. The topology of the spacetime is defined by Einstein-Rosen bridges (ERB) forming from the entanglement of quantum foam described by virtual black holes. As these “bubbles” are entangled, they are connected by Planckian ERBs because of the ER = EPR conjecture. Hence, the spacetime will possess a large first Betti number B1. For any compact 2-surface in the spacetime, the topology (in particular the homotopy) of that surface is non-trivial due to the large number of Planckian ERBs that define homotopy through this surface. The quantization of spacetime with this topology — along with the proper choice of the 2-surfaces — is conjectured to allow non-perturbative path integrals of quantum gravity theory over the spacetime manifold.
Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen
2015-01-01
In this paper we apply a new analytic approximate technique, the optimal homotopy asymptotic method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of approximate solutions when compared with other methods of solution found in the literature. The obtained solutions show that OHAM is effective, simpler, easier and explicit.
GHM method for obtaining rational solutions of nonlinear differential equations.
Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo
2015-01-01
In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.
Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis
NASA Astrophysics Data System (ADS)
Awais, M.; Saleem, S.; Hayat, T.; Irum, S.
2016-12-01
This communication presents the magnetohydrodynamic (MHD) flow of a couple-stress nanofluid over a moving convective wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon, combined with thermophoresis and Brownian motion effects, is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled nonlinear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized, and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, the convective cooling process parameter and the energy enhancement parameters are displayed via graphs and discussed in detail. Various tables are also constructed to present the error analysis and a comparison of the obtained results with previously published data. Streamlines are plotted, showing the difference between the Newtonian and couple-stress fluid models.
NASA Astrophysics Data System (ADS)
Khan, Imad; Fatima, Sumreen; Malik, M. Y.; Salahuddin, T.
2018-03-01
This paper explores the theoretical study of the steady incompressible two-dimensional MHD boundary layer flow of an Eyring-Powell nanofluid over an inclined surface. The fluid is considered to be electrically conducting, and the viscosity of the fluid is assumed to vary exponentially. The governing partial differential equations (PDEs) are reduced to ordinary differential equations (ODEs) by applying a similarity approach. The resulting ordinary differential equations are solved successfully by using the homotopy analysis method. The impact of pertinent parameters on the velocity, concentration and temperature profiles is examined through graphs and tables. The skin friction coefficient and the Sherwood and Nusselt numbers are also illustrated in tabular and graphical form.
Approximate Solutions for Flow with a Stretching Boundary due to Partial Slip
Filobello-Nino, U.; Vazquez-Leal, H.; Sarmiento-Reyes, A.; Benhammouda, B.; Jimenez-Fernandez, V. M.; Pereyra-Diaz, D.; Perez-Sesma, A.; Cervantes-Perez, J.; Huerta-Chua, J.; Sanchez-Orea, J.; Contreras-Hernandez, A. D.
2014-01-01
The homotopy perturbation method (HPM) is coupled with versions of the Laplace-Padé and Padé methods to provide an approximate solution to the nonlinear differential equation that describes the behaviour of a flow with a stretching flat boundary due to partial slip. A comparison of the approximate and numerical solutions shows that the method provides accurate solutions and is highly efficient. PMID:27433526
On HPM approximation for the perihelion precession angle in general relativity
NASA Astrophysics Data System (ADS)
Shchigolev, Victor; Bezbatko, Dmitrii
2017-03-01
In this paper, the homotopy perturbation method (HPM) is applied for calculating the perihelion precession angle of planetary orbits in General Relativity. The HPM is quite efficient and is practically well suited for use in many astrophysical and cosmological problems. For our purpose, we applied HPM to the approximate solutions for the orbits in order to calculate the perihelion shift. On the basis of the main idea of HPM, we construct the appropriate homotopy that leads to the problem of solving the set of linear algebraic equations. As a result, we obtain a simple formula for the angle of precession avoiding any restrictions on the smallness of physical parameters. First of all, we consider the simple examples of the Schwarzschild metric and the Reissner-Nordström spacetime of a charged star for which the approximate geodesics solutions are known. Furthermore, the implementation of HPM has allowed us to readily obtain the precession angle for the orbits in the gravitational field of Kiselev black hole.
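The core HPM mechanics the abstract describes — embedding a parameter p, expanding the unknown in powers of p, and solving the resulting chain of linear problems — can be sketched on a toy problem (u' = 1 − u², u(0) = 0, exact solution tanh t). This illustrative example is ours, not the paper's relativistic computation:

```python
import sympy as sp

t, p = sp.symbols("t p")

# HPM on a toy problem: u' = 1 - u^2, u(0) = 0 (exact solution tanh t).
# Homotopy: u' - 1 + p*u^2 = 0. Expanding u = u0 + p*u1 + p^2*u2 and
# collecting powers of p turns the nonlinear ODE into a chain of *linear*
# problems, mirroring the reduction to linear equations described above.
N = 3
coeffs = [sp.Function(f"u{k}")(t) for k in range(N)]
series = sum(p ** k * c for k, c in enumerate(coeffs))
eq = sp.expand(sp.diff(series, t) - 1 + p * series ** 2)

sol = []
for k in range(N):
    ek = eq.coeff(p, k)                    # equation at order p^k
    for j, s in enumerate(sol):
        ek = ek.subs(coeffs[j], s)         # substitute lower-order solutions
    # ek has the form u_k' + (known terms); integrate u_k' = -(known terms),
    # with zero integration constant enforcing u_k(0) = 0
    sol.append(sp.integrate(-(ek - sp.diff(coeffs[k], t)), t))

approx = sp.expand(sum(sol))               # t - t**3/3 + 2*t**5/15
```

Setting p = 1 recovers the approximation; the result matches the Taylor series of tanh t through fifth order.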
2012-09-03
…practice to solve these initial value problems. Additionally, the predictor/corrector methods are combined with adaptive stepsize and adaptive … for implementing a numerical path-tracking algorithm is to decide which predictor/corrector method to employ, how large to take the step Δt, and what … the endgame algorithm. Output: a steady-state solution. Set ε = 1; while ε ≥ ε_end do: set the stepsize Δε by using the adaptive stepsize control algorithm.
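The predictor/corrector loop with adaptive stepsize outlined in the fragment above can be sketched as follows. This is an illustrative Euler-predictor/Newton-corrector tracker for a homotopy H(x, t) = 0; the function names and the simple halve-on-failure/double-on-success stepsize rule are our assumptions, and no endgame is implemented:

```python
import numpy as np

def track_path(H, dH_dx, dH_dt, x0, dt=0.1, dt_min=1e-8, tol=1e-10):
    """Euler-predictor / Newton-corrector tracking of H(x, t) = 0 from
    t = 0 (start system) to t = 1 (target system), with a simple adaptive
    stepsize rule: halve dt on corrector failure, double it on success."""
    x, t = np.atleast_1d(np.asarray(x0, dtype=float)), 0.0
    while t < 1.0:
        dt = min(dt, 1.0 - t)
        # Predictor: Euler step along the path tangent, from dH/dx x' = -dH/dt.
        xdot = np.linalg.solve(dH_dx(x, t), -dH_dt(x, t))
        x_pred, t_new = x + dt * xdot, t + dt
        # Corrector: Newton iterations at fixed t_new.
        ok = False
        for _ in range(5):
            r = H(x_pred, t_new)
            if np.linalg.norm(r) < tol:
                ok = True
                break
            x_pred = x_pred - np.linalg.solve(dH_dx(x_pred, t_new), r)
        if ok:
            x, t, dt = x_pred, t_new, dt * 2.0   # accept, try a larger step
        else:
            dt *= 0.5                            # reject, retry smaller
            if dt < dt_min:
                raise RuntimeError("step size underflow")
    return x

# Track x^2 - 2 = 0 from the start system x^2 - 1 = 0 (start root x = 1):
H = lambda x, t: (1 - t) * (x**2 - 1) + t * (x**2 - 2)
dH_dx = lambda x, t: np.array([[2 * x[0]]])
dH_dt = lambda x, t: np.array([-(x[0]**2 - 1) + (x[0]**2 - 2)])
root = track_path(H, dH_dx, dH_dt, [1.0])
```

The tracker follows the start root continuously to the target root √2.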
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for solution of Painlevé equation-I is presented using computational intelligence technique based on neural networks and particle swarm optimization hybridized with active set algorithm. The mathematical model of the equation is developed with the help of linear combination of feed-forward artificial neural networks that define the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using particle swarm optimization algorithm used as a tool for viable global search method, hybridized with active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on large number of independents runs and their comprehensive statistical analysis. The comparative studies of the results obtained are made with MATHEMATICA solutions, as well as, with variational iteration method and homotopy perturbation method. PMID:22919371
NASA Astrophysics Data System (ADS)
Jacak, Janusz E.
2018-01-01
We demonstrate an original development of path-integral quantization in the case of a multiply connected configuration space of indistinguishable charged particles on a 2D manifold exposed to a strong perpendicular magnetic field. The system turns out to be exceptionally homotopy-rich, and the structure of the homotopy essentially depends on the magnetic field strength, resulting in multiloop trajectories under specific conditions. We have proved, by a generalization of the Bohr-Sommerfeld quantization rule, that the size of a magnetic field flux quantum grows for multiloop orbits like (2k+1)h/c with the number of loops k. Utilizing this property for electrons on the 2D substrate jellium, we have derived upon the path integration a complete FQHE hierarchy in excellent agreement with experiments. The path integral has been further developed into a sum over configurations displaying various patterns of trajectory homotopies (topological configurations), which, in the nonstationary case of quantum kinetics, reproduces some formerly unexplained details in the longitudinal resistivity observed in experiments.
Using the Homotopy Method to Find Periodic Solutions of Forced Nonlinear Differential Equations
ERIC Educational Resources Information Center
Fay, Temple H.; Lott, P. Aaron
2002-01-01
This paper discusses a result of Li and Shen which proves the existence of a unique periodic solution for the differential equation ẍ + kẋ + g(x,t) = ε(t), where k is a constant; g is continuous, continuously differentiable with respect to x, and is periodic of period P in the variable t; ε(t) is continuous…
Type II superstring field theory: geometric approach and operadic description
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Münster, Korbinian
2013-04-01
We outline the construction of type II superstring field theory leading to a geometric and algebraic BV master equation, analogous to Zwiebach's construction for the bosonic string. The construction uses the small Hilbert space. Elementary vertices of the non-polynomial action are described with the help of a properly formulated minimal area problem. They give rise to an infinite tower of superstring field products defining a {N} = 1 generalization of a loop homotopy Lie algebra, the genus zero part generalizing a homotopy Lie algebra. Finally, we give an operadic interpretation of the construction.
Falkner-Skan Boundary Layer Flow of a Sisko Fluid
NASA Astrophysics Data System (ADS)
Khan, Masood; Shahzad, Azeem
2012-09-01
In this paper, we investigate the steady boundary layer flow of a non-Newtonian fluid, represented by a Sisko fluid, over a wedge in a moving fluid. The equations of motion are derived for boundary layer flow of an incompressible Sisko fluid using appropriate similarity variables. The governing equations are reduced to a single third-order, highly nonlinear ordinary differential equation in the dimensionless stream function, which is then solved analytically using the homotopy analysis method. Several important parameters are discussed in this study, including the power-law index n, the material parameter A, the wedge shape factor b, and the skin friction coefficient Cf. A comprehensive comparison is made between the results for the Sisko and power-law fluids.
NASA Astrophysics Data System (ADS)
Naseem, Anum; Shafiq, Anum; Zhao, Lifeng; Farooq, M. U.
2018-06-01
This article addresses third-grade nanofluidic flow induced by a Riga plate; the Cattaneo-Christov theory is adopted to investigate thermal and mass diffusion, incorporating the recently introduced zero nanoparticle mass flux condition. The governing system of equations is nondimensionalized through relevant similarity transformations, and significant findings are obtained by using the optimal homotopy analysis method. The behaviors of the governing parameters for the velocity, temperature and concentration profiles are depicted graphically and also verified through three-dimensional patterns for some parameters. Values of the skin friction coefficient and Nusselt number are recorded and discussed. The current results reveal that the temperature and concentration profiles decline when the thermal and concentration relaxation parameters, respectively, are augmented.
Carbon nanotubes significance in Darcy-Forchheimer flow
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Rafique, Kiran; Muhammad, Taseer; Alsaedi, Ahmed; Ayub, Muhammad
2018-03-01
The present article examines Darcy-Forchheimer flow of water-based carbon nanotubes. The flow is induced by a curved stretchable surface. The heat transfer mechanism is analyzed in the presence of a convective heating process. The Xue nanofluid model is employed to study the characteristics of both single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs), and results for the two are obtained and compared. Appropriate transformations lead to a strongly nonlinear ordinary differential system. The optimal homotopy analysis method (OHAM) is used to develop the solution of the resulting system. The contributions of different variables to the velocity and temperature are studied. Further, the skin friction coefficient and local Nusselt number are analyzed graphically for both the SWCNT and MWCNT cases.
Incomplete Gröbner basis as a preconditioner for polynomial systems
NASA Astrophysics Data System (ADS)
Sun, Yang; Tao, Yu-Hui; Bai, Feng-Shan
2009-04-01
Preconditioning plays a critical role in numerical methods for large, sparse linear systems, and the same is true for nonlinear algebraic systems. In this paper an incomplete Gröbner basis (IGB) is proposed as a preconditioner for homotopy methods for polynomial systems of equations; it transforms a deficient system into a system with the same finite solutions but smaller degree. The reduced system can thus be solved faster. Numerical results show the efficiency of the preconditioner.
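The degree-reduction payoff described above can be made concrete: for a total-degree homotopy, the number of paths to track equals the Bézout number, the product of the equation degrees, so any preconditioner that lowers degrees shrinks the path count directly. A minimal sketch (our illustration, not the paper's IGB construction):

```python
import itertools
import math
import numpy as np

# For a total-degree homotopy, the start system g_i(x) = x_i^{d_i} - 1 has
# prod(d_i) roots (all combinations of roots of unity), and that Bezout
# number is exactly the number of paths a homotopy solver must track. A
# preconditioner that rewrites the system with smaller degrees therefore
# shrinks the path count, which is the speedup claimed above.
def start_roots(degrees):
    axes = [[np.exp(2j * np.pi * k / d) for k in range(d)] for d in degrees]
    return [np.array(c) for c in itertools.product(*axes)]

degrees = [2, 3]                  # e.g. one quadratic and one cubic equation
paths = math.prod(degrees)        # Bezout bound: 2 * 3 = 6 paths
roots = start_roots(degrees)      # one start root per path
```

Each start root is then tracked to a root of the target system; lowering the degrees lowers `paths` multiplicatively.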
Chern-Simons-Antoniadis-Savvidy forms and standard supergravity
NASA Astrophysics Data System (ADS)
Izaurieta, F.; Salgado, P.; Salgado, S.
2017-04-01
In the context of the so-called Chern-Simons-Antoniadis-Savvidy (ChSAS) forms, we use the methods of FDA decomposition in 1-forms to construct a four-dimensional ChSAS supergravity action for the Maxwell superalgebra. On the other hand, we use the extended Cartan homotopy formula to find a method that allows the separation of the ChSAS action into bulk and boundary contributions and permits the splitting of the bulk Lagrangian into pieces that reflect the particular subspace structure of the gauge algebra.
On the singular perturbations for fractional differential equation.
Atangana, Abdon
2014-01-01
The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of the fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We use three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, the new development of the variational iteration method, and the homotopy decomposition method. PMID:24683357
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report describes the current state of development of methods for calculating optimal orbital transfers with large numbers of burns. The first is the homotopy-motivated, so-called direction-correction method. So far this method has been partially tested with one solver; the final step has yet to be implemented. The second is the patched transfer method. This method is rooted in some simplifying approximations made to the original optimal control problem. The transfer is broken up into single-burn segments, each single-burn segment solved as a predictor step and the whole problem then solved with a corrector step.
Two and three dimensional grid generation by an algebraic homotopy procedure
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1990-01-01
An algebraic method for generating two- and three-dimensional grid systems for aerospace vehicles is presented. The method is based on algebraic procedures derived from homotopic relations for blending between inner and outer boundaries of any given configuration. Stable properties of homotopic maps have been exploited to provide near-orthogonality and specified constant spacing at the inner boundary. The method has been successfully applied to analytically generated blended wing-body configurations as well as discretely defined geometries such as the High-Speed Civil Transport Aircraft. Grid examples representative of the capabilities of the method are presented.
Modelling the aggregation process of cellular slime mold by the chemical attraction.
Atangana, Abdon; Vermeulen, P D
2014-01-01
We apply a comparatively new analytical method, the homotopy decomposition method (HDM), to solve a system of nonlinear partial differential equations arising in an attractor one-dimensional Keller-Segel dynamics system. Numerical solutions are given, and some properties show evidence of biologically practical dependence on the parameter values. The reliability of HDM and the reduction in computations give HDM a wider applicability.
Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design
NASA Technical Reports Server (NTRS)
Whorton, Mark; Buschek, Harald; Calise, Anthony J.
1996-01-01
Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.
NASA Astrophysics Data System (ADS)
Ahmad, S.; Farooq, M.; Javed, M.; Anjum, Aisha
2018-03-01
A current analysis is carried out to study theoretically the mixed convection characteristics of the squeezing flow of a Sutterby fluid in a squeezed channel. The constitutive equation of the Sutterby model is utilized to characterize the rheology of the squeezing phenomenon. Flow characteristics are explored with dual stratification. In the flowing fluid, which carries heat and mass transport, a first-order chemical reaction and radiative heat flux affect the transport phenomenon. The system of nonlinear governing equations is modeled and then solved by means of a convergent approach (the homotopy analysis method). Graphs are reported and illustrated for the emerging parameters. Through graphical explanations, the drag force and the rates of heat and mass transport are discussed for different pertinent parameters. It is found that the heat and mass transport rates decay as the double stratification parameters and the chemical reaction parameter become dominant. The present two-dimensional examination is applicable to some engineering processes and industrial fluid mechanics.
Rubab, Khansa; Mustafa, M
2016-01-01
This letter investigates the MHD three-dimensional flow of an upper-convected Maxwell (UCM) fluid over a bi-directional stretching surface by considering the Cattaneo-Christov heat flux model. This model has the tendency to capture the characteristics of thermal relaxation time. The governing partial differential equations, even after employing the boundary layer approximations, are nonlinear. Accurate analytic solutions for the velocity and temperature distributions are computed through the well-known homotopy analysis method (HAM). It is noticed that velocity decreases and temperature rises when a stronger magnetic field strength is accounted for. The penetration depth of temperature is a decreasing function of the thermal relaxation time. The analysis for the classical Fourier heat conduction law can be obtained as a special case of the present work. To our knowledge, the Cattaneo-Christov heat flux model for a three-dimensional viscoelastic flow problem is just introduced here.
NASA Astrophysics Data System (ADS)
Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.
In this study, we have constructed a mathematical model to investigate the heat source/sink effects in mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of various involved parameters on the pressure, velocity and temperature profiles are comprehensively studied. A graphical analysis is presented for various values of the problem parameters. The numerical values of the wall shear stress and Nusselt number are computed at both the upper and lower disks. Moreover, a graphical and tabular explanation of the critical values of the Frank-Kamenetskii parameter with respect to the other flow parameters is provided.
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied, as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
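The three-step pipeline (TSVD preconditioning, sparse recovery, MLEM refinement) can be sketched on a toy problem. Here plain ISTA stands in for the paper's homotopy-based l1 solver, and all sizes and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the three-step pipeline: (1) TSVD to reduce matrix
# coherence, (2) a sparsity-promoting nonnegative l1 solve (plain ISTA here,
# in place of the paper's homotopy solver), (3) MLEM refinement under a
# Poisson noise model. Sizes, rank, and lambda are illustrative assumptions.
m, n, k = 40, 100, 3
A = rng.random((m, n))                       # coherent, nonnegative system matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [5.0, 4.0, 3.0]
y = rng.poisson(A @ x_true).astype(float)    # Poisson-noisy measurements

# Step 1: truncated SVD preconditioning.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 20
A_t = np.diag(s[:r]) @ Vt[:r]                # reduced (r x n) system
y_t = U[:, :r].T @ y                         # data in the reduced basis

# Step 2: nonnegative ISTA for min 0.5*||A_t x - y_t||^2 + lam*||x||_1.
lam = 0.1
L = np.linalg.norm(A_t, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A_t.T @ (A_t @ x - y_t)
    x = np.maximum(x - g / L - lam / L, 0.0) # gradient step + soft threshold

# Step 3: MLEM refinement, warm-started from the sparse estimate.
x = np.maximum(x, 1e-12)                     # let MLEM adjust zeroed entries
sens = A.sum(axis=0)
for _ in range(50):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
```

The warm start matters because the multiplicative MLEM update leaves exact zeros fixed, so the sparse estimate is floored at a small positive value before refinement.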
Numerical algebraic geometry for model selection and its application to the life sciences
Gross, Elizabeth; Davis, Brent; Ho, Kenneth L.; Bates, Daniel J.
2016-01-01
Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology. PMID:27733697
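The "compute all critical points, then filter" strategy is easy to illustrate in one variable, where a polynomial gradient lets an eigenvalue-based root finder recover every critical point at once (polynomial homotopy continuation plays this role for multivariate systems; the objective below is our toy example, not from the paper):

```python
import numpy as np

# Nonconvex objective F(a) = (a^2 - 1)^2 + 0.3 a (our toy example). Its
# gradient dF/da = 4 a^3 - 4 a + 0.3 is a polynomial, so *all* critical
# points can be computed at once and then filtered for the global optimum,
# with no multistart local descent. np.roots (an eigenvalue method) does
# the root finding here.
F = lambda a: (a ** 2 - 1) ** 2 + 0.3 * a
crit = np.roots([4.0, 0.0, -4.0, 0.3])        # all critical points
real = crit[np.abs(crit.imag) < 1e-9].real    # keep the real ones
a_star = real[np.argmin(F(real))]             # filter: global minimizer
```

A local descent started near a = 1 would stop at the shallow minimum there; enumerating all critical points finds the deeper minimum near a ≈ -1.04.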
Three-dimensional flow of Prandtl fluid with Cattaneo-Christov double diffusion
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Aziz, Arsalan; Muhammad, Taseer; Alsaedi, Ahmed
2018-06-01
This research paper intends to investigate the 3D flow of a Prandtl liquid in the presence of improved heat conduction and mass diffusion models. The flow is created by a linearly bidirectional stretchable sheet. Thermal and concentration diffusions are considered by employing the Cattaneo-Christov double diffusion models. The boundary layer approach has been used to simplify the governing PDEs. Suitable nondimensional similarity variables lead to strongly nonlinear ODEs. The optimal homotopy analysis method (OHAM) is employed to develop the solutions. The role of various pertinent variables on the temperature and concentration is analyzed through graphs. Physical quantities such as the surface drag coefficients and the heat and mass transfer rates at the wall are also plotted and discussed. Our results indicate that the temperature and concentration are decreasing functions of the thermal and concentration relaxation parameters, respectively.
Approximate solution of space and time fractional higher order phase field equation
NASA Astrophysics Data System (ADS)
Shamseldeen, S.
2018-03-01
This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.
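The averaged-squared-residual selection of the convergence-control parameter can be sketched on a model problem (u' + u = 0, u(0) = 1). The iteration below is a simplified homotopy-style relaxation, not the paper's full deformation equations, and the grid and search bounds are our assumptions:

```python
import numpy as np
import sympy as sp

t, h = sp.symbols("t h")

# Model problem u' + u = 0, u(0) = 1 (exact solution exp(-t)). We build an
# approximation by a simplified homotopy-style relaxation carrying a
# convergence-control parameter h (our illustration, not Liao's full
# deformation equations), then pick h by minimizing the averaged squared
# residual on a grid, as described for OHAM above.
def residual(u):
    return sp.diff(u, t) + u            # R[u] = u' + u

terms = [sp.Integer(1)]                 # u_0 = 1 meets the initial condition
for _ in range(4):                      # u_m = h * integral of R[U_{m-1}]
    terms.append(sp.integrate(h * residual(sum(terms)), t))
U = sum(terms)                          # approximation, polynomial in t and h

ts = np.linspace(0.0, 1.0, 21)
def E(hval):                            # averaged squared residual E(h)
    R = sp.lambdify(t, residual(U).subs(h, hval), "numpy")
    return float(np.mean(R(ts) ** 2))

hs = np.linspace(-1.5, -0.5, 41)
h_opt = float(hs[np.argmin([E(v) for v in hs])])
```

The minimizer of E(h) on the grid is the "optimal value of h" in the sense used above; the residual at that h is orders of magnitude below its value at the interval endpoints.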
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Aziz, Arsalan; Muhammad, Taseer; Alsaedi, Ahmed
2017-09-01
The present study elaborates the three-dimensional flow of a Williamson nanoliquid over a nonlinear stretchable surface. The fluid flow obeys a Darcy-Forchheimer porous medium. A bidirectional nonlinear stretching surface generates the flow. A convective surface condition for heat transfer is taken into consideration. Further, the zero nanoparticle mass flux condition is imposed at the boundary. Effects of thermophoresis and Brownian diffusion are considered. The boundary layer assumption has been employed in the problem formulation. Convergent series solutions for the nonlinear governing system are established through the optimal homotopy analysis method (OHAM). Graphs have been sketched in order to analyze how the velocity, temperature and concentration distributions are affected by distinct emerging flow parameters. The skin friction coefficients and local Nusselt number are also computed and discussed.
Shehzad, Sabir Ali; Alsaedi, Ahmed; Hayat, Tasawar; Alhuthali, M. Shahab
2013-01-01
This paper looks at the series solutions of three-dimensional boundary layer flow. An Oldroyd-B fluid with variable thermal conductivity is considered. The flow is induced due to stretching of a surface. The analysis has been carried out in the presence of heat generation/absorption. Homotopy analysis is implemented in developing the series solutions to the governing flow and energy equations. Graphs are presented and discussed for various parameters of interest. A comparison of the present study with the existing limiting solution is shown and examined. PMID:24223780
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen
1990-01-01
One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.
NASA Technical Reports Server (NTRS)
Kirschner, S. M.; Samii, M. V.; Broaddus, S. R.; Doll, C. E.
1988-01-01
The Preliminary Orbit Determination System (PODS) provides early orbit determination capability in the Trajectory Computation and Orbital Products System (TCOPS) for a Tracking and Data Relay Satellite System (TDRSS)-tracked spacecraft. PODS computes a set of orbit states from an a priori estimate and six tracking measurements, consisting of any combination of TDRSS range and Doppler tracking measurements. PODS uses the homotopy continuation method to solve a set of nonlinear equations, and it is particularly effective for the case when the a priori estimate is not well known. Since range and Doppler measurements produce multiple states in PODS, a screening technique selects the desired state. PODS is executed in the TCOPS environment and can directly access all operational data sets. At the completion of the preliminary orbit determination, the PODS-generated state, along with additional tracking measurements, can be directly input to the differential correction (DC) process to generate an improved state. To validate the computational and operational capabilities of PODS, tests were performed using simulated TDRSS tracking measurements for the Cosmic Background Explorer (COBE) satellite and using real TDRSS measurements for the Earth Radiation Budget Satellite (ERBS) and the Solar Mesosphere Explorer (SME) spacecraft. The effects of various measurement combinations, varying arc lengths, and levels of degradation of the a priori state vector on the PODS solutions were considered.
NASA Astrophysics Data System (ADS)
Idrees, M.; Rehman, Sajid; Shah, Rehan Ali; Ullah, M.; Abbas, Tariq
2018-03-01
An analysis is performed of the fluid dynamics incorporating the variation of viscosity and thermal conductivity in an unsteady two-dimensional free surface flow of a viscous incompressible conducting fluid, taking into account the effect of a magnetic field. Surface tension is assumed to vary quadratically with temperature, while fluid viscosity and thermal conductivity are assumed to vary as linear functions of temperature. The boundary layer partial differential equations in Cartesian coordinates are transformed into a system of nonlinear ordinary differential equations (ODEs) by a similarity transformation. The developed nonlinear equations are solved analytically by the homotopy analysis method (HAM) and numerically by the shooting method. The effects of the natural parameters, such as the variable viscosity parameter A, variable thermal conductivity parameter N, Hartmann number Ma, film thickness, unsteadiness parameter S, thermocapillary number M and Prandtl number Pr, on the velocity and temperature profiles are investigated. The results for the surface skin friction coefficient f″(0), Nusselt number (heat flux) -θ′(0) and free surface temperature θ(1) are presented graphically and in tabular form.
Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network
NASA Astrophysics Data System (ADS)
Fallahpour, R.; Chakouvari, S.; Askari, H.
2015-03-01
In this paper, the Laplace Adomian decomposition method (LADM) is utilized for evaluating a rumor-spreading model. Firstly, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model incorporating a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is achieved for the problem, which can be readily employed for assessing the rumor model without any computer program. The obtained results are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and fruitful for analyzing equations with complicated terms such as the rumor model. By employing numerical methods, it is revealed that LADM is powerful and accurate for eliciting solutions of this model. Eventually, it is concluded that this method is appropriate for this problem and can provide researchers a very powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.
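The Adomian machinery behind LADM can be sketched on the logistic equation u' = u − u², u(0) = 1/2, a minimal rumor-like model (our illustration, not the paper's full model with forgetting). For this initial-value problem the Laplace inversion step L⁻¹[F(s)/s] reduces to integration from 0 to t:

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

# Adomian decomposition for the logistic (rumor-like) equation
# u' = u - u^2, u(0) = 1/2; exact solution 1/(1 + exp(-t)). For this
# initial-value problem the Laplace step reduces to integration from 0 to t,
# so the LADM recursion becomes
#     u_{n+1}(t) = integral_0^t (u_n - A_n) dt,
# where A_n are the Adomian polynomials of the nonlinearity N(u) = u^2.
N = lambda u: u ** 2
terms = [sp.Rational(1, 2)]                      # u_0 = u(0)
for n in range(4):
    phi = sum(lam ** k * uk for k, uk in enumerate(terms))
    A_n = sp.diff(N(phi), lam, n).subs(lam, 0) / sp.factorial(n)
    terms.append(sp.integrate(terms[n] - A_n, (t, 0, t)))

approx = sp.expand(sum(terms))                   # 1/2 + t/4 - t**3/48
```

The partial sum reproduces the Taylor expansion of the exact sigmoid solution, which is the sense in which such a series solution can be "assessed without any computer program."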
NASA Astrophysics Data System (ADS)
Saad, K. M.
2018-03-01
In this work we extend the standard model for a cubic isothermal auto-catalytic chemical system (CIACS) to a new model of a fractional cubic isothermal auto-catalytic chemical system (FCIACS) based on the Caputo (C), Caputo-Fabrizio (CF) and Atangana-Baleanu in the Liouville-Caputo sense (ABC) fractional time derivatives, respectively. We present approximate solutions for these extended models using the q-homotopy analysis transform method (q-HATM). We solve the FCIACS with the C derivative and compare our results with those obtained using the CF and ABC derivatives. The ranges of convergence of the solutions are found and the optimal values of h, the auxiliary parameter, are derived. Finally, these solutions are compared with numerical solutions of the various models obtained using finite differences, and excellent agreement is found.
Analytical study of Cattaneo-Christov heat flux model for a boundary layer flow of Oldroyd-B fluid
NASA Astrophysics Data System (ADS)
Abbasi, F. M.; Mustafa, M.; Shehzad, S. A.; Alhuthali, M. S.; Hayat, T.
2016-01-01
We investigate the Cattaneo-Christov heat flux model for a two-dimensional laminar boundary layer flow of an incompressible Oldroyd-B fluid over a linearly stretching sheet. Mathematical formulation of the boundary layer problems is given. The nonlinear partial differential equations are converted into the ordinary differential equations using similarity transformations. The dimensionless velocity and temperature profiles are obtained through optimal homotopy analysis method (OHAM). The influences of the physical parameters on the velocity and the temperature are pointed out. The results show that the temperature and the thermal boundary layer thickness are smaller in the Cattaneo-Christov heat flux model than those in the Fourier’s law of heat conduction. Project supported by the Deanship of Scientific Research (DSR) King Abdulaziz University, Jeddah, Saudi Arabia (Grant No. 32-130-36-HiCi).
Homotopy-Theoretic Study & Atomic-Scale Observation of Vortex Domains in Hexagonal Manganites
Li, Jun; Chiang, Fu-Kuo; Chen, Zhen; Ma, Chao; Chu, Ming-Wen; Chen, Cheng-Hsuan; Tian, Huanfang; Yang, Huaixin; Li, Jianqi
2016-01-01
Essential structural properties of the non-trivial “string-wall-bounded” topological defects in hexagonal manganites are studied through homotopy group theory and spherical aberration-corrected scanning transmission electron microscopy. The appearance of a “string-wall-bounded” configuration in RMnO3 is shown to be strongly linked with the transformation of the degeneracy space. The defect core regions (~50 Å) mainly adopt the continuous U(1) symmetry of the high-temperature phase, which is essential for the formation and proliferation of vortices. Direct visualization of vortex strings at atomic scale provides insight into the mechanisms and macro-behavior of topological defects in crystalline materials. PMID:27324701
NASA Astrophysics Data System (ADS)
Sarıaydın, Selin; Yıldırım, Ahmet
2010-05-01
In this paper, we studied the solitary wave solutions of the (2+1)-dimensional Boussinesq equation u_tt - u_xx - u_yy - (u^2)_xx - u_xxxx = 0 and the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation u_xt - 6u_x^2 + 6uu_xx - u_xxxx - u_yy - u_zz = 0. Using the homotopy perturbation method, an explicit numerical solution is calculated in the form of a convergent power series with easily computable components. To illustrate the application of the method, numerical results are derived from the computed components of the homotopy perturbation series. The numerical solutions are compared with the known analytical solutions. Results derived from our method are shown graphically.
Ene, Remus-Daniel; Marinca, Vasile; Marinca, Bogdan
2016-01-01
Analytic approximate solutions using Optimal Homotopy Perturbation Method (OHPM) are given for steady boundary layer flow over a nonlinearly stretching wall in presence of partial slip at the boundary. The governing equations are reduced to nonlinear ordinary differential equation by means of similarity transformations. Some examples are considered and the effects of different parameters are shown. OHPM is a very efficient procedure, ensuring a very rapid convergence of the solutions after only two iterations.
Flow and Heat Transfer in Sisko Fluid with Convective Boundary Condition
Malik, Rabia; Khan, Masood; Munir, Asif; Khan, Waqar Azeem
2014-01-01
In this article, we have studied the flow and heat transfer of a Sisko fluid with a convective boundary condition over a non-isothermal stretching sheet. The flow is driven by a non-linearly stretching sheet in the presence of a uniform transverse magnetic field. The partial differential equations governing the problem have been reduced by similarity transformations to ordinary differential equations. The transformed coupled ordinary differential equations are then solved analytically using the homotopy analysis method (HAM) and numerically by the shooting method. Effects of different parameters, such as the power-law index, magnetic parameter, stretching parameter, generalized Prandtl number Pr and generalized Biot number, are presented graphically. It is found that the temperature profile increases with the increasing value of and whereas it decreases for . Numerical values of the skin-friction coefficient and local Nusselt number are tabulated for various physical situations. In addition, a comparison between the HAM and exact solutions is made as a special case, and the excellent agreement between the results enhances confidence in the HAM results. PMID:25285822
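Several entries above pair HAM with a numerical shooting method for similarity-reduced boundary-layer equations. As a hedged, self-contained illustration of the shooting technique (using the classical Blasius equation rather than the Sisko model from this paper), one might write:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting method for the Blasius equation f''' + 0.5*f*f'' = 0 with
# f(0) = f'(0) = 0 and f'(eta -> infinity) = 1, a classical stand-in
# for similarity-reduced boundary-layer flow problems.
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s):
    # Integrate with guessed f''(0) = s and measure the miss at "infinity".
    sol = solve_ivp(rhs, [0.0, 10.0], [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

s = brentq(residual, 0.1, 1.0)  # root-find on the shooting parameter
print(round(s, 4))              # f''(0), approximately 0.3321
```

The recovered wall value f''(0) is the quantity that enters the skin-friction coefficient in such problems.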
NASA Astrophysics Data System (ADS)
Cartier, Pierre; DeWitt-Morette, Cecile
2006-11-01
Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.
NASA Astrophysics Data System (ADS)
Ramzan, M.; Gul, Hina; Dong Chung, Jae
2017-11-01
A mathematical model is designed to study the flow of an MHD Jeffery nanofluid past a vertically inclined stretched cylinder near a stagnation point. The flow analysis is performed in the presence of thermal radiation, mixed convection and chemical reaction. The influence of thermal and solutal stratification with a slip boundary condition is also considered. Appropriate transformations are used to convert the nonlinear partial differential equations into highly nonlinear ordinary differential equations. Convergent series solutions of the problem are established via the renowned Homotopy Analysis Method (HAM). Graphical illustrations are plotted to depict the effects of the prominent parameters on all involved distributions. Tables of numerical values for important physical quantities such as the skin friction coefficient and the Nusselt and Sherwood numbers are also given. Comparative studies with a previously examined work are included to endorse our results. It is noticed that the thermal stratification parameter has a diminishing effect on the temperature distribution. Moreover, the velocity field is an increasing function of the curvature parameter and a decreasing function of the slip parameter.
Further distinctive investigations of the Sumudu transform
NASA Astrophysics Data System (ADS)
Belgacem, Fethi Bin Muhammad; Silambarasan, Rathinavel
2017-01-01
The Sumudu transform of a time function f(t) is computed by making the transform variable u of the Sumudu a factor of the function, f(ut), and then integrating against exp(-t). Because u enters as a factor of the argument, f(ut) preserves units and dimensions; this preservation property distinguishes the Sumudu from other integral transforms. With this definition, the complete set of related properties is derived for the Sumudu transform. A fragment of a symbolic C++ program is given for computing the Sumudu as a series, and a Maple procedure is given for computing it in closed form. The method proposed herein depends neither on homotopy methods such as HPM and HAM, nor on decomposition methods such as ADM.
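The defining integral is easy to reproduce symbolically. A minimal SymPy sketch (not the symbolic C++ or Maple code from the paper) of the Sumudu transform S[f](u) = ∫₀^∞ f(ut) exp(-t) dt:

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)

def sumudu(f):
    # S[f](u) = Integral(f(u*t) * exp(-t), (t, 0, oo)):
    # u enters as a factor of t, so f(u*t) keeps the units of f(t).
    return sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo))

print(sumudu(t))          # u
print(sumudu(sp.sin(t)))  # u/(u**2 + 1)
```

Note that the transform of t is u itself, reflecting the unit-preserving property the abstract highlights.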
NASA Astrophysics Data System (ADS)
Khan, Kashif Ali; Butt, Asma Rashid; Raza, Nauman
2018-03-01
In this study, we examine the unsteady two-dimensional boundary layer flow with heat and mass transfer of a Casson fluid past a stretching sheet in the presence of wall mass transfer, neglecting the effects of viscous dissipation. A chemical reaction of linear order is also invoked. Similarity transformations have been applied to reduce the governing equations of momentum, energy and mass to nonlinear ordinary differential equations; the homotopy analysis method (HAM) is then applied to solve these equations. Numerical work is carried out with the well-known software MATHEMATICA to examine the non-dimensional velocity, temperature and concentration profiles, and the results are presented graphically. The skin friction (viscous drag), local Nusselt number (rate of heat transfer) and Sherwood number (rate of mass transfer) are discussed and presented in tabular form for the several factors that govern the flow model.
Analysis of Mathematical Modelling on Potentiometric Biosensors
Mehala, N.; Rajendran, L.
2014-01-01
A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories. PMID:25969765
Convective boundary conditions effect on peristaltic flow of a MHD Jeffery nanofluid
NASA Astrophysics Data System (ADS)
Kothandapani, M.; Prakash, J.
2016-03-01
This work describes the influence of MHD, chemical reaction, thermal radiation and a heat source/sink parameter on the peristaltic flow of Jeffery nanofluids in a tapered asymmetric channel with slip and convective boundary conditions. The governing equations of a nanofluid are first formulated and then simplified under the long-wavelength and low-Reynolds-number approximations. The equations for nanoparticle temperature and concentration are coupled; hence, the homotopy perturbation method has been used to obtain solutions for the temperature and concentration of nanoparticles. Analytical solutions for the axial velocity, stream function and pressure gradient have also been constructed. The effects of various influential flow parameters are illustrated with the help of graphs. The analysis indicates that the temperature of the nanofluid decreases with increasing heat transfer Biot number and chemical reaction parameter, but shows the converse behavior with respect to the mass transfer Biot number and the heat source/sink parameter.
Free Convection Nanofluid Flow in the Stagnation-Point Region of a Three-Dimensional Body
Farooq, Umer
2014-01-01
Analytical results are presented for a steady three-dimensional free convection flow in the stagnation-point region over a general curved isothermal surface placed in a nanofluid. The momentum equations in the x- and y-directions, the energy balance equation and the nanoparticle concentration equation are reduced to a set of four fully coupled nonlinear differential equations under appropriate similarity transformations. The well-known optimal homotopy analysis method (OHAM) is used to obtain the solution explicitly, whose convergence is then checked in detail. Besides, the effects of the physical parameters, such as the Lewis number, the Brownian motion parameter, the thermophoresis parameter, and the buoyancy ratio, on the profiles of velocities, temperature, and concentration are studied and discussed. Furthermore, the local skin friction coefficients in the x- and y-directions, the local Nusselt number, and the local Sherwood number are examined for various values of the physical parameters. PMID:25114954
NASA Astrophysics Data System (ADS)
Saeed Butt, Adnan; Ali, Asif
2014-01-01
The present article aims to investigate entropy effects in magnetohydrodynamic flow and heat transfer over an unsteady permeable stretching surface. The time-dependent partial differential equations are converted into non-linear ordinary differential equations by suitable similarity transformations. The solutions of these equations are computed analytically by the Homotopy Analysis Method (HAM) and numerically by a MATLAB built-in routine. The obtained results are compared with the existing literature under limiting cases to validate our study. The effects of the unsteadiness parameter, magnetic field parameter, suction/injection parameter, Prandtl number, group parameter and Reynolds number on the flow and heat transfer characteristics are analysed with the aid of graphs and tables. Moreover, the effects of these parameters on the entropy generation number and Bejan number are shown graphically. It is found that unsteadiness and the presence of a magnetic field augment the entropy production.
Hayat, T.; Hussain, Zakir; Alsaedi, A.; Farooq, M.
2016-01-01
This article examines the effects of homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of Powell-Eyring fluid by a stretching cylinder. The nonlinear partial differential equations of momentum, energy and concentration are reduced to nonlinear ordinary differential equations. Convergent solutions of the momentum, energy and reaction equations are developed using the homotopy analysis method (HAM). This method is very efficient for developing series solutions of highly nonlinear differential equations, and it does not depend on any small or large parameter, unlike other methods such as the perturbation method and the δ-perturbation expansion method. More accurate results are obtained by increasing the order of approximation. Effects of different parameters on the velocity, temperature and concentration distributions are sketched and discussed. A comparison of the present study with previously published work is also made in the limiting sense. Numerical values of the skin friction coefficient and Nusselt number are also computed and analyzed. It is noticed that the flow accelerates for large values of the Powell-Eyring fluid parameter. Further, the temperature profile decreases and the concentration profile increases as the Powell-Eyring fluid parameter is enhanced. The concentration distribution is a decreasing function of the homogeneous reaction parameter, while the heterogeneous reaction parameter has the opposite influence. PMID:27280883
Hpm of Estrogen Model on the Dynamics of Breast Cancer
NASA Astrophysics Data System (ADS)
Govindarajan, A.; Balamuralitharan, S.; Sundaresan, T.
2018-04-01
We develop a deterministic mathematical model of the dynamics of breast cancer with immune response. This is a population model comprising classes of normal cells, tumor cells, immune cells and estrogen. The effects of estrogen are incorporated in the model, and the results show that the presence of excess estrogen increases the risk of developing breast cancer. Furthermore, an approximate solution of the nonlinear differential equations is obtained by the Homotopy Perturbation Method (HPM). He's HPM is an effective and accurate technique for solving nonlinear differential equations directly, and the approximate solution obtained with this method agrees well with the actual behavior of the model.
Possible quantum algorithm for the Lipshitz-Sarkar-Steenrod square for Khovanov homology
NASA Astrophysics Data System (ADS)
Ospina, Juan
2013-05-01
Recently the celebrated Khovanov homology was introduced as a target for topological quantum computation, given that Khovanov homology provides a generalization of the Jones polynomial; it is then possible to contemplate a generalization of the Aharonov-Jones-Landau algorithm. Recently, Lipshitz and Sarkar introduced a space-level refinement of Khovanov homology, called the Khovanov homotopy type. This refinement induces a Steenrod square operation Sq2 on Khovanov homology, which they describe explicitly, and some computations of Sq2 were presented. In particular, examples were shown of links with identical integral Khovanov homology but distinct Khovanov homotopy types. In the present work we introduce possible quantum algorithms for the Lipshitz-Sarkar-Steenrod square for Khovanov homology and their possible simulations using computer algebra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine
This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology explores finding topological features and scalar field visualization, and vector field topology explores finding topological features and vector field visualization.
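For the Betti numbers mentioned above, a small worked example may help; the complex and matrix below are illustrative, not from the slides. For the hollow triangle (three vertices, three edges, no filled face), the ranks of the boundary matrices give b0 = 1 (one connected component) and b1 = 1 (one loop):

```python
import numpy as np

# Boundary matrix d1 maps edges (v0v1, v0v2, v1v2) to vertices (v0, v1, v2).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)  # = 2
b0 = 3 - rank_d1                     # #vertices - rank d1 = 1 component
b1 = (3 - rank_d1) - 0               # dim ker d1 - rank d2 (no 2-simplices) = 1 loop
print(b0, b1)                        # 1 1
```

Filling in the triangular face would add a rank-1 boundary matrix d2 and kill the loop, giving b1 = 0.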
A Tutorial Review on Fractal Spacetime and Fractional Calculus
NASA Astrophysics Data System (ADS)
He, Ji-Huan
2014-11-01
This tutorial review of fractal-Cantorian spacetime and fractional calculus begins with Leibniz's notation for the derivative without limits, which can be generalized to discontinuous media such as the fractal derivative and the q-derivative of quantum calculus. Fractal spacetime is used to elucidate some basic properties of fractals, which are the foundation of fractional calculus, and El Naschie's mass-energy equation for dark energy. The variational iteration method is used to introduce the definition of fractional derivatives. The fractal derivative is explained geometrically, and the q-derivative is motivated by quantum mechanics. Some effective analytical approaches to fractional differential equations, e.g., the variational iteration method, the homotopy perturbation method, the exp-function method, the fractional complex transform, and the Yang-Laplace transform, are outlined and the main solution processes are given.
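The fractional derivatives surveyed in such reviews admit a simple closed form on power functions. As a hedged sketch (the helper name is ours, not from the review), the Caputo rule D^α t^n = Γ(n+1)/Γ(n+1-α) t^(n-α) for integer n ≥ 1 can be checked symbolically:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def caputo_power(n, alpha):
    # Caputo fractional derivative of t**n (integer n >= 1, 0 < alpha < n + 1):
    # D^alpha t^n = Gamma(n+1) / Gamma(n+1-alpha) * t^(n-alpha)
    return sp.gamma(n + 1) / sp.gamma(n + 1 - alpha) * t**(n - alpha)

# Half-derivative of t:
half = caputo_power(1, sp.Rational(1, 2))
print(sp.simplify(half))  # 2*sqrt(t)/sqrt(pi)
```

Setting α to an integer recovers the classical derivative, e.g. caputo_power(2, 1) gives 2t.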
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
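Homotopy algorithms of the kind mentioned above deform an easy problem into the target one while tracking the solution. A minimal Newton-homotopy sketch (the system F is an arbitrary example, not from the report):

```python
import numpy as np
from scipy.optimize import fsolve

def F(x):
    # Example nonlinear system: x^2 + y^2 = 4 and x*y = 1.
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])

x = np.array([2.0, 0.0])  # easy starting point x0
Fx0 = F(x)

# Newton homotopy H(x, lam) = F(x) - (1 - lam) * F(x0): x0 solves H = 0 at
# lam = 0, and any root at lam = 1 solves F(x) = 0. Track the root in steps,
# warm-starting each solve from the previous one.
for lam in np.linspace(0.0, 1.0, 21):
    x = fsolve(lambda y, l=lam: F(y) - (1.0 - l) * Fx0, x)

print(x, np.linalg.norm(F(x)))
```

The warm start from the previous step is what makes each intermediate solve cheap, the same property the congruence approximations above exploit for Jacobian linearizations.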
Benhammouda, Brahim; Vazquez-Leal, Hector
2016-01-01
This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it follows a simple procedure based on a few straightforward steps and can be combined with any analytical method other than the DTM, such as the homotopy perturbation method.
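The differential transform method itself reduces an ODE to a coefficient recurrence. A hedged toy sketch for y' = y, y(0) = 1 (not the delay equations treated in the paper), using the DTM rule that the transform of y' is (k+1)Y(k+1):

```python
import math

# DTM for y' = y, y(0) = 1: (k+1)*Y(k+1) = Y(k), hence Y(k) = 1/k!.
N = 10
Y = [1.0]
for k in range(N):
    Y.append(Y[k] / (k + 1))

# Evaluate the truncated series y(t) = sum Y(k) * t**k at t = 0.5.
t = 0.5
approx = sum(c * t**k for k, c in enumerate(Y))
print(abs(approx - math.exp(t)))  # the series recovers exp(t) to high accuracy
```

The method of steps in the paper applies such recurrences piecewise on successive delay intervals.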
NASA Astrophysics Data System (ADS)
Renteln, Paul
2013-11-01
Preface; 1. Linear algebra; 2. Multilinear algebra; 3. Differentiation on manifolds; 4. Homotopy and de Rham cohomology; 5. Elementary homology theory; 6. Integration on manifolds; 7. Vector bundles; 8. Geometric manifolds; 9. The degree of a smooth map; Appendixes; References; Index.
Formality Theorem for Hochschild Cochains via Transfer
NASA Astrophysics Data System (ADS)
Dolgushev, Vasily
2011-08-01
We construct a 2-colored operad Ger^+_∞ which, on the one hand, extends the operad Ger_∞ governing homotopy Gerstenhaber algebras and, on the other hand, extends the 2-colored operad governing open-closed homotopy algebras. We show that Tamarkin's Ger_∞-structure on the Hochschild cochain complex C•(A, A) of an A_∞-algebra A extends naturally to a Ger^+_∞-structure on the pair (C•(A, A), A). We show that a formality quasi-isomorphism for the Hochschild cochains of the polynomial algebra can be obtained via transfer of this Ger^+_∞-structure to the cohomology of the pair (C•(A, A), A). We show that Ger^+_∞ is a sub DG operad of the first sheet E1(SC) of the homology spectral sequence for the Fulton-MacPherson version SC of Voronov's Swiss Cheese operad. Finally, we prove that the DG operads Ger^+_∞ and E1(SC) are non-formal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Masood; Malik, Rabia; Munir, Asif
In this article, the mixed convective heat transfer to a Sisko fluid over a radially stretching surface in the presence of convective boundary conditions is investigated. The viscous dissipation and thermal radiation effects are also taken into account. Suitable transformations are applied to convert the governing partial differential equations into a set of nonlinear coupled ordinary differential equations. The analytical solution of the governing problem is obtained using the homotopy analysis method (HAM). Additionally, these analytical results are compared with numerical results obtained by the shooting technique. The obtained results for the velocity and temperature are analyzed graphically for several physical parameters for the assisting and opposing flows. It is found that the effect of the buoyancy parameter is more prominent in the assisting flow than in the opposing flow. Further, numerical values are given in tabular form for the local skin friction coefficient and local Nusselt number. A remarkable agreement is noticed when comparing the present results with results reported in the literature as a special case.
Active and passive controls of Jeffrey nanofluid flow over a nonlinear stretching surface
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Aziz, Arsalan; Muhammad, Taseer; Alsaedi, Ahmed
This communication explores the magnetohydrodynamic (MHD) boundary-layer flow of a Jeffrey nanofluid over a nonlinear stretching surface with active and passive controls of nanoparticles. A nonlinear stretching surface generates the flow. Effects of thermophoresis and Brownian diffusion are considered. The Jeffrey fluid is electrically conducting and subject to a non-uniform magnetic field. Low magnetic Reynolds number and boundary-layer approximations are adopted in the mathematical modelling. Passive control of nanoparticles is imposed through the condition of zero nanoparticle mass flux at the surface, which models particles being impelled away from the surface. Convergent series solutions of the nonlinear governing system are established through the optimal homotopy analysis method (OHAM). Graphs are sketched to analyze how the temperature and concentration distributions are affected by distinct physical flow parameters. The skin friction coefficient and the local Nusselt and Sherwood numbers are also computed and analyzed. Our findings show that the temperature and concentration distributions are increasing functions of the Hartman number and the thermophoresis parameter.
NASA Astrophysics Data System (ADS)
Khan, Noor Saeed; Gul, Taza; Khan, Muhammad Altaf; Bonyah, Ebenezer; Islam, Saeed
Mixed convection in gravity-driven non-Newtonian nanofluid film (Casson and Williamson) flow containing both nanoparticles and gyrotactic microorganisms along a convectively heated vertical surface is investigated. The actively controlled nanofluid model boundary conditions are used to explore the liquid film flow. The study presents an analytical approach to non-Newtonian thin-film nanofluid bioconvection based on the physical mechanisms responsible for the nanoparticles and the base fluid, such as Brownian motion and thermophoresis. Both fluids behave almost identically with respect to all pertinent parameters, except that the effect of the Schmidt number on the microorganism density function is opposite. Ordinary differential equations together with the boundary conditions are obtained through similarity variables from the governing equations of the problem, and are solved by HAM (Homotopy Analysis Method). The solutions are illustrated through graphs which show the influence of all the parameters. The study is relevant to novel microbial fuel cell technologies combining nanofluids with bioconvection phenomena.
Optimal solutions for the evolution of a social obesity epidemic model
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef
2017-06-01
In this work, a novel modification of the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of the social obesity epidemic model. The incidence of excess weight and obesity in the adult population, and a prediction of its behavior in the coming years, is analyzed using the modified algorithm. The proposed method increases the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way of choosing an optimal value of the auxiliary parameter via minimizing the total residual error is considered. A graphical comparison of the obtained results with the standard HPM explicitly reveals the accuracy and efficiency of the developed scheme.
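To make the standard HPM construction concrete, here is a hedged toy computation (our own example, not the obesity model from the paper): for y' = -y², y(0) = 1, embed the parameter p via v' + p v² = 0 and expand v = Σ p^k v_k, giving v_k' = -Σ_{i+j=k-1} v_i v_j with v_k(0) = 0; summing the components at p = 1 reproduces the series of the exact solution 1/(1+t):

```python
import sympy as sp

t = sp.symbols('t')

# HPM for y' = -y**2, y(0) = 1 (exact solution 1/(1 + t)).
N = 5
v = [sp.Integer(1)]  # v0 solves v0' = 0 with v0(0) = 1
for k in range(1, N):
    # coefficient of p**(k-1) in -v**2 drives the k-th order equation
    rhs = -sum(v[i] * v[k - 1 - i] for i in range(k))
    v.append(sp.integrate(rhs, t))  # v_k(0) = 0: the antiderivative needs no constant

approx = sp.expand(sum(v))
print(approx)  # equals 1 - t + t**2 - t**3 + t**4
```

The modification proposed in the paper would additionally tune an auxiliary parameter in the boundary condition to minimize the residual of such a truncated series.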
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM-2. A discussion of numerical methods includes the topics of asynchronous numerical solution of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar multi-cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
NASA Astrophysics Data System (ADS)
Senthamarai, R.; Jana Ranjani, R.
2018-04-01
In this paper, a mathematical model of an amperometric biosensor with mixed enzyme kinetics and diffusion limitation in the case of substrate inhibition has been developed. The model is based on a time-dependent reaction-diffusion equation containing a nonlinear term related to the non-Michaelis-Menten kinetics of the enzymatic reaction. A solution for the concentration of the substrate has been derived for all values of the parameters using the homotopy perturbation method. All the approximate analytical expressions for the substrate concentration are compared with simulation results obtained with a Scilab/Matlab program, and a satisfactory agreement between them is found.
UCAV path planning in the presence of radar-guided surface-to-air missile threats
NASA Astrophysics Data System (ADS)
Zeitz, Frederick H., III
This dissertation addresses the problem of path planning for unmanned combat aerial vehicles (UCAVs) in the presence of radar-guided surface-to-air missiles (SAMs). The radars, collocated with SAM launch sites, operate within the structure of an Integrated Air Defense System (IADS) that permits communication and cooperation between individual radars. The problem is formulated in the framework of the interaction between three sub-systems: the aircraft, the IADS, and the missile. The main features of this integrated model are as follows. The aircraft radar cross section (RCS) depends explicitly on both the aspect and bank angles; hence, the RCS and aircraft dynamics are coupled. The probabilistic nature of IADS tracking is accounted for; namely, the probability that the aircraft has been continuously tracked by the IADS depends on the aircraft RCS and range from the perspective of each radar within the IADS. Finally, the requirement to maintain tracking prior to missile launch and during missile flyout is also modeled. Based on this model, the problem of UCAV path planning is formulated as a minimax optimal control problem, with the aircraft bank angle serving as control. Necessary conditions of optimality for this minimax problem are derived. Based on these necessary conditions, properties of the optimal paths are derived. These properties are used to discretize the dynamic optimization problem into a finite-dimensional nonlinear programming problem that can be solved numerically. Properties of the optimal paths are also used to initialize the numerical procedure. A homotopy method is proposed to solve the finite-dimensional nonlinear programming problem, and a heuristic method is proposed to improve the discretization during the homotopy process. Based upon the properties of numerical solutions, a method is proposed for parameterizing and storing information for later recall in flight to permit rapid replanning in response to changing threats.
Illustrative examples are presented that confirm the standard flying tactics of "denying range, aspect, and aim," by yielding flight paths that "weave" to avoid long exposures of aspects with large RCS.
Modular operads and the quantum open-closed homotopy algebra
NASA Astrophysics Data System (ADS)
Doubek, Martin; Jurčo, Branislav; Münster, Korbinian
2015-12-01
We verify that certain algebras appearing in string field theory are algebras over the Feynman transform of modular operads, which we describe explicitly. An equivalent description in terms of solutions of generalized BV master equations is explained from the operadic point of view.
Power-limited low-thrust trajectory optimization with operation point detection
NASA Astrophysics Data System (ADS)
Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng
2018-06-01
The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data from the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described by a finite number of operation points characterized by different pairs of thruster input power. In order to guarantee the integration accuracy for the discrete power-limited problem, an operation point detection technique is embedded in a fixed-step fourth-order Runge-Kutta algorithm. Moreover, the logarithmic homotopy method and a normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.
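The operation point detection idea, locating a switching instant inside a fixed Runge-Kutta step, can be sketched as follows. This is a hedged toy reconstruction, not the authors' code: the dynamics f and switch function g below are illustrative stand-ins for the spacecraft state equations and the power operation point condition.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_with_detection(f, g, t0, y0, h, n):
    """Fixed-step RK4; when the switch function g changes sign inside a
    step, bisect on the substep size to locate the crossing time."""
    t, y, events = t0, y0, []
    for _ in range(n):
        y_new = rk4_step(f, t, y, h)
        if g(t, y) * g(t + h, y_new) < 0:       # crossing inside this step
            lo, hi = 0.0, h
            for _ in range(60):                 # bisection on the substep size
                mid = 0.5 * (lo + hi)
                if g(t, y) * g(t + mid, rk4_step(f, t, y, mid)) < 0:
                    hi = mid
                else:
                    lo = mid
            events.append(t + 0.5 * (lo + hi))
        t, y = t + h, y_new
    return y, events

f = lambda t, y: -y          # stand-in dynamics (not the spacecraft model)
g = lambda t, y: y - 0.5     # illustrative "operation point" switch: y crosses 0.5
y_end, events = integrate_with_detection(f, g, 0.0, 1.0, 0.1, 20)
# analytic crossing time for this toy problem: ln 2 ≈ 0.6931
```

The refined event time can then be used to split the step at the switch, which is what preserves integration accuracy across discrete operation-point changes.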
NASA Astrophysics Data System (ADS)
Ahmed, Naveed; Adnan; Khan, Umar; Mohyud-Din, Syed Tauseef; Manzoor, Raheela
2017-05-01
This paper aims to study the flow of a nanofluid in the presence of viscous dissipation in an oblique channel (with nonparallel plane walls). For the thermal conductivity of the nanofluid, the KKL model is utilized. Water is taken as the base fluid and is assumed to contain solid nanoparticles of copper oxide. The appropriate set of partial differential equations is transformed into a self-similar system with the help of feasible similarity transformations. The solution of the model is obtained analytically and, to ensure the validity of the analytical solution, a numerical one is also calculated. The homotopy analysis method (HAM) and the Runge-Kutta numerical method (coupled with a shooting technique) have been employed for this purpose. The influence of the different flow parameters on the velocity, thermal field, skin friction coefficient and local rate of heat transfer has been discussed with the help of graphs. Furthermore, a graphical comparison between the local rate of heat transfer in regular fluids and nanofluids has been made, which shows that heat transfer is more rapid in nanofluids than in regular fluids.
Hayat, Tasawar; Ashraf, Muhammad Bilal; Alsulami, Hamed H.; Alhuthali, Muhammad Shahab
2014-01-01
The objective of the present research is to examine the thermal radiation effect in three-dimensional mixed convection flow of a viscoelastic fluid. The boundary layer analysis has been discussed for flow past an exponentially stretching surface with convective conditions. The resulting partial differential equations are reduced to a system of nonlinear ordinary differential equations using appropriate transformations. The series solutions are developed through a modern technique known as the homotopy analysis method. Convergent expressions for the velocity components and temperature are derived. The solutions obtained depend on seven parameters: the viscoelastic parameter, mixed convection parameter, ratio parameter, temperature exponent, Prandtl number, Biot number and radiation parameter. A systematic study is performed to analyze the impact of these parameters on the velocity and temperature, the skin friction coefficients and the local Nusselt number. It is observed that the mixed convection parameter plays opposite roles in the momentum and thermal boundary layers. The thermal boundary layer is found to decrease when the ratio parameter, Prandtl number and temperature exponent are increased. The local Nusselt number is an increasing function of the viscoelastic parameter and Biot number. The radiation parameter has the opposite effect on the Nusselt number compared with the viscoelastic parameter. PMID:24608594
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langet, Hélène; Laboratoire des Signaux et Systèmes, CentraleSupélec, Gif-sur-Yvette F-91192; Center for Visual Computing, CentraleSupélec, Châtenay-Malabry F-92295
2015-09-15
Purpose: This paper addresses the reconstruction of x-ray cone-beam computed tomography (CBCT) for interventional C-arm systems. Subsampling of CBCT is a significant issue with C-arms due to their slow rotation and to the low frame rate of their flat panel x-ray detectors. The aim of this work is to propose a novel method able to handle the subsampling artifacts generally observed with analytical reconstruction, through a content-driven hierarchical reconstruction based on compressed sensing. Methods: The central idea is to proceed with a hierarchical method where the most salient features (high intensities or gradients) are reconstructed first to reduce the artifacts these features induce. These artifacts are addressed first because their presence contaminates less salient features. Several hierarchical schemes aiming at streak artifact reduction are introduced for C-arm CBCT: the empirical orthogonal matching pursuit approach with the ℓ₀ pseudonorm for reconstructing sparse vessels; a convex variant using homotopy with the ℓ₁-norm constraint of compressed sensing, for reconstructing sparse vessels over a nonsparse background; homotopy with total variation (TV); and a novel empirical extension to nonlinear diffusion (NLD). Such principles are implemented with penalized iterative filtered backprojection algorithms. For soft-tissue imaging, the authors compare the use of TV and NLD filters as sparsity constraints, both optimized with the alternating direction method of multipliers, using a threshold for TV and a nonlinear weighting for NLD. Results: The authors show on simulated data that their approach provides fast convergence to good approximations of the solution of the TV-constrained minimization problem introduced by the compressed sensing theory. Using C-arm CBCT clinical data, the authors show that both TV and NLD can deliver improved image quality by reducing streaks.
Conclusions: A flexible compressed-sensing-based algorithmic approach is proposed that is able to accommodate a wide range of constraints. It is successfully applied to C-arm CBCT images that may not be well approximated by piecewise constant functions.
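The homotopy-with-ℓ₁ ingredient above can be sketched generically as iterative soft thresholding (ISTA) with a continuation schedule on the regularization weight. This is a toy compressed-sensing recovery on a random sensing matrix, not the authors' penalized iterative filtered backprojection:

```python
import numpy as np

def ista_homotopy(A, b, lam_start=1.0, lam_end=0.01, n_stages=10, iters=300):
    """ISTA with a continuation ("homotopy") schedule on the l1 weight:
    start with strong shrinkage, relax it geometrically, and warm-start
    each stage from the previous stage's solution."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    for lam in np.geomspace(lam_start, lam_end, n_stages):
        for _ in range(iters):
            z = x - A.T @ (A @ x - b) / L         # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # underdetermined sensing matrix
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]            # sparse ground truth ("vessels")
b = A @ x_true                                    # subsampled measurements
x_hat = ista_homotopy(A, b)                       # recovers the 3-sparse signal
```

The warm-started decreasing-weight schedule plays the same role as the hierarchical reconstruction in the paper: strongly shrunk early stages fix the most salient components first.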
Homotopy Types and Social Theory: Theoretical Foundations of Strategic Dynamics
2016-06-15
actor assesses whether to remain in the current relational pattern or to change a relation type. Further, each contemplated alteration will usually...1a is punitive, seeking advantage from the actions that aroused the negative affect. Axiom 1b is self-aggrandizing, by interpreting (excessively or
AICHA: An atlas of intrinsic connectivity of homotopic areas.
Joliot, Marc; Jobard, Gaël; Naveau, Mikaël; Delcroix, Nicolas; Petit, Laurent; Zago, Laure; Crivello, Fabrice; Mellet, Emmanuel; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie
2015-10-30
Atlases of brain anatomical ROIs are widely used for functional MRI data analysis. Recently, it was proposed that an atlas of ROIs derived from a functional brain parcellation could be advantageous, in particular for understanding how different regions share information. However, functional atlases so far proposed do not account for a crucial aspect of cerebral organization, namely homotopy, i.e. that each region in one hemisphere has a homologue in the other hemisphere. We present AICHA (Atlas of Intrinsic Connectivity of Homotopic Areas), a functional brain ROI atlas based on resting-state fMRI data acquired in 281 individuals. AICHA ROIs cover the whole cerebrum, each having (1) homogeneity of its constituting voxels' intrinsic activity, and (2) a unique homotopic contralateral counterpart with which it has maximal intrinsic connectivity. AICHA was built in 4 steps: (1) estimation of resting-state networks (RSNs) using individual resting-state fMRI independent components, (2) k-means clustering of voxel-wise group level profiles of connectivity, (3) homotopic regional grouping based on maximal inter-hemispheric functional correlation, and (4) ROI labeling. AICHA includes 192 homotopic region pairs (122 gyral, 50 sulcal, and 20 gray nuclei). As an application, we report inter-hemispheric (homotopic and heterotopic) and intra-hemispheric connectivity patterns at different sparsities. ROI functional homogeneity was higher for AICHA than for anatomical ROI atlases, but slightly lower than for another functional ROI atlas not accounting for homotopy. AICHA is ideally suited for intrinsic/effective connectivity analyses, as well as for investigating brain hemispheric specialization. Copyright © 2015 Elsevier B.V. All rights reserved.
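Per voxel pair, the homotopic-connectivity measure underlying step (3) reduces to a Fisher-transformed Pearson correlation between mirrored time series. A minimal sketch on synthetic data (the array shapes and the convention that row i of one array mirrors row i of the other are illustrative assumptions, not AICHA code):

```python
import numpy as np

def homotopic_connectivity(ts_left, ts_right):
    """Pearson correlation between each voxel's time series and its mirrored
    counterpart, Fisher r-to-z transformed. Inputs have shape
    (n_voxels, n_timepoints); row i of ts_right mirrors row i of ts_left."""
    zl = (ts_left - ts_left.mean(1, keepdims=True)) / ts_left.std(1, keepdims=True)
    zr = (ts_right - ts_right.mean(1, keepdims=True)) / ts_right.std(1, keepdims=True)
    r = (zl * zr).mean(axis=1)                           # per-voxel Pearson r
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z

rng = np.random.default_rng(1)
shared = rng.standard_normal((5, 200))                # common bilateral signal
left = shared + 0.3 * rng.standard_normal((5, 200))   # left-hemisphere voxels
right = shared + 0.3 * rng.standard_normal((5, 200))  # mirrored right voxels
z_homotopic = homotopic_connectivity(left, right)     # strongly positive
z_null = homotopic_connectivity(left, rng.standard_normal((5, 200)))  # near zero
```

In practice the mirroring is defined in a symmetric template space; the sketch only shows the correlation-and-transform core of the measure.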
Homotopic solutions for unsteady second grade liquid utilizing non-Fourier double diffusion concept
NASA Astrophysics Data System (ADS)
Sohail, A.; Khan, W. A.; Khan, M.; Shah, S. I. A.
The main purpose of the current work is to investigate the features of unsteady Cattaneo-Christov heat and mass flux models for second grade fluid flow over a stretching surface. The characteristics of the unsteady Cattaneo-Christov heat and mass flux models are incorporated in the energy and concentration equations. These models generalize Fourier's and Fick's laws by utilizing upper-convected time and space derivatives to describe the heat conduction and mass diffusion phenomena. Suitable transformations are used to convert the governing partial differential equations into ordinary differential equations. The resulting problem is solved analytically using the homotopy analysis method (HAM). The effects of the non-dimensional pertinent parameters on the temperature and concentration distributions are discussed using graphs and tables. Results show that the temperature and concentration profiles diminish for augmented values of the thermal and concentration relaxation parameters. Additionally, it is observed that the temperature and concentration profiles are higher for the classical Fourier's and Fick's laws than for the non-Fourier and non-Fick laws.
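For context, a standard form of the Cattaneo-Christov heat flux model referred to above replaces Fourier's law q = -k∇T with a relaxation equation built on the upper-convected derivative (τ_E denotes the thermal relaxation time; setting τ_E = 0 recovers Fourier's law):

```latex
\mathbf{q} + \tau_E \left( \frac{\partial \mathbf{q}}{\partial t}
  + \mathbf{V}\cdot\nabla \mathbf{q}
  - \mathbf{q}\cdot\nabla \mathbf{V}
  + (\nabla\cdot\mathbf{V})\,\mathbf{q} \right) = -k\,\nabla T
```

An analogous relation for the mass flux, with a concentration relaxation time and the diffusion coefficient, generalizes Fick's law; this is the sense in which the abstract calls the models generalizations of Fourier's and Fick's laws.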
NASA Astrophysics Data System (ADS)
Hayat, T.; Ullah, Siraj; Khan, M. Ijaz; Alsaedi, A.; Zaigham Zia, Q. M.
2018-03-01
Here, modeling and computations are presented to introduce the novel concept of Darcy-Forchheimer three-dimensional flow of water-based carbon nanotubes with nonlinear thermal radiation and heat generation/absorption. A bidirectionally stretching surface induces the flow. Darcy's law is replaced by the Forchheimer relation. The Xue model is implemented for the nanoliquid transport mechanism. A nonlinear formulation based upon the conservation laws of mass, momentum and energy is first modeled and then solved by the optimal homotopy analysis technique. Optimal estimates of the auxiliary variables are obtained. The importance of the influential variables for the velocity and thermal fields is interpreted graphically. Moreover, velocity and temperature gradients are discussed and analyzed, and the physical interpretation of the influential variables is examined.
Effect of Cattaneo-Christov heat flux on Jeffrey fluid flow with variable thermal conductivity
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Javed, Mehwish; Imtiaz, Maria; Alsaedi, Ahmed
2018-03-01
This paper presents the study of Jeffrey fluid flow due to a rotating disk with variable thickness. The energy equation is constructed using the Cattaneo-Christov heat flux model with variable thermal conductivity. A system of equations governing the model is obtained by applying the boundary layer approximation. The resulting nonlinear partial differential system is transformed into an ordinary differential system. The homotopy concept leads to the development of convergent solutions. A graphical analysis of the velocities and temperature is made to examine the influence of the different parameters involved. The thermal relaxation time parameter signifies that the temperature for Fourier's heat law is higher than for the Cattaneo-Christov heat flux. An analysis is made of the skin friction coefficient and heat transfer rate. The effects of the Prandtl number on the temperature distribution and heat transfer rate are scrutinized. It is observed that a larger Reynolds number gives a more pronounced temperature distribution.
A Lyapunov Function Based Remedial Action Screening Tool Using Real-Time Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitra, Joydeep; Ben-Idris, Mohammed; Faruque, Omar
This report summarizes the outcome of a research project that comprised the development of a Lyapunov function based remedial action screening tool using real-time data (L-RAS). The L-RAS is an advanced computational tool that is intended to assist system operators in making real-time redispatch decisions to preserve power grid stability. The tool relies on screening contingencies using a homotopy method based on Lyapunov functions to avoid, to the extent possible, the use of time domain simulations. This enables transient stability evaluation at real-time speed without the use of massively parallel computational resources. The project combined the following components. 1. Development of a methodology for contingency screening using a homotopy method based on Lyapunov functions and real-time data. 2. Development of a methodology for recommending remedial actions based on the screening results. 3. Development of a visualization and operator interaction interface. 4. Testing of screening tool, validation of control actions, and demonstration of project outcomes on a representative real system simulated on a Real-Time Digital Simulator (RTDS) cluster. The project was led by Michigan State University (MSU), where the theoretical models including homotopy-based screening, trajectory correction using real-time data, and remedial action were developed and implemented in the form of research-grade software. Los Alamos National Laboratory (LANL) contributed to the development of energy margin sensitivity dynamics, which constituted a part of the remedial action portfolio. Florida State University (FSU) and Southern California Edison (SCE) developed a model of the SCE system that was implemented on FSU's RTDS cluster to simulate real-time data that was streamed over the internet to MSU, where the L-RAS tool was executed and remedial actions were communicated back to FSU to execute stabilizing controls on the simulated system.
LCG Consulting developed the visualization and operator interaction interface, based on specifications provided by MSU. The project was performed from October 2012 to December 2016, at the end of which the L-RAS tool, as described above, was completed and demonstrated. The project resulted in the following innovations and contributions: (a) the L-RAS software prototype, tested on a simulated system, vetted by utility personnel, and potentially ready for wider testing and commercialization; (b) an RTDS-based test bed that can be used for future research in the field; (c) a suite of breakthrough theoretical contributions to the field of power system stability and control; and (d) a new tool for visualization of power system stability margins. While detailed descriptions of the development and implementation of the various project components have been provided in the quarterly reports, this final report provides an overview of the complete project, and is demonstrated using public domain test systems commonly used in the literature. The SCE system, and demonstrations thereon, are not included in this report due to Critical Energy Infrastructure Information (CEII) restrictions.
Homotopic connectivity in drug-naïve, first-episode, early-onset schizophrenia
Li, Hui-Jie; Xu, Yong; Zhang, Ke-Rang; Hoptman, Matthew J.; Zuo, Xi-Nian
2014-01-01
Background: The disconnection hypothesis of schizophrenia has been extensively tested in adults. Recent studies have reported the presence of brain disconnection in younger patients, adding evidence to support the neurodevelopmental hypothesis of schizophrenia. Because of drug confounds in chronic and medicated patients, it has been extremely challenging for researchers to directly investigate abnormalities in the development of connectivity and their role in the pathophysiology of schizophrenia. The present study aimed to examine functional homotopy – a measure of interhemispheric connection – and its relevance to clinical symptoms in first-episode drug-naïve early-onset schizophrenia (EOS) patients. Methods: Resting-state functional magnetic resonance imaging was performed in 26 first-episode drug-naïve EOS patients (age: 14.5 ± 1.94, 13 males) and 25 matched typically developing controls (TDCs) (age: 14.4 ± 2.97, 13 males). We were mainly concerned with the functional connectivity between any pair of symmetric inter-hemispheric voxels (i.e., functional homotopy) measured by voxel-mirrored homotopic connectivity (VMHC). Results: EOS patients exhibited both global and regional VMHC reductions in comparison with TDCs. Reduced VMHC values were observed within the superior temporal cortex and postcentral gyrus. These interhemispheric synchronization deficits were negatively correlated with the negative symptom score of the Positive and Negative Syndrome Scale. Moreover, region-of-interest analyses based on the left and right clusters of the temporal cortex and postcentral gyrus revealed abnormal heterotopic connectivity in EOS patients. Conclusions: Our findings provide novel neurodevelopmental evidence for the disconnection hypothesis of schizophrenia and suggest that these alterations occur early in the course of the disease and are independent of medication status. PMID:25130214
Development of Tokamak Transport Solvers for Stiff Confinement Systems
NASA Astrophysics Data System (ADS)
St. John, H. E.; Lao, L. L.; Murakami, M.; Park, J. M.
2006-10-01
Leading transport models such as GLF23 [1] and MM95 [2] describe turbulent plasma energy, momentum and particle flows. In order to accommodate existing transport codes and associated solution methods, effective diffusivities have to be derived from these turbulent flow models. This can cause significant problems in predicting unique solutions. We have developed a parallel transport code solver, GCNMP, that can accommodate both flow-based and diffusivity-based confinement models by solving the discretized nonlinear equations using modern Newton, trust region, steepest descent and homotopy methods. We present our latest development efforts, including multiple dynamic grids, application of two-level parallel schemes, and operator splitting techniques that allow us to combine flow-based and diffusivity-based models in tokamak simulations. [1] R.E. Waltz, et al., Phys. Plasmas 4, 7 (1997). [2] G. Bateman, et al., Phys. Plasmas 5, 1793 (1998).
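The homotopy ingredient of such Newton-type solvers can be sketched on a scalar toy problem: embed F(x) = 0 in a convex homotopy H(x, t) = t F(x) + (1 - t)(x - x0) and track the root as t goes from 0 to 1, warm-starting Newton at each step. The arctan example below, for which plain Newton from x0 = 5 diverges, illustrates the technique; it is not GCNMP code.

```python
import math

def newton(F, dF, x, iters=8):
    """A few undamped Newton iterations from a warm start."""
    for _ in range(iters):
        x -= F(x) / dF(x)
    return x

def homotopy_solve(F, dF, x0, steps=20):
    """Convex homotopy H(x, t) = t*F(x) + (1-t)*(x - x0): walk the embedding
    parameter t from 0 to 1, solving H(x, t) = 0 at each step with Newton
    warm-started from the previous step's root."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        x = newton(lambda x: t * F(x) + (1 - t) * (x - x0),
                   lambda x: t * dF(x) + (1 - t), x)
    return x

# Plain Newton on F(x) = arctan(x) diverges from x0 = 5;
# the homotopy tracks the root continuously down to x = 0.
F, dF = math.atan, lambda x: 1.0 / (1.0 + x * x)
root = homotopy_solve(F, dF, 5.0)
```

The stiff-transport analogue replaces the scalar F with the discretized residual of the transport equations, where a good continuation path matters far more than it does here.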
Damage identification of supporting structures with a moving sensory system
NASA Astrophysics Data System (ADS)
Zhu, X. Q.; Law, S. S.; Huang, L.; Zhu, S. Y.
2018-02-01
An innovative approach is presented to identify local anomalies in a structural beam bridge using an instrumented vehicle moving across the bridge as a sensory system. Accelerations at both the axle and the vehicle body are measured, from which the vehicle-bridge interaction force on the structure is determined. Local anomalies of the structure are estimated from this interaction force with Newton's iterative method based on the homotopy continuation method. Numerical results with the vehicle moving over simply supported or continuous beams show that the acceleration responses of the vehicle or the bridge structure are less sensitive to local damage than the interaction force between the wheel and the structure. The effects of different movement patterns and moving speeds of the vehicle are investigated, and the effect of measurement noise on the identified results is discussed. A heavier or slower vehicle is shown to be less sensitive to measurement noise, giving more accurate results.
NASA Astrophysics Data System (ADS)
Sayevand, K.; Pichaghchi, K.
2018-04-01
In this paper, we are concerned with the description of singularly perturbed boundary value problems in the scope of fractional calculus. One of the main methods used to solve these problems in classical calculus is the so-called matched asymptotic expansion method. However, this was not achievable via the existing classical definitions of the fractional derivative, because they do not obey the chain rule, which is one of the key elements of the matched asymptotic expansion method. In order to adapt this method to the fractional setting, we employ a relatively new derivative, the so-called local fractional derivative. Using the properties of the local fractional derivative, we extend the matched asymptotic expansion method to the scope of fractional calculus and introduce a reliable new algorithm to develop approximate solutions of singularly perturbed boundary value problems of fractional order. In the new method, the original problem is partitioned into inner and outer solution equations. The reduced equation is solved with suitable boundary conditions, which provide the terminal boundary conditions for the boundary layer correction. The inner solution problem is then solved as a solvable boundary value problem. The width of the boundary layer is approximated using an appropriate resemblance function. Some theoretical results are established and proved. Some illustrative examples are solved and the results are compared with those of the matched asymptotic expansion method and the homotopy analysis method to demonstrate the accuracy and efficiency of the new method. It can be observed that the proposed method approximates the exact solution very well, not only in the boundary layer but also away from it.
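The inner/outer partition described above follows the classical matched-expansion pattern, which a textbook integer-order example makes concrete (this example is illustrative and is not taken from the paper):

```latex
\varepsilon y'' + y' + y = 0,\qquad y(0)=0,\; y(1)=1,\qquad 0<\varepsilon\ll 1.
```

The outer (reduced) equation y' + y = 0 with y(1) = 1 gives the outer solution e^{1-x}; stretching X = x/ε near the boundary layer at x = 0 gives, at leading order, Y'' + Y' = 0, whose solution Y = A(1 - e^{-X}) satisfies Y(0) = 0 and matches the outer limit e when A = e. The composite approximation, uniformly valid inside and outside the layer, is

```latex
y(x) \approx e^{\,1-x} - e^{\,1-x/\varepsilon}.
```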
NASA Astrophysics Data System (ADS)
Nasir, Saleem; Islam, Saeed; Gul, Taza; Shah, Zahir; Khan, Muhammad Altaf; Khan, Waris; Khan, Aurang Zeb; Khan, Saima
2018-05-01
In this article, modeling and computations are presented to introduce the new idea of MHD three-dimensional rotating flow of a nanofluid over a stretching sheet. Single-wall carbon nanotubes (SWCNTs) are utilized as the nano-sized material, while water is used as the base liquid. SWCNTs exhibit unique properties due to their distinctive structure, which confers significant optical and electronic features, exceptional strength and elasticity, and high thermal and chemical stability. The heat exchange phenomena are considered subject to thermal radiation, and the impacts of nanoparticle Brownian motion and thermophoresis are also involved in the present investigation. For the nanofluid transport mechanism, we implemented the Xue model (Xue, Phys B Condens Matter 368:302-307, 2005). The governing nonlinear formulation, based upon the conservation of mass, momentum, thermal energy and nanoparticle concentration, is first modeled and then solved by the homotopy analysis method (HAM). Moreover, graphical results are presented to investigate how the velocity, temperature and nanomaterial concentration distributions are affected by the influential parameters. Numerical data for the skin friction, Nusselt number and Sherwood number are presented for the SWCNTs.
Flow and heat transfer in water based liquid film fluids dispensed with graphene nanoparticles
NASA Astrophysics Data System (ADS)
Zuhra, Samina; Khan, Noor Saeed; Khan, Muhammad Altaf; Islam, Saeed; Khan, Waris; Bonyah, Ebenezer
2018-03-01
The unsteady flow and heat transfer characteristics of electrically conducting, water-based thin liquid film non-Newtonian (Casson and Williamson) nanofluids dispensed with graphene nanoparticles past a stretching sheet are considered in the presence of a transverse magnetic field and a non-uniform heat source/sink. Embedding the graphene nanoparticles effectively amplifies the thermal conductivity of the Casson and Williamson nanofluids. Ordinary differential equations together with the boundary conditions are obtained from the governing equations of the problem through similarity variables, and are solved by the homotopy analysis method (HAM). The solution is illustrated through graphs that show the influences of all the parameters. The convergence of the HAM solution for the linear operators is obtained. A favorable comparison with a previously published research paper is performed to validate the present work. The skin friction coefficient and Nusselt number are presented in tables and graphs, demonstrating that the thin liquid film results from this study are in close agreement with results reported in the literature. Results achieved by HAM and the residual errors are evaluated numerically, given in tables and also depicted graphically, which shows the accuracy of the present work.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods for the computation of fuel-optimal, low-thrust orbit transfers with large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers with small numbers of burns: there seems to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions with large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones; an orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.
The study of the Boltzmann equation of solid-gas two-phase flow with three-dimensional BGK model
NASA Astrophysics Data System (ADS)
Liu, Chang-jiang; Pang, Song; Xu, Qiang; He, Ling; Yang, Shao-peng; Qing, Yun-jie
2018-06-01
The motion of many solid-gas two-phase flows can be described by the Boltzmann equation. In order to simplify the Boltzmann equation, the convective-diffusion term is retained and the collision term is replaced by the three-dimensional Bhatnagar-Gross-Krook (BGK) model. The simplified Boltzmann equation is then solved by the homotopy perturbation method (HPM), and its approximate analytical solution is obtained. Through this analysis, it is proved that the analytical solution satisfies all the constraint conditions and that its form accords with that of the solution obtained by the traditional Chapman-Enskog method, while the solving process of HPM is much simpler and more convenient. This preliminarily demonstrates the effectiveness and efficiency of HPM in solving the Boltzmann equation. The results obtained herein provide a theoretical basis for further study of dynamic models of solid-gas two-phase flows, such as the sturzstrom of high-speed distant landslides caused by microseisms and sand storms caused by strong breezes.
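The mechanics of HPM are easy to see on a toy problem. The sketch below is an illustrative assumption, not the BGK system of the paper: it applies HPM to u' + u^2 = 0, u(0) = 1, whose exact solution is 1/(1+t), and the order-by-order deformation equations reproduce its Taylor series.

```python
# Homotopy perturbation method (HPM) sketch on a toy nonlinear ODE,
# u' + u^2 = 0, u(0) = 1, with exact solution u = 1/(1+t).
# Homotopy: v' + p*v^2 = 0, expanded as v = v0 + p*v1 + p^2*v2 + ...
import sympy as sp

t, p = sp.symbols('t p')
N = 4  # number of series terms

v = [sp.Integer(1)]  # v0 solves v0' = 0 with v0(0) = 1
for k in range(1, N):
    series = sum(v[i] * p**i for i in range(k))
    rhs = -sp.expand(series**2)
    # coefficient of p^(k-1) in -v^2 drives the order-k equation v_k' = rhs_k
    rhs_k = rhs.coeff(p, k - 1)
    v.append(sp.integrate(rhs_k, (t, 0, t)))  # enforces v_k(0) = 0

approx = sum(v)
print(sp.expand(approx))  # 1 - t + t**2 - t**3, the Taylor series of 1/(1+t)
```

Setting p = 1 in the truncated series yields the approximate solution; for this toy problem each HPM order recovers exactly one Taylor term.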
Molecular dynamics simulation of a needle-sphere binary mixture
NASA Astrophysics Data System (ADS)
Raghavan, Karthik
This paper investigates the dynamic behaviour of a hard needle-sphere binary system using a novel numerical technique called the Newton homotopy continuation (NHC) method. This mixture is representative of a polymer melt where both long chain molecules and monomers coexist. Since the intermolecular forces are generated from hard body interactions, the consequence of missed collisions or incorrect collision sequences has a significant bearing on the dynamic properties of the fluid. To overcome this problem, in earlier work NHC was chosen over traditional Newton-Raphson methods to solve the hard body dynamics of a needle fluid in random media composed of overlapping spheres. Furthermore, the simplicity of the interactions and dynamics allows us to focus our research directly on the effects of particle shape and density on the transport behaviour of the mixture. These studies are also compared with earlier works that examined molecular chains in porous media, primarily to understand the differences in molecular transport in bulk versus porous systems.
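Homotopy continuation of the kind underlying NHC can be sketched in a few lines for scalar root-finding. This is a hedged illustration with a made-up test function; the paper's application is to hard-body collision equations, not the equation below.

```python
# Minimal Newton homotopy continuation for 1D root-finding.
# H(x, s) = f(x) - (1 - s)*f(x0): at s = 0, x0 is a root of H; at s = 1, H = f.
# Tracking s from 0 to 1 with Newton corrections follows the root path.
import math

def newton_homotopy(f, df, x0, steps=50, newton_iters=5):
    fx0 = f(x0)
    x = x0
    for k in range(1, steps + 1):
        s = k / steps
        target = (1 - s) * fx0
        for _ in range(newton_iters):  # Newton corrector at this s
            x -= (f(x) - target) / df(x)
    return x

# Example: cos(x) - x = 0, started far from the root
root = newton_homotopy(lambda x: math.cos(x) - x,
                       lambda x: -math.sin(x) - 1, 3.0)
print(round(root, 6))  # 0.739085
```

The continuation deforms an easy problem into the hard one gradually, so each Newton solve starts from an excellent initial guess, which is precisely the robustness advantage over a bare Newton-Raphson iteration.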
OPTICON: Pro-Matlab software for large order controlled structure design
NASA Technical Reports Server (NTRS)
Peterson, Lee D.
1989-01-01
A software package for large order controlled structure design is described and demonstrated. The primary program, called OPTICON, uses both Pro-Matlab M-file routines and selected compiled FORTRAN routines linked into the Pro-Matlab structure. The program accepts structural model information in the form of state-space matrices and performs three basic design functions on the model: (1) open loop analyses; (2) closed loop reduced order controller synthesis; and (3) closed loop stability and performance assessment. The current controller synthesis methods which were implemented in this software are based on the Generalized Linear Quadratic Gaussian theory of Bernstein. In particular, a reduced order Optimal Projection synthesis algorithm based on a homotopy solution method was successfully applied to an experimental truss structure using a 58-state dynamic model. These results are presented and discussed. Current plans to expand the practical size of the design model to several hundred states and the intention to interface Pro-Matlab to a supercomputing environment are discussed.
Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.
2009-01-01
This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A fast and versatile procedure for algebraically generating boundary-conforming computational grids for use with finite-volume Euler flow solvers is presented. A semi-analytic homotopic procedure is used to generate the grids. Grids generated in two-dimensional planes are stacked to produce quasi-three-dimensional grid systems. The body surface and outer boundary are described in terms of surface parameters. An interpolation scheme is used to blend between the body surface and the outer boundary in order to determine the field points. The method, albeit developed for analytically generated body geometries, is equally applicable to other classes of geometries. The method can be used for both internal and external flow configurations, the only constraint being that the body geometries be specified in two-dimensional cross-sections stationed along the longitudinal axis of the configuration. Techniques for controlling various grid parameters, e.g., clustering and orthogonality, are described. Techniques for treating problems arising in algebraic grid generation for geometries with sharp corners are addressed. A set of representative grid systems generated by this method is included. Results of flow computations using these grids are presented to validate the effectiveness of the method.
MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero
NASA Astrophysics Data System (ADS)
Bogner, Christian
2016-06-01
We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.
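MPL itself is a Maple package, but the multiple polylogarithms it manipulates can be probed numerically from Python. The snippet below is an independent sanity check, not MPL code: it verifies Euler's dilogarithm identity Li2(1/2) = pi^2/12 - ln(2)^2/2 with mpmath.

```python
# Numerical check of a classical polylogarithm identity using mpmath
# (not part of MPL; an independent illustration of the objects involved).
import mpmath as mp

lhs = mp.polylog(2, mp.mpf(1) / 2)          # Li_2(1/2)
rhs = mp.pi**2 / 12 - mp.log(2)**2 / 2      # Euler's closed form
print(mp.almosteq(lhs, rhs))  # True
```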
D∞-differential A∞-algebras and spectral sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapin, S V
2002-02-28
In the present paper the construction of a D∞-differential A∞-(co)algebra is introduced and the basic homotopy properties of this construction are studied. The connection between D∞-differential A∞-(co)algebras and spectral sequences is established, which enables us to construct the structure of an A∞-coalgebra on the Milnor coalgebra directly from the differentials of the Adams spectral sequence.
NASA Astrophysics Data System (ADS)
Khan, Aamir; Shah, Rehan Ali; Shuaib, Muhammad; Ali, Amjad
2018-06-01
The effects of magnetic field dependent (MFD) thermosolutal convection and MFD viscosity on the fluid dynamics are investigated between squeezing discs rotating with different velocities. The unsteady constitutive expressions of mass conservation, the modified Navier-Stokes equations, the Maxwell equations and MFD thermosolutal convection are coupled as a system of ordinary differential equations. The corresponding solutions for the transformed radial and azimuthal momentum, as well as for the azimuthal and axial induced magnetic field equations, are determined; the MHD pressure and the torque which the fluid exerts on the upper disc are also derived and discussed in detail. In the case of smooth discs the self-similar equations are solved using the Homotopy Analysis Method (HAM) with appropriate initial guesses and auxiliary parameters to produce an algorithm with accelerated and assured convergence. The validity and accuracy of the HAM results are established by comparison of the HAM solutions with the numerical solver package BVP4c. It is shown that increasing the magnetic Reynolds number decreases the magnetic field distributions, the fluid temperature, and the axial and tangential velocities. The azimuthal and axial components of the magnetic field show opposite behavior as the MFD viscosity increases. Applications of the study include automotive magneto-rheological shock absorbers, novel aircraft landing gear systems, heating and cooling processes, biological sensor systems and biological prosthetics.
NASA Astrophysics Data System (ADS)
Khan, Zeeshan; Khan, Ilyas; Ullah, Murad; Tlili, I.
2018-06-01
In this work, we discuss the unsteady flow of a non-Newtonian fluid with heat source/sink properties in the presence of thermal radiation, moving through a binary mixture embedded in a porous medium. The basic equations of motion, including continuity, momentum, energy and concentration, are simplified and solved analytically by the Homotopy Analysis Method (HAM). The energy and concentration fields are coupled through the Damköhler and Schmidt numbers. By applying a suitable transformation, the coupled nonlinear partial differential equations are converted to coupled ordinary differential equations. The effects of the physical parameters involved in the solutions for the velocity, temperature and concentration profiles are discussed by assigning numerical values, and the results show that these profiles are influenced appreciably by the radiation parameter, Prandtl number, suction/injection parameter, reaction order index, solutal Grashof number and thermal Grashof number. It is observed that the non-Newtonian parameter H leads to an increase in the boundary layer thickness. It is established that the Prandtl number decreases the thermal boundary layer thickness, which helps in maintaining the system temperature of the fluid flow. The temperature profiles are higher for the heat source parameter and lower for the heat sink parameter throughout the boundary layer. From this simulation it is found that an increase in the Schmidt number decreases the concentration boundary layer thickness. Additionally, for the sake of comparison, a numerical method (ND-Solve) and the Adomian Decomposition Method are also applied, and good agreement is found.
Characterization and control of self-motions in redundant manipulators
NASA Technical Reports Server (NTRS)
Burdick, J.; Seraji, Homayoun
1989-01-01
The presence of redundant degrees of freedom in a manipulator structure leads to a physical phenomenon known as a self-motion, which is a continuous motion of the manipulator joints that leaves the end-effector motionless. In the first part of the paper, a global manifold mapping reformulation of manipulator kinematics is reviewed, and the inverse kinematic solution for redundant manipulators is developed in terms of self-motion manifolds. Global characterizations of the self-motion manifolds in terms of their number, geometry, homotopy class, and null space are reviewed using examples. Much previous work in redundant manipulator control has been concerned with the redundancy resolution problem, in which methods are developed to determine, or resolve, the motion of the joints in order to achieve end-effector trajectory control while optimizing additional objective functions. Redundancy resolution problems can be equivalently posed as the control of self-motions. Alternatives for redundancy resolution are briefly discussed.
NASA Astrophysics Data System (ADS)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the Von Kármán plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed, for either a given external uniform pressure Q or a given central deflection. Both are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the framework of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou for the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán plate equations in differential form is just a special case of the HAM for the Von Kármán plate equations in integral form considered in this paper. All of this illustrates the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
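For readers unfamiliar with convergence-control parameters, a minimal sketch of HAM's high-order deformation equations on a toy ODE may help. The problem below (u' + u^2 = 0, u(0) = 1) is an illustrative assumption, not the Von Kármán system; with a single convergence-control parameter set to c0 = -1 the scheme reduces to the classical perturbation series of the exact solution 1/(1+t).

```python
# HAM high-order deformation equations for u' + u^2 = 0, u(0) = 1,
# with linear operator L = d/dt and convergence-control parameter c0:
#   u_k = chi_k * u_{k-1} + c0 * Int_0^t R_k,
#   R_k = u'_{k-1} + sum_{i=0}^{k-1} u_i * u_{k-1-i},  chi_1 = 0, chi_k = 1 (k>1).
import sympy as sp

t = sp.symbols('t')
c0 = -1  # convergence-control parameter; c0 = -1 recovers the Taylor series here

u = [sp.Integer(1)]  # u0 from the initial guess satisfying u(0) = 1
for k in range(1, 4):
    chi = 0 if k == 1 else 1
    Rk = sp.diff(u[k - 1], t) + sum(u[i] * u[k - 1 - i] for i in range(k))
    u.append(chi * u[k - 1] + c0 * sp.integrate(Rk, (t, 0, t)))

print(sp.expand(sum(u)))  # 1 - t + t**2 - t**3
```

Choosing other values of c0 reshapes the partial sums and can enlarge the region of convergence, which is exactly the freedom the paper exploits through its parameters c1 and c2.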
Differentials on graph complexes II: hairy graphs
NASA Astrophysics Data System (ADS)
Khoroshkin, Anton; Willwacher, Thomas; Živković, Marko
2017-10-01
We study the cohomology of the hairy graph complexes which compute the rational homotopy of embedding spaces, generalizing the Vassiliev invariants of knot theory. We provide spectral sequences converging to zero whose first pages contain the hairy graph cohomology. Our results yield a way to construct many nonzero hairy graph cohomology classes out of (known) non-hairy classes by studying the cancellations in those sequences. This provides a first glimpse at the tentative global structure of the hairy graph cohomology.
NASA Astrophysics Data System (ADS)
Howell, Nicholas L.
This thesis introduces two notions of motive associated to a log scheme. We introduce a category of log motives a la Voevodsky, and prove that the embedding of Voevodsky motives is an equivalence, in particular proving that any homotopy-invariant cohomology theory of schemes extends uniquely to log schemes. In the case of a log smooth degeneration, we give an explicit construction of the motivic Albanese of the degeneration, and show that the Hodge realization of this construction gives the Albanese of the limit Hodge structure.
Efficient Reconstruction of Block-Sparse Signals
2011-01-26
…while solving (1) for all sparsity levels of X. The rest of this paper is organized as follows. In Section 2, we extend the homotopy technique in [5…constraints can be used. Let Coff represent a subset of the positive integers less than or equal to N such that k ∈ Coff implies x(k) = 0. Let
Morse homotopy and Chern-Simons perturbation theory
NASA Astrophysics Data System (ADS)
Fukaya, Kenji
1996-11-01
We define an invariant of a three-manifold equipped with a flat bundle with vanishing homology. The construction is based on Morse theory using several Morse functions simultaneously and is regarded as a higher loop analogue of various product operations in algebraic topology. There is a heuristic argument that this invariant is related to the perturbative Chern-Simons gauge theory of Axelrod-Singer, etc. There is also a theorem which relates the construction to open string theory on the cotangent bundle.
NASA Technical Reports Server (NTRS)
Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.
1993-01-01
In earlier papers, Doedel and the authors have developed a numerical method and derived error estimates for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R^n. The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include computation of homoclinic orbits in a singular perturbation problem and in a turbulent fluid boundary layer in the wall region problem.
NASA Astrophysics Data System (ADS)
Madaki, A. G.; Roslan, R.; Kandasamy, R.; Chowdhury, M. S. H.
2017-04-01
In this paper, the effects of Brownian motion, thermophoresis, chemical reaction, heat generation, magnetohydrodynamics and thermal radiation have been included in a model of nanofluid flow and heat transfer over a moving surface with variable thickness. A similarity transformation is used to transform the governing boundary layer equations into ordinary differential equations (ODEs). Both the optimal homotopy asymptotic method (OHAM) and the fourth-order Runge-Kutta method with shooting technique are employed to solve the resulting ODEs. The effects of different values of the pertinent parameters on the velocity, temperature and concentration profiles have been studied, with details given in tables and graphs respectively. A comparison with a previous study is made, where excellent agreement is achieved. The results demonstrate that an increase in the radiation parameter N increases both the temperature and the thermal boundary layer thickness. The nanoparticle concentration profiles increase under the influence of a generative chemical reaction (γ < 0), while they decrease under a destructive chemical reaction (γ > 0).
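The shooting side of such computations is straightforward to sketch. The boundary value problem below (u'' = -u with u(0) = 0, u(pi/2) = 1, exact solution u = sin t) is an illustrative stand-in for the nanofluid system, solved by RK4 integration plus a secant iteration on the unknown initial slope.

```python
# RK4 shooting sketch for the toy BVP u'' = -u, u(0)=0, u(pi/2)=1.
import math

def rk4_final(slope, n=200):
    """Integrate u'' = -u from t=0 to pi/2 with RK4; return u(pi/2)."""
    h = (math.pi / 2) / n
    u, v = 0.0, slope  # v = u'
    f = lambda u, v: (v, -u)
    for _ in range(n):
        k1 = f(u, v)
        k2 = f(u + h/2*k1[0], v + h/2*k1[1])
        k3 = f(u + h/2*k2[0], v + h/2*k2[1])
        k4 = f(u + h*k3[0], v + h*k3[1])
        u += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return u

# Secant iteration on the shooting residual r(g) = u(pi/2; g) - 1
g0, g1 = 0.5, 2.0
r0, r1 = rk4_final(g0) - 1.0, rk4_final(g1) - 1.0
for _ in range(20):
    if abs(r1) < 1e-12 or r1 == r0:
        break
    g0, g1 = g1, g1 - r1 * (g1 - g0) / (r1 - r0)
    r0, r1 = r1, rk4_final(g1) - 1.0
print(round(g1, 6))  # 1.0, i.e. u'(0) = 1 as in u = sin t
```

In practice the same loop is run on the coupled velocity/temperature/concentration system, shooting on the unknown wall gradients instead of a single slope.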
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
Super-Lie n-algebra extensions, higher WZW models and super-p-branes with tensor multiplet fields
NASA Astrophysics Data System (ADS)
Fiorenza, Domenico; Sati, Hisham; Schreiber, Urs
2015-12-01
We formalize higher-dimensional and higher gauge WZW-type sigma-model local prequantum field theory, and discuss its rationalized/perturbative description in (super-)Lie n-algebra homotopy theory (the true home of the "FDA"-language used in the supergravity literature). We show generally how the intersection laws for such higher WZW-type σ-model branes (open brane ending on background brane) are encoded precisely in (super-)L∞-extension theory and how the resulting "extended (super-)space-times" formalize spacetimes containing σ-model brane condensates. As an application we prove in Lie n-algebra homotopy theory that the complete super-p-brane spectrum of superstring/M-theory is realized this way, including the pure σ-model branes (the "old brane scan") but also the branes with tensor multiplet worldvolume fields, notably the D-branes and the M5-brane. For instance the degree-0 piece of the higher symmetry algebra of 11-dimensional (11D) spacetime with an M2-brane condensate turns out to be the "M-theory super-Lie algebra". We also observe that in this formulation there is a simple formal proof of the fact that type IIA spacetime with a D0-brane condensate is the 11D sugra/M-theory spacetime, and of (prequantum) S-duality for type IIB string theory. Finally we give the non-perturbative description of all this by higher WZW-type σ-models on higher super-orbispaces with higher WZW terms in stacky differential cohomology.
NASA Astrophysics Data System (ADS)
Myrheim, J.
Contents:
1 Introduction: 1.1 The concept of particle statistics; 1.2 Statistical mechanics and the many-body problem; 1.3 Experimental physics in two dimensions; 1.4 The algebraic approach: Heisenberg quantization; 1.5 More general quantizations.
2 The configuration space: 2.1 The Euclidean relative space for two particles; 2.2 Dimensions d=1,2,3; 2.3 Homotopy; 2.4 The braid group.
3 Schroedinger quantization in one dimension.
4 Heisenberg quantization in one dimension: 4.1 The coordinate representation.
5 Schroedinger quantization in dimension d ≥ 2: 5.1 Scalar wave functions; 5.2 Homotopy; 5.3 Interchange phases; 5.4 The statistics vector potential; 5.5 The N-particle case; 5.6 Chern-Simons theory.
6 The Feynman path integral for anyons: 6.1 Eigenstates for position and momentum; 6.2 The path integral; 6.3 Conjugation classes in S_N; 6.4 The non-interacting case; 6.5 Duality of Feynman and Schroedinger quantization.
7 The harmonic oscillator: 7.1 The two-dimensional harmonic oscillator; 7.2 Two anyons in a harmonic oscillator potential; 7.3 More than two anyons; 7.4 The three-anyon problem.
8 The anyon gas: 8.1 The cluster and virial expansions; 8.2 First and second order perturbative results; 8.3 Regularization by periodic boundary conditions; 8.4 Regularization by a harmonic oscillator potential; 8.5 Bosons and fermions; 8.6 Two anyons; 8.7 Three anyons; 8.8 The Monte Carlo method; 8.9 The path integral representation of the coefficients G_P; 8.10 Exact and approximate polynomials; 8.11 The fourth virial coefficient of anyons; 8.12 Two polynomial theorems.
9 Charged particles in a constant magnetic field: 9.1 One particle in a magnetic field; 9.2 Two anyons in a magnetic field; 9.3 The anyon gas in a magnetic field.
10 Interchange phases and geometric phases: 10.1 Introduction to geometric phases; 10.2 One particle in a magnetic field; 10.3 Two particles in a magnetic field; 10.4 Interchange of two anyons in potential wells; 10.5 Laughlin's theory of the fractional quantum Hall effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois
2011-01-01
We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases show that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem, and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
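The Jacobian-free trick at the heart of JFNK is compact enough to sketch: the Krylov solver only ever needs Jacobian-vector products, which a finite difference of the residual supplies. The residual F below is a toy stand-in, not CISM's momentum equation.

```python
# Minimal JFNK sketch: matrix-free J(u) v via (F(u + eps*v) - F(u)) / eps,
# fed to scipy's GMRES through a LinearOperator. Toy 2x2 residual.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):  # toy nonlinear residual; F(u) = 0 at u = (1, 2)
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

def newton_krylov(u, iters=20, eps=1e-7):
    for _ in range(iters):
        r = F(u)
        if np.linalg.norm(r) < 1e-12:
            break
        # Jacobian never assembled: only residual evaluations are needed
        Jv = LinearOperator((2, 2),
                            matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
        du, _ = gmres(Jv, -r, atol=1e-12)
        u = u + du
    return u

print(np.round(newton_krylov(np.array([1.0, 1.0])), 6))  # [1. 2.]
```

In a production solver the LinearOperator wraps the full discretized residual, and a preconditioner (such as CISM's Picard operator) is passed to GMRES to keep the Krylov iteration count low.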
Cheng, Shaobo; Li, Jun; Han, Myung-Geun; ...
2017-04-05
Here, we report structural transformation of sixfold vortex domains into two-, four-, and eightfold vortices via a different type of topological defect in hexagonal manganites. Combining high-resolution electron microscopy and Landau-theory-based numerical simulations, we also investigate the remarkable atomic arrangement and the intertwined relationship between the vortex structures and the topological defects. The roles of their displacement field, formation temperature, and nucleation sites are revealed. All conceivable vortices in the system are topologically classified using homotopy group theory, and their origins are identified.
CMCpy: Genetic Code-Message Coevolution Models in Python
Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
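The leading-eigenpair computation at the core of such quasispecies solvers can be illustrated on a minimal two-genotype example. The fitness and mutation numbers below are assumed toy values, and the code is a hedged sketch, not CMCpy's actual API.

```python
# Leading eigenpair of a toy quasispecies matrix W = Q * diag(f)
# (mutation matrix Q, fitness vector f) via power iteration.
import numpy as np

f = np.array([2.0, 1.0])                    # genotype fitnesses (toy values)
mu = 0.1                                    # per-replication mutation rate
Q = np.array([[1 - mu, mu], [mu, 1 - mu]])  # column-stochastic mutation matrix
W = Q * f                                   # W[i, j] = Q[i, j] * f[j]

x = np.ones(2) / 2                          # initial genotype frequencies
for _ in range(200):                        # power iteration
    x = W @ x
    x /= x.sum()                            # renormalize frequencies

growth = (W @ x).sum()                      # mean fitness = leading eigenvalue
print(round(growth, 4))  # 1.8217
```

The normalized fixed point x is the quasispecies distribution, and the leading eigenvalue is the population's asymptotic growth rate; CMCpy's multiple solvers compute the same eigenpair for much larger coupled systems.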
NASA Astrophysics Data System (ADS)
Khan, Sami Ullah; Shehzad, Sabir Ali; Rauf, Amar; Ali, Nasir
2018-03-01
The aim of this article is to highlight unsteady mixed convective couple stress nanoliquid flow past a stretching surface. The flow is generated by periodic oscillations of the sheet. An appropriate set of dimensionless variables is used to reduce the number of independent variables in the governing equations arising from the mathematical modeling. An analytical solution has been computed by employing the homotopy method. The outcomes for various parameters, such as the couple stress parameter, the ratio of angular velocity to stretching rate, the thermophoresis parameter, Hartmann number, Prandtl number, heat source/sink parameter and Schmidt number, are described graphically and in tabular form. It is observed that the velocity profile increases with increasing mixed convection parameter and concentration buoyancy parameter. The temperature is enhanced for larger values of the Hartmann number and the Brownian motion parameter. The concentration profile increases with increasing thermophoresis parameter. Results show that the wall shear stress increases with increasing couple stress parameter and ratio of oscillating frequency to stretching rate.
Coherent Structure Detection using Persistent Homology and other Topological Tools
NASA Astrophysics Data System (ADS)
Smith, Spencer; Roberts, Eric; Sindi, Suzanne; Mitchell, Kevin
2017-11-01
For non-autonomous, aperiodic fluid flows, coherent structures help organize the dynamics, much as invariant manifolds and periodic orbits do for autonomous or periodic systems. The prevalence of such flows in nature and industry has motivated many successful techniques for defining and detecting coherent structures. However, often these approaches require very fine trajectory data to reconstruct velocity fields and compute Cauchy-Green-tensor-related quantities. We use topological techniques to help detect coherent trajectory sets in relatively sparse 2D advection problems. More specifically, we have developed a homotopy-based algorithm, the ensemble-based topological entropy calculation (E-tec), which assigns to each edge in an initial triangulation of advected points a topologically forced lower bound on its future stretching rate. The triangulation and its weighted edges allow us to analyze flows using persistent homology. This topological data analysis tool detects clusters and loops in the triangulation that are robust in the presence of noise and in this case correspond to coherent trajectory sets.
Time-optimal spinup maneuvers of flexible spacecraft
NASA Technical Reports Server (NTRS)
Singh, G.; Kabamba, P. T.; Mcclamroch, N. H.
1990-01-01
Attitude controllers for spacecraft have been based on the assumption that the bodies being controlled are rigid. Future spacecraft, however, may be quite flexible. Many applications require spinning up/down these vehicles. In this work the minimum time control of these maneuvers is considered. The time-optimal control is shown to possess an important symmetry property. Taking advantage of this property, the necessary and sufficient conditions for optimality are transformed into a system of nonlinear algebraic equations in the control switching times during one half of the maneuver, the maneuver time, and the costates at the mid-maneuver time. These equations can be solved using a homotopy approach. Control spillover measures are introduced and upper bounds on these measures are obtained. For a special case these upper bounds can be expressed in closed form for an infinite dimensional evaluation model. Rotational stiffening effects are ignored in the optimal control analysis. Based on a heuristic argument a simple condition is given which justifies the omission of these nonlinear effects. This condition is validated by numerical simulation.
MHD biconvective flow of Powell Eyring nanofluid over stretched surface
NASA Astrophysics Data System (ADS)
Naseem, Faiza; Shafiq, Anum; Zhao, Lifeng; Naseem, Anum
2017-06-01
The present work is focused on behavioral characteristics of gyrotactic microorganisms to describe their role in heat and mass transfer in the presence of magnetohydrodynamic (MHD) forces in Powell-Eyring nanofluids. Implications concerning stretching sheet with respect to velocity, temperature, nanoparticle concentration and motile microorganism density were explored to highlight influential parameters. Aim of utilizing microorganisms was primarily to stabilize the nanoparticle suspension due to bioconvection generated by the combined effects of buoyancy forces and magnetic field. Influence of Newtonian heating was also analyzed by taking into account thermophoretic mechanism and Brownian motion effects to obtain series solutions mediated by the homotopy analysis method (HAM). The mathematical model captured the boundary layer regime, involving nonlinear partial differential equations converted into ordinary differential equations. To depict nanofluid flow characteristics, pertinent parameters namely bioconvection Lewis number Lb, traditional Lewis number Le, bioconvection Péclet number Pe, buoyancy ratio parameter Nr, bioconvection Rayleigh number Rb, thermophoresis parameter Nt, Hartmann number M, Grashof number Gr, and Eckert number Ec were computed and analyzed. Results revealed evidence of hydromagnetic bioconvection for microorganisms, which was represented by graphs and tables. Our findings further show a significant effect of Newtonian heating over a stretching plate by examining the coefficient values of skin friction, local Nusselt number and the local density number. Comparison was made between Newtonian fluid and Powell-Eyring fluid on velocity field and temperature field. Results are compared with contemporary studies and our findings are in excellent agreement with them.
NASA Astrophysics Data System (ADS)
Khan, M.; Irfan, M.; Khan, W. A.
2017-12-01
Nanoliquids retain remarkable features that have fascinated various researchers owing to their utilization in nanoscience and nanotechnology. We will present a mathematical relation for the 3D forced convective heat and mass transfer mechanism of a Carreau nanoliquid over a bidirectional stretched surface. Additionally, the features of heat source/sink and nonlinear thermal radiation are considered for the 3D Carreau nanoliquid. The governing nonlinear PDEs are established and altered into a set of nonlinear ODEs by utilizing a suitable conversion. A numerical approach, namely the bvp4c, is adopted to resolve the resultant equations. The achieved outcomes are plotted and discussed in detail for the physical parameters. It is realized that increased values of the Brownian motion parameter Nb enhance the temperature of the Carreau nanoliquid, while the opposite behavior is noticed for the concentration of the Carreau nanoliquid. Moreover, it is also noted that the influence of the heat source δ > 0 is relatively antithetic to that of the heat sink δ < 0 parameter, whereas an analogous impact is identified for the thermal Biot number γ on temperature and the concentration Biot number γ1 on concentration of the Carreau nanoliquid for shear thinning/thickening liquids. Additionally, an assessment between the analytical technique, namely the homotopy analysis method (HAM), and the numerical scheme bvp4c is presented graphically, as well as in tabular form. These comparisons show excellent agreement.
Research in robust control for hypersonic aircraft
NASA Technical Reports Server (NTRS)
Calise, A. J.
1994-01-01
The research during the third reporting period focused on fixed order robust control design for hypersonic vehicles. A new technique was developed to synthesize fixed order H(sub infinity) controllers. A controller canonical form is imposed on the compensator structure and a homotopy algorithm is employed to perform the controller design. Various reduced order controllers are designed for a simplified version of the hypersonic vehicle model used in our previous studies to demonstrate the capabilities of the code. However, further work is needed to investigate the issue of numerical ill-conditioning for large order systems and to make the numerical approach more reliable.
Deforestation of Peano continua and minimal deformation retracts
Conner, G.; Meilstrup, M.
2012-01-01
Every Peano continuum has a strong deformation retract to a deforested continuum, that is, one with no strongly contractible subsets attached at a single point. In a deforested continuum, each point with a one-dimensional neighborhood is either fixed by every self-homotopy of the space, or has a neighborhood which is a locally finite graph. A minimal deformation retract of a continuum (if it exists) is called its core. Every one-dimensional Peano continuum has a unique core, which can be obtained by deforestation. We give examples of planar Peano continua that contain no core but are deforested. PMID:23471120
The Stack of Yang-Mills Fields on Lorentzian Manifolds
NASA Astrophysics Data System (ADS)
Benini, Marco; Schenkel, Alexander; Schreiber, Urs
2018-03-01
We provide an abstract definition and an explicit construction of the stack of non-Abelian Yang-Mills fields on globally hyperbolic Lorentzian manifolds. We also formulate a stacky version of the Yang-Mills Cauchy problem and show that its well-posedness is equivalent to a whole family of parametrized PDE problems. Our work is based on the homotopy theoretical approach to stacks proposed in Hollander (Isr. J. Math. 163:93-124, 2008), which we shall extend by further constructions that are relevant for our purposes. In particular, we will clarify the concretification of mapping stacks to classifying stacks such as BG_con.
Fiber-connected, indefinite Morse 2-functions on connected n-manifolds
Gay, David T.; Kirby, Robion C.
2011-01-01
We discuss generic smooth maps from smooth manifolds to smooth surfaces, which we call “Morse 2-functions,” and homotopies between such maps. The two central issues are to keep the fibers connected, in which case the Morse 2-function is “fiber-connected,” and to avoid local extrema over one-dimensional submanifolds of the range, in which case the Morse 2-function is “indefinite.” This is foundational work for the long-range goal of defining smooth invariants from Morse 2-functions using tools analogous to classical Morse homology and Cerf theory. PMID:21518894
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillioz, M.; von Manteuffel, A.; Schwaller, P.
We study skyrmions in the littlest Higgs model and discuss their possible role as dark matter candidates. Stable massive skyrmions can exist in the littlest Higgs model also in absence of an exact parity symmetry, since they carry a conserved topological charge due to the non-trivial third homotopy group of the SU(5)/SO(5) coset. We find a spherically symmetric skyrmion solution in this coset. The effects of gauge fields on the skyrmion solutions are analyzed and found to lead to an upper bound on the skyrmion mass. The relic abundance is in agreement with the observed dark matter density for reasonable parameter choices.
NASA Astrophysics Data System (ADS)
Alshomrani, Ali Saleh; Gul, Taza
2017-11-01
This study is related with the analysis of spray distribution considering a nanofluid thin layer over the slippery and stretching surface of a cylinder with thermal radiation. The distribution of the spray rate is designated as a function of the nanolayer thickness. The applied temperature used during the spray phenomenon has been assumed as a reference temperature with the addition of the viscous dissipation term. The diverse behavior of the thermal radiation with magnetic and chemical reaction effects has been carefully observed, which causes variations in the spray distribution and heat transmission. Water-based nanofluids such as Al2O3-H2O and Cu-H2O have been examined under momentum and thermal slip boundary conditions. The basic equations have been transformed into a set of nonlinear equations by using suitable transformation variables. The approximate results of the problem have been achieved by using the optimal approach of the Homotopy Analysis Method (HAM). We demonstrate our results with the help of the numerical (ND-Solve) method. In addition, we found a close agreement between the two methods, which is confirmed through graphs and tables. The rate of the spray pattern under the applied pressure term has also been obtained. The maximum cooling performance has been obtained by using Cu-water with small values of the magnetic parameter and alumina with large values of the magnetic parameter. The outcomes of the Cu-water and Al2O3-H2O nanofluids have been compared with the published results in the literature. The impact of physical quantities like the skin friction coefficient and the local Nusselt number has also been observed and compared with published work. The effects of the momentum slip, thermal slip, thermal radiation, magnetic, and heat generation/absorption parameters on the spray rate have been calculated and discussed.
Multiple steady states in atmospheric chemistry
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
1993-01-01
The equations describing the distributions and concentrations of trace species are nonlinear and may thus possess more than one solution. This paper develops methods for searching for multiple physical solutions to chemical continuity equations and applies these to subsets of equations describing tropospheric chemistry. The calculations are carried out with a box model and use two basic strategies. The first strategy is a 'search' method. This involves fixing model parameters at specified values, choosing a wide range of initial guesses at a solution, and using a Newton-Raphson technique to determine if different initial points converge to different solutions. The second strategy involves a set of techniques known as homotopy methods. These do not require an initial guess, are globally convergent, and are guaranteed, in principle, to find all solutions of the continuity equations. The first method is efficient but essentially 'hit or miss' in the sense that it cannot guarantee that all solutions which may exist will be found. The second method is computationally burdensome but can, in principle, determine all the solutions of a photochemical system. Multiple solutions have been found for models that contain a basic complement of photochemical reactions involving O(x), HO(x), NO(x), and CH4. In the present calculations, transitions occur between stable branches of a multiple solution set as a control parameter is varied. These transitions are manifestations of hysteresis phenomena in the photochemical system and may be triggered by increasing the NO flux or decreasing the CH4 flux from current mean tropospheric levels.
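The first ('search') strategy can be sketched as multi-start root finding. The two-equation system below is a hypothetical stand-in for the chemical continuity equations, chosen only because it possesses several physical roots:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy two-species steady-state system with multiple solutions,
# standing in for the tropospheric continuity equations.
def residual(c):
    x, y = c
    return [x**2 + y**2 - 4.0,   # production/loss balance 1
            x * y - 1.0]         # production/loss balance 2

# Newton-Raphson (via fsolve) from a wide spread of initial guesses;
# distinct converged points are collected as distinct steady states.
roots = []
for guess in np.random.default_rng(1).uniform(-3.0, 3.0, size=(50, 2)):
    sol, info, ok, _ = fsolve(residual, guess, full_output=True)
    if ok == 1 and not any(np.allclose(sol, r, atol=1e-6) for r in roots):
        roots.append(sol)
```

As the abstract notes, this is hit-or-miss: nothing guarantees the 50 starting points reach every solution branch, which is what motivates the globally convergent homotopy alternative.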
Robust doubly charged nodal lines and nodal surfaces in centrosymmetric systems
NASA Astrophysics Data System (ADS)
Bzdušek, Tomáš; Sigrist, Manfred
2017-10-01
Weyl points in three spatial dimensions are characterized by a Z-valued charge—the Chern number—which makes them stable against a wide range of perturbations. A set of Weyl points can mutually annihilate only if their net charge vanishes, a property we refer to as robustness. While nodal loops are usually not robust in this sense, it has recently been shown using homotopy arguments that in the centrosymmetric extension of the AI symmetry class they nevertheless develop a Z2 charge analogous to the Chern number. Nodal loops carrying a nontrivial value of this Z2 charge are robust, i.e., they can be gapped out only by a pairwise annihilation and not on their own. As this is an additional charge independent of the Berry π-phase flowing along the band degeneracy, such nodal loops are, in fact, doubly charged. In this manuscript, we generalize the homotopy discussion to the centrosymmetric extensions of all Altland-Zirnbauer classes. We develop a tailored mathematical framework dubbed the AZ+I classification and show that in three spatial dimensions such robust and multiply charged nodes appear in four of such centrosymmetric extensions, namely, AZ+I classes CI and AI lead to doubly charged nodal lines, while D and BDI support doubly charged nodal surfaces. We remark that no further crystalline symmetries apart from the spatial inversion are necessary for their stability. We provide a description of the corresponding topological charges, and develop simple tight-binding models of various semimetallic and superconducting phases that exhibit these nodes. We also indicate how the concept of robust and multiply charged nodes generalizes to other spatial dimensions.
Simplest bifurcation diagrams for monotone families of vector fields on a torus
NASA Astrophysics Data System (ADS)
Baesens, C.; MacKay, R. S.
2018-06-01
In part 1, we prove that the bifurcation diagram for a monotone two-parameter family of vector fields on a torus has to be at least as complicated as the conjectured simplest one proposed in Baesens et al (1991 Physica D 49 387–475). To achieve this, we define ‘simplest’ by sequentially minimising the numbers of equilibria, Bogdanov–Takens points, closed curves of centre and of neutral saddle, intersections of curves of centre and neutral saddle, Reeb components, other invariant annuli, arcs of rotational homoclinic bifurcation of horizontal homotopy type, necklace points, contractible periodic orbits, points of neutral horizontal homoclinic bifurcation and half-plane fan points. We obtain two types of simplest case, including that initially proposed. In part 2, we analyse the bifurcation diagram for an explicit monotone family of vector fields on a torus and prove that it has at most two equilibria, precisely four Bogdanov–Takens points, no closed curves of centre nor closed curves of neutral saddle, at most two Reeb components, precisely four arcs of rotational homoclinic connection of ‘horizontal’ homotopy type, eight horizontal saddle-node loop points, two necklace points, four points of neutral horizontal homoclinic connection, and two half-plane fan points, and there is no simultaneous existence of centre and neutral saddle, nor contractible homoclinic connection to a neutral saddle. Furthermore, we prove that all saddle-nodes, Bogdanov–Takens points, non-neutral and neutral horizontal homoclinic bifurcations are non-degenerate and the Hopf condition is satisfied for all centres. We also find it has four points of degenerate Hopf bifurcation. It thus provides an example of a family satisfying all the assumptions of part 1 except the one of at most one contractible periodic orbit.
Mehala, N.; Rajendran, L.; Meena, V.
2017-02-01
A mathematical model developed by Abdekhodaie and Wu (J Membr Sci 335:21-31, 2009), which describes a dynamic process involving an enzymatic reaction and diffusion of reactants and product inside a glucose-sensitive composite membrane, has been discussed. This theoretical model comprises a system of non-linear non-steady state reaction-diffusion equations. These equations have been solved using a new approach of the homotopy perturbation method, and analytical solutions pertaining to the concentrations of glucose, oxygen, and gluconic acid are derived. These analytical results are compared with numerical results and with limiting-case results for steady-state conditions, and good agreement is observed. The influence of various kinetic parameters involved in the model has been presented graphically. Theoretical evaluation of the kinetic parameters like the maximal reaction velocity (Vmax) and the Michaelis-Menten constants for glucose and oxygen (Kg and Kox) is also reported. This predicted model is very useful for designing glucose-responsive composite membranes for closed-loop insulin delivery.
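The homotopy perturbation idea can be illustrated on a textbook ODE (not the membrane model itself): for u' + u² = 0, u(0) = 1, the embedding H(v, p) = v' + p v² = 0 with v = Σ pᵏ vₖ generates correction terms order by order in p, and setting p = 1 recovers the series of the exact solution 1/(1+t):

```python
import sympy as sp

t, p = sp.symbols('t p')
N = 4  # number of correction terms

# Order p^0: v0' = 0 with v0(0) = 1, so v0 = 1.
# Order p^k (k >= 1): v_k' = -[v^2]_{k-1}, v_k(0) = 0,
# where [.]_{k-1} extracts the coefficient of p^(k-1).
v = [sp.Integer(1)]
for k in range(1, N + 1):
    partial = sum(p**i * v[i] for i in range(k))
    rhs = -sp.expand(partial**2).coeff(p, k - 1)
    v.append(sp.integrate(rhs, (t, 0, t)))

approx = sum(v)       # HPM approximation at p = 1
exact = 1 / (1 + t)   # closed-form solution for comparison
```

The computed terms are 1, -t, t², -t³, t⁴, matching the Taylor expansion of 1/(1+t), which is the pattern the paper exploits on its (much larger) reaction-diffusion system.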
NASA Technical Reports Server (NTRS)
Norstrud, H.
1973-01-01
The analytical solution to the transonic small perturbation equation which describes steady compressible flow past finite wings at subsonic speeds can be expressed as a nonlinear integral equation with the perturbation velocity potential as the unknown function. This known formulation is substituted by a system of nonlinear algebraic equations to which various methods are applicable for its solution. Due to the presence of mathematical discontinuities in the flow solutions, however, a main computational difficulty was to ensure uniqueness of the solutions when local velocities on the wing exceeded the speed of sound. For continuous solutions this was achieved by embedding the algebraic system in a one-parameter operator homotopy in order to apply the method of parametric differentiation. The solution to the initial system of equations then appears as a solution to a Cauchy problem where the initial condition is related to the accompanying incompressible flow solution. In using this technique, however, continuous dependence of the solution development on the initial data is lost when the solution reaches the minimum bifurcation point. A steepest descent iteration technique was therefore added to the computational scheme for the calculation of discontinuous flow solutions. Results for purely subsonic flows and supersonic flows with and without compression shocks are given and compared with other available theoretical solutions.
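The parametric-differentiation idea above can be sketched on a scalar model problem; the cubic f below is a standard test equation, not the transonic system. Embedding the root-finding problem in a Newton-style homotopy H(x, t) = f(x) - (1 - t) f(x0) and differentiating H = 0 in t gives Davidenko's ODE dx/dt = -f(x0)/f'(x), a Cauchy problem starting from the trivial solution x0:

```python
# Newton homotopy: at t = 0 the solution is the starting guess x0;
# at t = 1 it solves f(x) = 0. March Davidenko's ODE with Euler
# predictor steps plus one Newton corrector step per step.
f = lambda x: x**3 - 2.0 * x - 5.0   # model problem (root ~ 2.09455)
df = lambda x: 3.0 * x**2 - 2.0

x, x0 = 1.0, 1.0
steps = 100
for k in range(steps):
    t = (k + 1) / steps
    x += -f(x0) / df(x) * (1.0 / steps)      # predictor (Davidenko ODE)
    x -= (f(x) - (1.0 - t) * f(x0)) / df(x)  # one Newton corrector step
```

The loss of continuous dependence at a bifurcation point corresponds to df(x) vanishing along the path, which is exactly where this simple scheme would fail and a steepest-descent-type fallback becomes necessary.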
NASA Astrophysics Data System (ADS)
Ur Rehman, Fiaz; Nadeem, Sohail; Ur Rehman, Hafeez; Ul Haq, Rizwan
2018-03-01
In the present paper a theoretical investigation is performed to analyze heat and mass transport enhancement of water-based nanofluid for three-dimensional (3D) MHD stagnation-point flow caused by an exponentially stretched surface. Water is considered as the base fluid. Three types of nanoparticles, namely CuO (copper oxide), Fe3O4 (magnetite), and Al2O3 (alumina), are considered along with water. Invoking the boundary layer approximation and a suitable similarity transformation, the three-dimensional nonlinear equations describing the current problem are transmuted into nonlinear, non-homogeneous ordinary differential equations. We solved the final equations by applying the homotopy analysis technique. The influence of the governing parameters on the temperature field and velocity profiles is explained in detail. Graphical results for the parameters of each nanofluid are presented separately. It is worth mentioning that skin friction along the x- and y-directions is maximum for the copper oxide-water nanofluid and minimum for the alumina-water nanofluid. The local Nusselt number is maximum for the copper oxide-water nanofluid and minimum for the magnetite-water nanofluid.
Sui, Jize; Zhao, Peng; Cheng, Zhengdong; Zheng, Liancun; Zhang, Xinxin
2017-02-01
The rheological and heat-conduction constitutive models of micropolar fluids (MFs), which are important non-Newtonian fluids, have been, until now, characterized by simple linear expressions, and as a consequence, the non-Newtonian performance of such fluids could not be effectively captured. Here, we establish novel nonlinear constitutive models of a micropolar fluid and apply them to boundary layer flow and heat transfer problems. The nonlinear power law function of angular velocity is represented in the new models by employing generalized "n-diffusion theory," which has successfully described the characteristics of non-Newtonian fluids, such as shear-thinning and shear-thickening fluids. These novel models may offer a new approach to the theoretical understanding of shear-thinning behavior and anomalous heat transfer caused by the collective micro-rotation effects in an MF with shear flow according to recent experiments. The nonlinear similarity equations with a power law form are derived and the approximate analytical solutions are obtained by the homotopy analysis method, which is in good agreement with the numerical solutions. The results indicate that non-Newtonian behaviors involving an MF depend substantially on the power exponent n and the modified material parameter K0 introduced by us. Furthermore, the relations of the engineering interest parameters, including local boundary layer thickness, local skin friction, and Nusselt number, are found to be fitted by a quadratic polynomial to n with high precision, which enables the extraction of rapid predictions from a complex nonlinear boundary-layer transport system.
Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body
NASA Astrophysics Data System (ADS)
Wang, Xijing; Li, Jisheng
As modern satellites trend towards both macro-scale and micro-scale designs, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and advances in computer technology continuously enable new optimal algorithms, providing a powerful tool for solving the problem. Many papers about attitude adjustment have been published, in which the spacecraft is modeled as a rigid body with flexible parts or as a gyrostat-type system; the objective function is usually minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum exchange devices, performed by a sequential single-axis slewing strategy. Recently, simultaneous three-axis minimum-time maneuver (reorientation) problems have been studied by many researchers. The minimum-time maneuver of a rigid spacecraft within onboard power limits is important to study, both for potential space applications, such as surveying multiple targets, and for its academic value. It is also a basic problem because solutions for maneuvering flexible spacecraft are built on the solution of the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis an analytical solution is possible, and the switching line passing through the state-space origin is parabolic; for multiple axes an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are in general only quasi-time-optimal.
On the basis of the minimum value principle, the problem of reorienting an inertially symmetric spacecraft from an initial state of rest to a final state of rest with a time cost function is studied. The solution proceeds as follows. Firstly, the essential conditions for solving the problem are derived from the minimum value principle. The necessary conditions for optimality yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case, the solution is a bang-bang maneuver: the control profile is characterized by saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense; physically, the larger the magnitude of the control torque, the shorter the maneuver time, so saturated controls are used in the singular case as well. Secondly, since the controls are always at their maximum, the key problem is to determine the switch points; the original problem thus reduces to finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust optimization method, is used to determine the switch structure without the gyroscopic coupling; the traditional GA is improved upon in this research. The homotopy method for solving the nonlinear algebraic equations is based on rigorous topological continuum theory. Based on the idea of homotopy, relaxation parameters are introduced and the switch points are computed with simulated annealing. Computer simulation results using a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.
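The single-axis analytic case mentioned above can be checked numerically: for a rest-to-rest slew θ'' = u with |u| ≤ u_max, the time-optimal control is bang-bang with one switch at mid-maneuver and T = 2√(θ_f/u_max). This is an illustrative sketch of that textbook case only, not the multi-axis GA/annealing scheme of the paper:

```python
import numpy as np

def bang_bang_slew(theta_f, u_max, n=20000):
    """Simulate the bang-bang rest-to-rest single-axis slew:
    full positive torque for T/2, full negative torque for T/2."""
    T = 2.0 * np.sqrt(theta_f / u_max)   # analytic minimum time
    dt = T / n
    theta, omega = 0.0, 0.0
    for k in range(n):
        t = (k + 0.5) * dt               # evaluate control at midpoint
        u = u_max if t < T / 2 else -u_max
        omega += u * dt                  # semi-implicit Euler
        theta += omega * dt
    return T, theta, omega

T, th, om = bang_bang_slew(theta_f=1.0, u_max=0.5)
```

The simulation should end at the target angle with zero rate, confirming the single mid-maneuver switch; in the multi-axis case the gyroscopic coupling destroys this closed form and the switch times must be searched for numerically.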
Couple stress fluid flow in a rotating channel with peristalsis
NASA Astrophysics Data System (ADS)
Abd elmaboud, Y.; Abdelsalam, Sara I.; Mekheimer, Kh. S.
2018-04-01
This article describes a new model for obtaining closed-form semi-analytical solutions of peristaltic flow induced by sinusoidal wave trains propagating with constant speed on the walls of a two-dimensional rotating infinite channel. The channel rotates with a constant angular speed about the z-axis and is filled with couple stress fluid. The governing equations of the channel deformation and the flow rate inside the channel are derived using the lubrication theory approach. The resulting equations are solved, using the homotopy perturbation method (HPM), for exact solutions to the longitudinal velocity distribution, pressure gradient, flow rate due to secondary velocity, and pressure rise per wavelength. The effect of various values of physical parameters, such as Taylor's number and the couple stress parameter, together with some interesting features of peristaltic flow, are discussed through graphs. The trapping phenomenon is investigated for different values of the parameters under consideration. It is shown that Taylor's number and the couple stress parameter have an increasing effect on the longitudinal velocity distribution up to half of the channel, on the flow rate due to secondary velocity, and on the number of closed streamlines circulating the bolus.
NASA Astrophysics Data System (ADS)
Khan, Zeeshan; Islam, Saeed; Shah, Rehan Ali; Khan, Muhammad Altaf; Bonyah, Ebenezer; Jan, Bilal; Khan, Aurangzeb
Modern optical fibers require a double-layer coating on the glass fiber in order to provide protection from signal attenuation and mechanical damage. The most important plastic resins used in wires and optical fibers are polyvinyl chloride (PVC), low- and high-density polyethylene (LDPE/HDPE), nylon, and polysulfone. One of the most important factors affecting the final product after processing is the design of the coating die. In the present study, double-layer optical fiber coating is performed using a melt polymer satisfying the Oldroyd 8-constant fluid model in a pressure type die with magnetohydrodynamic (MHD) effects. A wet-on-wet coating process is applied for double-layer optical fiber coating. The coating process in the coating die is modeled as a simple two-layer Couette flow of two immiscible fluids in an annulus with an assigned pressure gradient. Based on the assumptions of fully developed laminar MHD flow, the Oldroyd 8-constant model of a non-Newtonian fluid of two immiscible resin layers is formulated. The governing nonlinear equations are solved analytically by the new technique of the Optimal Homotopy Asymptotic Method (OHAM). The convergence of the series solution is established. The results are also verified by the Adomian Decomposition Method (ADM). The effects of important parameters such as the magnetic parameter Mi, the dilatant constant α, the pseudoplastic constant β, the radii ratio δ, the pressure gradient Ω, the speed of the fiber optics V, and the viscosity ratio κ on the velocity profiles, thickness of the coated fiber optics, volume flow rate, and shear stress on the fiber optics are investigated. Finally, the results of the present work are compared with the experimental results already available in the literature as the non-Newtonian parameters tend to zero.
Symmetry breaking in smectics and surface models of their singularities
Chen, Bryan Gin-ge; Alexander, Gareth P.; Kamien, Randall D.
2009-01-01
The homotopy theory of topological defects in ordered media fails to completely characterize systems with broken translational symmetry. We argue that the problem can be understood in terms of the lack of rotational Goldstone modes in such systems and provide an alternate approach that correctly accounts for the interaction between translations and rotations. Dislocations are associated, as usual, with branch points in a phase field, whereas disclinations arise as critical points and singularities in the phase field. We introduce a three-dimensional model for two-dimensional smectics that clarifies the topology of disclinations and geometrically captures known results without the need to add compatibility conditions. Our work suggests natural generalizations of the two-dimensional smectic theory to higher dimensions and to crystals. PMID:19717435
Melting Heat in Radiative Flow of Carbon Nanotubes with Homogeneous-Heterogeneous Reactions
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Muhammad, Khursheed; Muhammad, Taseer; Alsaedi, Ahmed
2018-04-01
The present article provides mathematical modeling for melting heat and thermal radiation in stagnation-point flow of carbon nanotubes towards a nonlinear stretchable surface of variable thickness. The process of homogeneous-heterogeneous reactions is considered. Diffusion coefficients are considered equal for both reactant and autocatalyst. Water and gasoline oil are taken as base fluids. The conversion of the partial differential system to an ordinary differential system is done by suitable transformations. The optimal homotopy technique is employed to develop solutions for the velocity, temperature, concentration, skin friction and local Nusselt number. Graphical results for various values of pertinent parameters are displayed and discussed. Our results indicate that the skin friction coefficient and local Nusselt number are enhanced for larger values of the nanoparticle volume fraction.
Topology and entanglement in quench dynamics
NASA Astrophysics Data System (ADS)
Chang, Po-Yao
2018-06-01
We classify the topology of the quench dynamics by homotopy groups. A relation between the topological invariant of a postquench order parameter and the topological invariant of a static Hamiltonian is shown in d+1 dimensions (d = 1, 2, 3). The midgap states in the entanglement spectrum of the postquench states reveal their topological nature. When a trivial quantum state is under a sudden quench to a Chern insulator, the midgap states in the entanglement spectrum form rings. These rings are analogous to the boundary Fermi rings in the Hopf insulators. Finally, we show that a postquench order parameter in 3+1 dimensions can be characterized by the second Chern number. The number of Dirac cones in the entanglement spectrum is equal to the second Chern number.
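The topological invariants referred to above can be made concrete numerically. Below is a minimal, self-contained sketch (not the paper's calculation) that computes the Chern number of the occupied band of the illustrative Qi-Wu-Zhang lattice model, using the standard Fukui-Hatsugai plaquette discretization of the Brillouin zone; the model and the grid size are assumptions for the example only.

```python
import numpy as np

def qwz_hamiltonian(kx, ky, m):
    # Qi-Wu-Zhang two-band model (an illustrative choice, not from the paper)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(m, n=24):
    # Fukui-Hatsugai lattice algorithm: sum the Berry flux through each
    # plaquette of the discretized Brillouin zone, built from link variables.
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            u[i, j] = vecs[:, 0]          # occupied (lower) band
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    c = 0.0
    for i in range(n):
        for j in range(n):
            ux  = link(u[i, j], u[(i + 1) % n, j])
            uy  = link(u[(i + 1) % n, j], u[(i + 1) % n, (j + 1) % n])
            ux2 = link(u[i, (j + 1) % n], u[(i + 1) % n, (j + 1) % n])
            uy2 = link(u[i, j], u[i, (j + 1) % n])
            c += np.angle(ux * uy / (ux2 * uy2))
    return round(c / (2 * np.pi))
```

The returned value is an exact integer already for modest grids, which is what makes this discretization popular for classifying band topology.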
Cytoplasmic motion induced by cytoskeleton stretching and its effect on cell mechanics.
Zhang, T
2011-09-01
Cytoplasmic motion, modeled as a steady-state laminar flow induced by cytoskeleton stretching in a cell, is determined, and its effect on the mechanical behavior of the cell under externally applied forces is demonstrated. A non-Newtonian fluid is assumed for the multiphase cytoplasmic fluid, and the analytical velocity field around the macromolecular chain is obtained by solving the reduced nonlinear momentum equation using a homotopy technique. The entropy generation by the fluid internal friction is calculated and incorporated into the entropic-elasticity-based 8-chain constitutive relations. Numerical examples show strengthening behavior of cells in response to externally applied mechanical stimuli. The spatial distribution of the stresses within a cell under externally applied fluid flow forces was also studied.
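The homotopy technique invoked here for the nonlinear momentum equation can be illustrated on a textbook problem. The sketch below (a generic illustration, not the paper's flow calculation) applies the homotopy perturbation method to u' + u^2 = 0, u(0) = 1, whose exact solution is 1/(1+t), and recovers its Taylor series term by term by solving the order-by-order linear equations with SymPy.

```python
import sympy as sp

t, p = sp.symbols('t p')
N = 5  # truncation order of the homotopy expansion

# Unknown correction terms v0, v1, ... in v = v0 + p*v1 + p^2*v2 + ...
v = [sp.Function(f'v{k}')(t) for k in range(N)]
series = sum(p**k * v[k] for k in range(N))

# Homotopy H(v, p) = v' + p*v^2 for the test problem u' + u^2 = 0, u(0) = 1:
# at p = 0 only the linear part survives; at p = 1 the full problem is recovered.
H = sp.expand(sp.diff(series, t) + p * series**2)

sol = []
for k in range(N):
    eq = H.coeff(p, k)                 # collect the O(p^k) equation
    for j, s in enumerate(sol):        # substitute already-solved lower orders
        eq = eq.subs(v[j], s)
    ic = {v[k].subs(t, 0): 1 if k == 0 else 0}
    sol.append(sp.dsolve(sp.Eq(eq, 0), v[k], ics=ic).rhs)

approx = sp.expand(sum(sol))           # set p = 1 to recover the approximation
```

Here `approx` comes out as 1 - t + t^2 - t^3 + t^4, the degree-4 Taylor polynomial of the exact solution.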
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Ahmed, Sohail; Muhammad, Taseer; Alsaedi, Ahmed
2017-10-01
This article examines homogeneous-heterogeneous reactions and internal heat generation in Darcy-Forchheimer flow of nanofluids with different base fluids. Flow is generated by a nonlinear stretchable surface of variable thickness. The characteristics of the nanofluid are explored using CNTs (single- and multi-walled carbon nanotubes). Equal diffusion coefficients are considered for both reactants and autocatalyst. The conversion of partial differential equations (PDEs) to ordinary differential equations (ODEs) is done via appropriate transformations. The optimal homotopy approach is implemented to develop solutions of the governing problems. Averaged squared residual errors are computed. The optimal solution expressions for velocity, temperature and concentration are explored through plots for several values of the physical parameters. Further, the skin friction coefficient and local Nusselt number are examined through graphs.
Magnetic Helicity of Alfven Simple Waves
NASA Technical Reports Server (NTRS)
Webb, Gary M.; Hu, Q.; Dasgupta, B.; Zank, G. P.; Roberts, D.
2010-01-01
The magnetic helicity of fully nonlinear, multi-dimensional Alfven simple waves is investigated, using relative helicity formulae and also an approach involving poloidal and toroidal decomposition of the magnetic field and magnetic vector potential. Different methods to calculate the magnetic vector potential are used, including the homotopy and Biot-Savart formulas. Two basic Alfven modes are identified: (a) the plane 1D Alfven simple wave given in standard texts, in which the Alfven wave propagates along the z-axis, with wave phase φ = k_0(z - λt), where k_0 is the wave number and λ is the group velocity of the wave, and (b) the generalized Barnes (1976) simple Alfven wave in which the wave normal n moves in a circle in the xy-plane perpendicular to the mean field, which is directed along the z-axis. The plane Alfven wave (a) is analogous to the slab Alfven mode, and the generalized Barnes solution (b) is analogous to the 2D mode in Alfvenic, incompressible turbulence. The helicity characteristics of these two basic Alfven modes are distinct. The helicity characteristics of more general multi-dimensional simple Alfven waves are also investigated. Applications to nonlinear Alfvenic fluctuations and structures observed in the solar wind are discussed.
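The homotopy formula mentioned for the magnetic vector potential is the Poincaré-lemma construction A(r) = -r × ∫₀¹ t B(t r) dt, which yields curl A = B for any divergence-free B on a star-shaped domain. A short symbolic check on a generic linear field (an illustrative example, not the Alfvén-wave fields of the paper):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
r = sp.Matrix([x, y, z])

def homotopy_potential(B):
    # Poincare/homotopy formula: A(r) = -r x Integral_0^1 t*B(t*r) dt,
    # valid whenever div B = 0 on a star-shaped domain.
    Bt = B.subs({x: t * x, y: t * y, z: t * z})
    integral = (t * Bt).applyfunc(lambda f: sp.integrate(f, (t, 0, 1)))
    return -r.cross(integral)

def curl(A):
    return sp.Matrix([
        sp.diff(A[2], y) - sp.diff(A[1], z),
        sp.diff(A[0], z) - sp.diff(A[2], x),
        sp.diff(A[1], x) - sp.diff(A[0], y),
    ])

B = sp.Matrix([y, z, x])        # a simple divergence-free field
A = homotopy_potential(B)       # A = -(1/3)*(x*y - z**2, y*z - x**2, z*x - y**2)
```

The construction is gauge-specific: it returns one particular potential, to which any gradient may be added.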
NASA Technical Reports Server (NTRS)
Whorton, M. S.
1998-01-01
Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off nominal performance against robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.
Stability of gradient semigroups under perturbations
NASA Astrophysics Data System (ADS)
Aragão-Costa, E. R.; Caraballo, T.; Carvalho, A. N.; Langa, J. A.
2011-07-01
In this paper we prove that gradient-like semigroups (in the sense of Carvalho and Langa (2009 J. Diff. Eqns 246 2646-68)) are gradient semigroups (possess a Lyapunov function). This is primarily done to provide conditions under which gradient semigroups, in a general metric space, are stable under perturbation, exploiting the known fact (see Carvalho and Langa (2009 J. Diff. Eqns 246 2646-68)) that gradient-like semigroups are stable under perturbation. The results presented here were motivated by the work carried out in Conley (1978 Isolated Invariant Sets and the Morse Index (CBMS Regional Conference Series in Mathematics vol 38) (Providence, RI: American Mathematical Society)) for groups in compact metric spaces (see also Rybakowski (1987 The Homotopy Index and Partial Differential Equations (Universitext) (Berlin: Springer)) for the Morse decomposition of an invariant set for a semigroup on a compact metric space).
Closed-loop endo-atmospheric ascent guidance for reusable launch vehicle
NASA Astrophysics Data System (ADS)
Sun, Hongsheng
This dissertation focuses on the development of a closed-loop endo-atmospheric ascent guidance algorithm for the 2nd generation reusable launch vehicle. Special attention has been given to the issues that affect viability, complexity and reliability in on-board implementation. The algorithm is called once every guidance update cycle to recalculate the optimal solution based on the current flight condition, taking into account atmospheric effects and path constraints. This is different from traditional ascent guidance algorithms, which operate in a simple open-loop mode inside the atmosphere and later switch to a closed-loop vacuum ascent guidance scheme. The classical finite difference method is shown to be well suited for fast solution of the constrained optimal three-dimensional ascent problem. The initial guesses for the solutions are generated using an analytical vacuum optimal ascent guidance algorithm. A homotopy method is employed to gradually introduce the aerodynamic forces, generating the optimal solution from the optimal vacuum solution. The vehicle chosen for this study is the Lockheed Martin X-33 lifting-body reusable launch vehicle. To verify the algorithm presented in this dissertation, a series of open-loop and closed-loop tests are performed for three different missions. Wind effects are also studied in the closed-loop simulations. For comparison, the solutions for the same missions are also obtained by two independent optimization software packages. The results clearly establish the feasibility of closed-loop endo-atmospheric ascent guidance of rocket-powered launch vehicles. ATO cases are also tested to assess the adaptability of the algorithm to autonomously incorporate the abort modes.
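The homotopy strategy described above, solving the easy vacuum problem first and then gradually switching on the aerodynamic terms while warm-starting each solve, is ordinary parameter continuation. A scalar toy sketch (the test equation is illustrative, not the guidance problem):

```python
import math

def newton(f, df, x, tol=1e-12, max_iter=50):
    # Basic Newton iteration, used as the corrector at each homotopy step.
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

def continuation(n_steps=10):
    # Homotopy f(x, eps) = x - (1 - eps) - eps*cos(x):
    # eps = 0 is the trivial problem x = 1 (the "vacuum" analogue here),
    # eps = 1 is the target problem x = cos(x).
    x = 1.0                            # solution of the easy problem
    for k in range(1, n_steps + 1):
        eps = k / n_steps
        f = lambda x, e=eps: x - (1 - e) - e * math.cos(x)
        df = lambda x, e=eps: 1 + e * math.sin(x)
        x = newton(f, df, x)           # warm start from the previous step
    return x
```

Each intermediate problem is solved cheaply because the previous solution is already close, which is exactly why continuation helps when the target problem has no good initial guess of its own.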
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
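The core NPHC idea, starting from a system with known roots and tracking each root as the start system is continuously deformed into the target, can be sketched for a single polynomial. The toy univariate tracker below is an illustration of the principle only, not the NPHC package:

```python
import cmath

def poly_eval(coeffs, x):
    # Horner evaluation; coefficients ordered from highest degree down.
    r = 0j
    for c in coeffs:
        r = r * x + c
    return r

def poly_deriv(coeffs):
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def track_root(target, x, steps=200, newton_iters=5):
    # Deform the start system x^n - 1 into the target polynomial, correcting
    # with Newton at each step. gamma is a fixed generic phase (the "gamma
    # trick"); in practice it is drawn at random to avoid singular paths.
    n = len(target) - 1
    start = [1.0] + [0.0] * (n - 1) + [-1.0]
    d_target, d_start = poly_deriv(target), poly_deriv(start)
    gamma = cmath.exp(0.9j)
    for k in range(1, steps + 1):
        s = k / steps
        for _ in range(newton_iters):
            h = (1 - s) * gamma * poly_eval(start, x) + s * poly_eval(target, x)
            dh = (1 - s) * gamma * poly_eval(d_start, x) + s * poly_eval(d_target, x)
            x -= h / dh
    return x

def all_roots(target):
    # One path per root of the start system (the n-th roots of unity).
    n = len(target) - 1
    return [track_root(target, cmath.exp(2j * cmath.pi * k / n)) for k in range(n)]
```

Because the start system has exactly as many roots as the target's degree, every target root is reached by some path, which is the property the feasible-space algorithm relies on when it needs all power flow solutions at each discretization point.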
Physiological breakdown of Jeffrey six constant nanofluid flow in an endoscope with nonuniform wall
NASA Astrophysics Data System (ADS)
Nadeem, S.; Shaheen, A.; Hussain, S.
2015-12-01
This paper analyses the endoscopic effects of peristaltic nanofluid flow of the Jeffrey six-constant fluid model in the presence of magnetohydrodynamic flow. The problem is modeled in the cylindrical coordinate system, and exact solutions are obtained (where possible) under the low Reynolds number and long wavelength approximations. The velocity equation is solved analytically using the homotopy perturbation technique, while exact solutions are computed for the temperature equation. The solutions depend on the thermophoresis number Nt, the local nanoparticle Grashof number Gr, and the Brownian motion number Nb. The obtained expressions for the velocity, temperature, and nanoparticle concentration profiles are plotted, and the influence of the various physical parameters is investigated for different peristaltic waves.
Darcy-Forchheimer flow with Cattaneo-Christov heat flux and homogeneous-heterogeneous reactions
Hayat, Tasawar; Haider, Farwa; Alsaedi, Ahmed
2017-01-01
Here Darcy-Forchheimer flow of viscoelastic fluids has been analyzed in the presence of Cattaneo-Christov heat flux and homogeneous-heterogeneous reactions. Results for two viscoelastic fluids are obtained and compared. A linear stretching surface has been used to generate the flow. Flow in porous media is characterized by considering the Darcy-Forchheimer model. A modified version of Fourier's law through the Cattaneo-Christov heat flux is employed. Equal diffusion coefficients are employed for both reactants and autocatalyst. The optimal homotopy scheme is employed to develop solutions of the nonlinear problems. Solution expressions of the velocity, temperature and concentration fields are provided. The skin friction coefficient and heat transfer rate are computed and analyzed. The temperature and thermal boundary layer thickness are lower for the Cattaneo-Christov heat flux model in comparison to the classical Fourier's law of heat conduction. Moreover, the homogeneous and heterogeneous reaction parameters have opposite behaviors for the concentration field. PMID:28380014
On the relationship between topological and geometric defects.
Griffin, Sinéad M; Spaldin, Nicola A
2017-08-31
The study of topology in solids is undergoing a renaissance following renewed interest in the properties of ferroic domain walls as well as recent discoveries regarding skyrmionic lattices. Each of these systems possess a property that is 'protected' in a symmetry sense, and is defined rigorously using a branch of mathematics known as topology. In this article we review the formal definition of topological defects as they are classified in terms of homotopy theory, and discuss the precise symmetry-breaking conditions that lead to their formation. We distinguish topological defects from defects that arise from the details of the stacking or structure of the material but are not protected by symmetry, and we propose the term 'geometric defects' to describe the latter. We provide simple material examples of both topological and geometric defect types, and discuss the implications of the classification on the resulting material properties.
Pose-free structure from motion using depth from motion constraints.
Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G
2011-10-01
Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimating the geometry of a scene is to track scene features over several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving far fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point position coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE
Numeric invariants from multidimensional persistence
Skryzalin, Jacek; Carlsson, Gunnar
2017-05-19
Topological data analysis is the study of data using techniques from algebraic topology. Often, one begins with a finite set of points representing data and a “filter” function which assigns a real number to each datum. Using both the data and the filter function, one can construct a filtered complex for further analysis. For example, applying the homology functor to the filtered complex produces an algebraic object known as a “one-dimensional persistence module”, which can often be interpreted as a finite set of intervals representing various geometric features in the data. If one runs the above process incorporating multiple filter functions simultaneously, one instead obtains a multidimensional persistence module. Unfortunately, these are much more difficult to interpret. In this article, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. First we build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence instead of one-dimensional persistence. Furthermore, we argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Finally, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data. This paper extends the results of Adcock et al. (Homol Homotopy Appl 18(1), 381–402, 2016) by constructing numeric invariants from the computation of a multidimensional persistence module as given by Carlsson et al. (J Comput Geom 1(1), 72–100, 2010).
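For intuition on the interval description of one-dimensional persistence mentioned above: the degree-0 (connected-components) part can be computed with nothing more than union-find and the elder rule. The following generic sketch (unrelated to the paper's moduli-space construction) returns the barcode of a vertex-filtered graph:

```python
def zero_dim_persistence(vertex_values, edges):
    # vertex_values: filter value per vertex; an edge (u, v) enters the
    # filtration at max(f(u), f(v)). Returns (birth, death) intervals;
    # the oldest component never dies (death = inf).
    n = len(vertex_values)
    parent = list(range(n))
    birth = list(vertex_values)       # birth time of each component's root

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    bars = []
    for u, v in sorted(edges, key=lambda e: max(vertex_values[e[0]],
                                                vertex_values[e[1]])):
        t = max(vertex_values[u], vertex_values[v])  # edge filtration value
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                   # edge closes a cycle: no 0-dim event
        if birth[ru] > birth[rv]:      # elder rule: younger component dies
            ru, rv = rv, ru
        bars.append((birth[rv], t))    # may be a zero-length interval
        parent[rv] = ru
    bars.append((min(vertex_values), float('inf')))
    return sorted(bars)
```

Higher-degree and multidimensional persistence need genuinely more machinery, which is precisely the difficulty the paper addresses.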
Guo, Wenbin; Jiang, Jiajing; Xiao, Changqing; Zhang, Zhikun; Zhang, Jian; Yu, Liuyu; Liu, Jianrong; Liu, Guiying
2014-01-01
Neuroimaging studies in unaffected siblings of schizophrenia patients can provide clues to the pathophysiology of the development of schizophrenia. However, little is known about alterations of interhemispheric resting-state functional connectivity (FC) in siblings, although the dysconnectivity hypothesis has been prevailing in schizophrenia research for years. In the present study, we used a newly validated voxel-mirrored homotopic connectivity (VMHC) method to identify whether aberrant interhemispheric FC was present at rest in unaffected siblings at increased risk of developing schizophrenia. Forty-six unaffected siblings of schizophrenia patients and 50 age-, sex-, and education-matched healthy controls underwent resting-state functional magnetic resonance imaging (fMRI). Automated VMHC was used to analyze the data. The sibling group had lower VMHC than the control group in the angular gyrus (AG) and the lingual gyrus/cerebellum lobule VI. No region exhibited higher VMHC in the sibling group than in the control group. There was no significant sex difference in VMHC values between male and female siblings or between male and female controls, although evidence has accumulated that the size and shape of the corpus callosum, and functional homotopy, differ between men and women. Our results suggest for the first time that interhemispheric resting-state FC as measured by VMHC is disrupted in unaffected siblings of schizophrenia patients, and add a new clue of abnormal interhemispheric resting-state FC to the pathophysiology of the development of schizophrenia. Copyright © 2013 Elsevier B.V. All rights reserved.
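Conceptually, VMHC is the Fisher z-transformed Pearson correlation between each voxel's time series and that of its mirrored counterpart after normalization to a symmetric template. A schematic NumPy sketch of that final correlation step (the study's preprocessing pipeline is not reproduced here, and the array layout is an assumption for the example):

```python
import numpy as np

def vmhc(left, right):
    # left, right: arrays of shape (n_voxels, n_timepoints) holding the
    # time series of each voxel and of its mirrored counterpart.
    # Returns the Fisher z-transformed correlation per homotopic pair.
    lz = (left - left.mean(1, keepdims=True)) / left.std(1, keepdims=True)
    rz = (right - right.mean(1, keepdims=True)) / right.std(1, keepdims=True)
    r = (lz * rz).mean(axis=1)             # Pearson r per voxel pair
    return np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher z
```

Group comparisons such as the sibling-versus-control contrast are then run voxelwise on these z maps.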
Higher groupoid bundles, higher spaces, and self-dual tensor field equations
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Sämann, Christian; Wolf, Martin
2016-08-01
We develop a description of higher gauge theory with higher groupoids as gauge structure from first principles. This approach captures ordinary gauge theories and gauged sigma models as well as their categorifications on a very general class of (higher) spaces comprising presentable differentiable stacks, such as orbifolds. We start off with a self-contained review on simplicial sets as models of (∞,1)-categories. We then discuss principal bundles in terms of simplicial maps and their homotopies. We explain in detail a differentiation procedure, suggested by Severa, that maps higher groupoids to L∞-algebroids. Generalising this procedure, we define connections for higher groupoid bundles. As an application, we obtain six-dimensional superconformal field theories via a Penrose-Ward transform of higher groupoid bundles over a twistor space. This construction reduces the search for non-Abelian self-dual tensor field equations in six dimensions to a search for the appropriate (higher) gauge structure. The treatment aims to be accessible to theoretical physicists.
The problems in quantum foundations in the light of gauge theories
NASA Astrophysics Data System (ADS)
Ne'Eman, Yuval
1986-04-01
We review the issues of nonseparability and seemingly acausal propagation of information in EPR, as displayed by experiments and the failure of Bell's inequalities. We show that global effects are in the very nature of the geometric structure of modern physical theories, occurring even at the classical level. The Aharonov-Bohm effect, magnetic monopoles, instantons, etc. result from the topology and homotopy features of the fiber bundle manifolds of gauge theories. The conservation of probabilities, a supposedly highly quantum effect, is also achieved through global geometry equations. The EPR observables all fit in such geometries, and space-time is a truncated representation and is not the correct arena for their understanding. Relativistic quantum field theory represents the global action of the measurement operators as the zero-momentum (and therefore spatially infinitely spread) limit of their wave functions (form factors). We also analyze the collapse of the state vector as a case of spontaneous symmetry breakdown in the apparatus-observed state interaction.
On the membrane approximation in isothermal film casting
NASA Astrophysics Data System (ADS)
Hagen, Thomas
2014-08-01
In this work, a one-dimensional model for isothermal film casting is studied. Film casting is an important engineering process to manufacture thin films and sheets from a highly viscous polymer melt. The model equations account for variations in film width and film thickness, and arise from thinness and kinematic assumptions for the free liquid film. The first aspect of our study is a rigorous discussion of the existence and uniqueness of stationary solutions. This objective is approached via the argument principle, exploiting the homotopy invariance of a family of analytic functions. As our second objective, we analyze the linearization of the governing equations about stationary solutions. It is shown that solutions for the associated boundary-initial value problem are given by a strongly continuous semigroup of bounded linear operators. To reach this result, we cast the relevant Cauchy problem in a more accessible form. These transformed equations allow us insight into the regularity of the semigroup, thus yielding the validity of the spectral mapping theorem for the semigroup and the spectrally determined growth property.
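The argument-principle count used in the stationary-solution analysis can be reproduced numerically: the contour integral of f'/f around a closed curve equals 2πi times the number of enclosed zeros (minus poles) of f, and that count is invariant under homotopies of f that keep zeros off the contour. A minimal sketch on a circle (a generic polynomial example, not the film-casting dispersion function):

```python
import cmath

def count_zeros(f, df, center=0j, radius=1.0, n=2000):
    # Argument principle: (1 / (2*pi*i)) * contour integral of f'(z)/f(z)
    # over the circle |z - center| = radius, via the periodic trapezoid rule.
    total = 0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * theta)
        dz = radius * 1j * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += df(z) / f(z) * dz
    return round((total / (2j * cmath.pi)).real)
```

Because the integrand is analytic on the contour, the periodic trapezoid rule converges very quickly, so the rounded value is reliable even for moderate n.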
NASA Astrophysics Data System (ADS)
Descartes, R.; Rota, G.-C.; Euler, L.; Bernoulli, J. D.; Siegel, Edward Carl-Ludwig
2011-03-01
Quantum-statistics Dichotomy: Fermi-Dirac(FDQS) Versus Bose-Einstein(BEQS), respectively with contact-repulsion/non-condensation(FDCR) versus attraction/condensation (BEC), are manifestly demonstrated by Taylor-expansion ONLY of their denominator exponential, identified BOTH as Descartes analytic-geometry conic-sections, FDQS as Ellipse (homotopy to rectangle FDQS distribution-function), VIA Maxwell-Boltzmann classical-statistics (MBCS) to Parabola MORPHISM, VS. BEQS to Hyperbola, Archimedes' HYPERBOLICITY INEVITABILITY, and as well generating-functions [Abramowitz-Stegun, Handbook Math.-Functions, p. 804], respectively of Euler-numbers/functions (via Riemann zeta-function, domination of quantum-statistics: [Pathria, Statistical-Mechanics; Huang, Statistical-Mechanics]) VS. Bernoulli-numbers/functions. Much can be learned about statistical-physics from Euler-numbers/functions via Riemann zeta-function(s) VS. Bernoulli-numbers/functions [Conway-Guy, Book of Numbers], and about Euler-numbers/functions via Riemann zeta-function(s) MORPHISM VS. Bernoulli-numbers/functions, and vice versa. Ex.: Riemann-hypothesis PHYSICS proof PARTLY as BEQS BEC/BEA.
NASA Astrophysics Data System (ADS)
Ma, Lin; Wang, Kexin; Xu, Zuhua; Shao, Zhijiang; Song, Zhengyu; Biegler, Lorenz T.
2018-05-01
This study presents a trajectory optimization framework for lunar rover performing vertical takeoff vertical landing (VTVL) maneuvers in the presence of terrain using variable-thrust propulsion. First, a VTVL trajectory optimization problem with three-dimensional kinematics and dynamics model, boundary conditions, and path constraints is formulated. Then, a finite-element approach transcribes the formulated trajectory optimization problem into a nonlinear programming (NLP) problem solved by a highly efficient NLP solver. A homotopy-based backtracking strategy is applied to enhance the convergence in solving the formulated VTVL trajectory optimization problem. The optimal thrust solution typically has a "bang-bang" profile considering that bounds are imposed on the magnitude of engine thrust. An adaptive mesh refinement strategy based on a constant Hamiltonian profile is designed to address the difficulty in locating the breakpoints in the thrust profile. Four scenarios are simulated. Simulation results indicate that the proposed trajectory optimization framework has sufficient adaptability to handle VTVL missions efficiently.
NASA Astrophysics Data System (ADS)
Wang, Yuan; Wu, Rongsheng
2001-12-01
A theoretical argument for the so-called suitable spatial condition is conducted with the aid of a homotopy framework, to demonstrate that the proposed boundary condition guarantees that over-specification of the boundary condition resulting from an adjoint model on a limited area is no longer an issue, while preserving its well-posedness and optimal character in the boundary setting. The ill-posedness of an over-specified spatial boundary condition is, in a sense, inevitable for an adjoint model, since data assimilation processes have to adapt to prescribed observations that are over-specified at the spatial boundaries of the modeling domain. From the viewpoint of pragmatic implementation, the theoretical framework of our proposed condition for spatial boundaries can be reduced to the hybrid formulation of a nudging filter, a radiation condition taking account of ambient forcing, and a Dirichlet-type boundary condition compatible with the observations prescribed in the data assimilation procedure. All of these treatments are familiar to mesoscale modelers.
NASA Astrophysics Data System (ADS)
Ramana Reddy, J. V.; Srikanth, D.; Das, Samir K.
2017-08-01
A couple stress fluid model with the suspension of silver nanoparticles is proposed in order to investigate theoretically the natural convection of temperature and concentration. In particular, the flow is considered in an artery with an obstruction wherein the rheology of blood is taken as a couple stress fluid. The effects of the permeability of the stenosis and the treatment procedure involving a catheter are also considered in the model. The obtained non-linear momentum, temperature and concentration equations are solved using the homotopy perturbation method. Nanoparticles and the two viscosities of the couple stress fluid seem to play a significant role in the flow regime. The pressure drop, flow rate, resistance to the fluid flow and shear stress are computed and their effects are analyzed with respect to various fluids and geometric parameters. Convergence of the temperature and its dependency on the degree of deformation is effectively depicted. It is observed that the Nusselt number increases as the volume fraction increases. Hence magnification of molecular thermal dispersion can be achieved by increasing the nanoparticle concentration. It is also observed that concentration dispersion is greater for severe stenosis and it is maximum at the first extrema. The secondary flow of the axial velocity in the stenotic region is observed and is asymmetric in the tapered artery. The obtained results can be utilized in understanding the increase in heat transfer and enhancement of mass dispersion, which could be used for drug delivery in the treatment of stenotic conditions.
NASA Astrophysics Data System (ADS)
Monaco, Domenico; Tauber, Clément
2017-07-01
We establish a connection between two recently proposed approaches to the understanding of the geometric origin of the Fu-Kane-Mele invariant FKM ∈ Z_2, arising in the context of two-dimensional time-reversal symmetric topological insulators. On the one hand, the Z_2 invariant can be formulated in terms of the Berry connection and the Berry curvature of the Bloch bundle of occupied states over the Brillouin torus. On the other, using techniques from the theory of bundle gerbes, it is possible to provide an expression for FKM containing the square root of the Wess-Zumino amplitude for a certain U(N)-valued field over the Brillouin torus. We link the two formulas by showing directly the equality between the above-mentioned Wess-Zumino amplitude and the Berry phase, as well as between their square roots. An essential tool of independent interest is an equivariant version of the adjoint Polyakov-Wiegmann formula for fields T^2 → U(N), of which we provide a proof employing only basic homotopy theory and circumventing the language of bundle gerbes.
Fractal properties of background noise and target signal enhancement using CSEM data
NASA Astrophysics Data System (ADS)
Benavides, Alfonso; Everett, Mark E.; Pierce, Carl; Nguyen, Cam
2003-09-01
Controlled-source electromagnetic (CSEM) spatial profiles and 2-D conductivity maps were obtained on the Brazos Valley, TX floodplain to study the fractal statistics of geological signals and the effects of man-made conductive targets, using Geonics EM34, EM31 and EM63 instruments. Using target-free areas, a consistent power-law power spectrum (|A(k)| ~ k^-β) for the profiles was found, with β values typical of fractional Brownian motion (fBm). This means that the spatial variation of conductivity does not correspond to Gaussian statistics: there are spatial correlations at different scales. The presence of targets tends to flatten the power-law power spectrum (PS) at small wavenumbers. Detection and localization of targets can be achieved using the short-time Fourier transform (STFT). The presence of targets is enhanced because the signal energy is spread to higher wavenumbers (small scale numbers) at the positions occupied by the targets. In the case of poor spatial sampling or a small amount of data, the information available from the power spectrum is not enough to separate spatial correlations from target signatures. Advantages are gained by using the spatial correlations of the fBm in order to reject the background response and to enhance the signals from highly conductive targets. This approach was tested for the EM31 using a pre-processing step that combines apparent conductivity readings from two perpendicular transmitter-receiver orientations at each station. The response obtained using time-domain CSEM is influenced to a lesser degree by geological noise, and the target response can be processed to recover target features. The homotopy method is proposed to solve the inverse problem using a set of possible target models and a dynamic library of responses used to optimize the starting model.
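The power-law spectral diagnostic |A(k)| ~ k^-β can be reproduced in a few lines: synthesize an fBm-like profile by spectral synthesis and recover β by log-log least squares. The data below are synthetic, purely for illustration, not the survey's measurements:

```python
import numpy as np

def spectral_exponent(signal):
    # Estimate beta in |A(k)| ~ k^(-beta) by log-log least-squares fit.
    a = np.fft.rfft(signal - np.mean(signal))
    k = np.arange(1, len(a))                 # skip the k = 0 (mean) bin
    slope, _ = np.polyfit(np.log(k), np.log(np.abs(a[1:])), 1)
    return -slope

def synthesize_fbm_profile(n, beta, seed=0):
    # Spectral synthesis: power-law amplitudes k^(-beta) with random phases
    # (a standard recipe for fBm-like profiles; parameters are illustrative).
    rng = np.random.default_rng(seed)
    k = np.arange(1, n // 2 + 1)
    phases = rng.uniform(0, 2 * np.pi, k.size)
    phases[-1] = 0.0                         # Nyquist bin must be real
    spectrum = np.concatenate(([0], k ** (-float(beta)) * np.exp(1j * phases)))
    return np.fft.irfft(spectrum, n)
```

A flattening of the fitted slope over a sliding window is then the kind of signature the profile analysis uses to flag conductive targets against the fractal background.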
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Topological T-duality, automorphisms and classifying spaces
NASA Astrophysics Data System (ADS)
Pande, Ashwin S.
2014-08-01
We extend the formalism of Topological T-duality to spaces which are the total space of a principal S1-bundle p:E→W with an H-flux in H3(E,Z) together with an automorphism of the continuous-trace algebra on E determined by H. The automorphism is a ‘topological approximation’ to a gerby gauge transformation of spacetime. We motivate this physically from Buscher’s Rules for T-duality. Using the Equivariant Brauer Group, we connect this problem to the C∗-algebraic formalism of Topological T-duality of Mathai and Rosenberg (2005). We show that the study of this problem leads to the study of a purely topological problem, namely, Topological T-duality of triples (p,b,H) consisting of isomorphism classes of a principal circle bundle p:X→B and classes b∈H2(X,Z) and H∈H3(X,Z). We construct a classifying space R for triples in a manner similar to the work of Bunke and Schick (2005). We characterize R up to homotopy and study some of its properties. We show that it possesses a natural self-map which induces T-duality for triples. We study some properties of this map.
Proceedings of the International Symposium on Topological Aspects of Critical Systems and Networks
NASA Astrophysics Data System (ADS)
Yakubo, Kousuke; Amitsuka, Hiroshi; Ishikawa, Goo; Machino, Kazuo; Nakagaki, Toshiyuki; Tanda, Satoshi; Yamada, Hideto; Kichiji, Nozomi
2007-07-01
I. General properties of networks. Physics of network security / Y.-C. Lai, X. Wang and C. H. Lai. Multi-state interacting particle systems on scale-free networks / N. Masuda and N. Konno. Homotopy Reduction of Complex Networks / Y. Hiraoka and T. Ichinomiya. Analysis of the Susceptible-Infected-Susceptible Model on Complex Network / T. Ichinomiya -- II. Complexity in social science. Innovation and Development in a Random Lattice / J. Lahtinen. Long-tailed distributions in biological systems: revisit to Lognormals / N. Kobayashi ... [et al.]. Two-class structure of income distribution in the USA: exponential bulk and power-law tail / V. M. Yakovenko and A. Christian Silva. Power Law distributions in two community currencies / N. Kichiji and M. Nishibe -- III. Patterns in biological objects. Stoichiometric network analysis of nonlinear phenomena in reaction mechanism for TWC converters / M. Marek ... [et al.]. Collective movement and morphogenesis of epithelial cells / H. Haga and K. Kawabata. Indecisive behavior of amoeba crossing an environmental barrier / S. Takagi ... [et al.]. Effects of amount of food on path selection in the transport network of an amoeboid organism / T. Nakagaki ... [et al.]. Light scattering study in double network gels / M. Fukunaya ... [et al.]. Blood flow velocity in the choroid in punctate inner choroidopathy and Vogt-Koyanagi-Harada disease; and multifractal analysis of choroidal blood flow in age-related macular degeneration / K. Yoshida ... [et al.]. Topological analysis of placental arteries: correlation with neonatal growth / H. Yamada and K. Yakubo -- IV. Criticality in pure and applied physics. Droplets in Disordered Metallic Quantum Critical Systems / A. H. Castro Neto and B. A. Jones. Importance of static disorder and inhomogeneous cooperative dynamics in heavy-fermion metals / O. O. Bernal. Competition between spin glass and Antiferromagnetic phases in heavy fermion materials / S. Sullow.
Emergent Phases via Fermi surface reconstruction near the metamagnetic quantum critical point in U(Ru1-xRhx)2Si2 / K. H. Kim ... [et al.]. Continuous Evolution of the Fermi Surface of CeRu2Si2 across the metamagnetic transition / R. Daou, C. Bergemann and S. R. Julian. Phase transition between the itinerant and the localized f-electron states in heavy fermion antiferromagnet Ce(Ru0.9Rh0.1)2(Si1-yGey). Relation between magnetism and metal-insulator transition in Mn-doped SrRuO3 / M. Yokoyama ... [et al.]. Magnetization study of pairing and Vortex states in Sr2RuO4 / K. Tenya ... [et al.]. Single-site effects of Pr ions doped in ThRu2Si2 / A. Morishita ... [et al.]. 51V-NMR studies of Heisenberg Triangular System V15 Cluster / Y. Furukawa ... [et al.]. Menger sponge-like fractal body created with a designed template method / H. Mayama and K. Tsujii. Nonlinear lattice relaxation mechanism for photoexcited dimetal-halide chain compounds / J. Ohara and S. Yamamoto. Real space renormalization group analysis with the replica method for the two-dimensional Ising spin glass / T. Hasegawa and K. Nemoto. Quantum Network models and their symmetry properties / T. Ohtsuki and K. M. Slevin. Fractality of critical percolation networks / M. Mitobe and K. Yakubo. Ising phase transition on curved surfaces / Y. Sakaniwa, I. Hasegawa and H. Shima. Quantum confinement in deformed cylindrical surfaces / H. Taira and H. Shima. Topological spin currents due to nonadiabatic quantum pumping / K. Yakubo and M. Morikawa. Charge density wave state in topological crystal / T. Nogawa and K. Nemoto. Spatiotemporal mapping of symmetrical surface acoustic fields on crystals and periodic microstructures / T. Tachizaki ... [et al.]. Clean optical vortex beam generation for large topological charge / J. Hamazaki, Y. Mineta and R. Morita. Spherically symmetric Black Hole in a topological universe: a toy model / K. Konno ... [et al.].
Intelligence-related differences in the asymmetry of spontaneous cerebral activity.
Santarnecchi, Emiliano; Tatti, Elisa; Rossi, Simone; Serino, Vinicio; Rossi, Alessandro
2015-09-01
Recent evidence suggests that the spontaneous BOLD signal synchronization of corresponding interhemispheric, homotopic regions is a stable trait of human brain physiology, with emerging differences in such organization also being related to some pathological conditions. To understand whether such brain functional symmetries play a role in higher-order cognitive functioning, here we correlated the functional homotopy profiles of 119 healthy subjects with their intelligence level. Counterintuitively, reduced homotopic connectivity was observed in above-average-IQ versus average-IQ subjects, with significant reductions in visual and somatosensory cortices, supplementary motor area, rolandic operculum, and middle temporal gyrus, possibly suggesting that a downgrading of interhemispheric talk at rest could be associated with higher cognitive functioning. These regions also showed increased spontaneous synchrony with medial structures located in the ipsi- and contralateral hemispheres, a pattern mostly detectable for regions in the left hemisphere. Interactions with age and gender were also tested, revealing different patterns for subjects above and below 25 years old and less homotopic connectivity in the prefrontal cortex and posterior midline regions in female participants with higher IQ scores. These findings support prior evidence for a functional role of homotopic connectivity in human cognitive expression, promoting the reduction of synchrony between primary sensory regions as a predictor of higher intelligence levels. © 2015 Wiley Periodicals, Inc.
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
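The idea of a nested model family controlled by continuous parameters can be illustrated with a minimal SIRS-type sketch in which one parameter interpolates between full immunity and free reinfection; the rates, names, and forward-Euler scheme below are invented for illustration and are not the paper's eight fitted models.

```python
def simulate(sigma, nu, beta=0.4, gamma=0.1, days=2000.0, dt=0.1):
    """Minimal SIRS-type model in which single parameters interpolate
    between the extreme assumptions the abstract describes:
      sigma = 0 -> recovered hosts fully immune (SIR limit)
      sigma = 1 -> reinfection as easy as first infection (SIS-like)
      nu    = 0 -> immunity never wanes; nu > 0 -> temporary immunity
    All names and rates here are illustrative, not fitted values."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):          # forward-Euler integration
        force = beta * I                     # force of infection
        dS = -force * S + nu * R
        dI = force * S + sigma * force * R - gamma * I
        dR = gamma * I - sigma * force * R - nu * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

# Endemic infection level rises as reinfection gets easier:
I_sir = simulate(sigma=0.0, nu=0.0)[1]   # epidemic burns out
I_mid = simulate(sigma=0.5, nu=0.0)[1]   # partial susceptibility
I_sis = simulate(sigma=1.0, nu=0.0)[1]   # no lasting immunity
```

Setting sigma = 0 recovers the solid-lifelong-immunity extreme the authors reject, sigma = 1 the no-immunity extreme, and intermediate values the partial-immunity structures that fit the data best.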
Solving the infeasible trust-region problem using approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott
2004-07-01
The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When simulations are expensive to evaluate or their outputs are noisy, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may converge prematurely. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. When explicit constraints appear in the local problem bounded by the trust region, however, that problem may have no feasible solution. To remedy this, the mathematical community has developed different versions of a composite-steps approach, which consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework that uses homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential-steps algorithms.
In this paper, the similarities are described and the two-step algorithm is extended to the case of approximations.
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Methods of analysis. 2.19 Section 2.19 Food and... ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis... enforcement programs to utilize the methods of analysis of the AOAC INTERNATIONAL (AOAC) as published in the...
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Methods of analysis. 2.19 Section 2.19 Food and... ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis... enforcement programs to utilize the methods of analysis of the AOAC INTERNATIONAL (AOAC) as published in the...
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2012-05-29
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2011-09-27
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
Computational methods for global/local analysis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Methods of analysis. 163.5 Section 163.5 Food and... CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in cacao products shall be determined by the following methods of analysis prescribed in “Official Methods...
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana
2014-01-01
Objective To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. Conclusions The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil. PMID:25182282
A Multidimensional Analysis Tool for Visualizing Online Interactions
ERIC Educational Resources Information Center
Kim, Minjeong; Lee, Eunchul
2012-01-01
This study proposes and verifies the performance of an analysis tool for visualizing online interactions. A review of the most widely used methods for analyzing online interactions, including quantitative analysis, content analysis, and social network analysis methods, indicates these analysis methods have some limitations resulting from their…
Methods for Determining Particle Size Distributions from Nuclear Detonations.
1987-03-01
Contents (extract): ... Debris; IV. Summary of Sample Preparation Method; V. Set Parameters for PCS; VI. Analysis by Vendors; ... XV. Results From Brookhaven Analysis Using the Method of Cumulants; XVI. Results From Brookhaven Analysis of Sample R-3 Using Histogram Method; XVII. Results From Brookhaven Analysis of Sample R-8 Using Histogram Method; XVIII. TEM Particle ...
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. Once completed, models are mainly used to support analysis. Model analysis is an activity typically based on human skills, and given the size and complexity of the models, the process can be complicated, making omissions or miscalculations very likely. This situation has fostered research into automated analysis methods to support analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they address specific situations and rely on different metamodels, so some analysis methods may not be applicable to all enterprise models. This paper presents a compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Validity and consistency assessment of accident analysis methods in the petroleum industry.
Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza
2017-11-17
Accident analysis is the main aspect of accident investigation. It involves connecting different causes in a procedural way. It is therefore important to use valid and reliable methods to investigate the causal factors of accidents, especially the noteworthy ones. This study assessed the accuracy (sensitivity index [SI]) and consistency of the six accident analysis methods most commonly used in the petroleum industry. Two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, and the accuracy and consistency of the methods were then evaluated. The assessors were trained in a workshop on accident analysis methods. The systematic cause analysis technique and bowtie methods gained the greatest SI scores for the personal and process safety accidents, respectively. The best average consistency for a single method (based on 10 independent assessors) was in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree can enhance the sensitivity and consistency of accident analysis.
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Methods of analysis. 2.19 Section 2.19 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL GENERAL ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis...
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Methods of analysis. 2.19 Section 2.19 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL GENERAL ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis...
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Methods of analysis. 2.19 Section 2.19 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL GENERAL ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis...
Jiang, Wei; Yu, Weichuan
2017-02-15
In genome-wide association studies (GWASs) of common diseases/traits, multiple GWASs with the same phenotype are often analyzed together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular for jointly analyzing datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method that controls the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods, and our method discovers more associations than meta-analysis methods in empirical datasets of four phenotypes. The R package is available at: http://bioinformatics.ust.hk/Jlfdr.html . eeyu@ust.hk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
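For orientation, the summary-statistics baselines that Jlfdr-type methods compete with can be written in a few lines; the sketch below implements Stouffer's weighted z-combination (a standard meta-analysis method, not the Jlfdr procedure itself), with invented z-scores.

```python
import math

def stouffer(z_scores, weights=None):
    """Stouffer's weighted z-combination -- one of the standard
    summary-statistics meta-analysis baselines such joint methods are
    compared against (the Jlfdr procedure itself is not reproduced)."""
    if weights is None:
        weights = [1.0] * len(z_scores)
    num = sum(w * z for w, z in zip(weights, z_scores))
    return num / math.sqrt(sum(w * w for w in weights))

def z_to_p(z):
    """Two-sided p-value for a standard-normal z statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Invented z-scores for one SNP observed in three GWASs:
z_combined = stouffer([2.1, 1.8, 2.4])
p_combined = z_to_p(z_combined)   # smaller than any single study's p
```

A typical design choice is to weight each study by the square root of its sample size, which this function supports via `weights`.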
Global/local methods research using a common structural analysis framework
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.
1991-01-01
Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method, the most widely used methods for lipidomic analysis, are used for this comparison together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method, which has shown promising results in metabolomic analyses. Nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, although only 40 μL of organic solvent is used per sample analysis, compared with 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Waszak, M. R.; Schmidt, D. S.
1985-01-01
As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they also become more flexible. For highly flexible vehicles, handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first is an open-loop model analysis technique that considers the effects of modal residue magnitudes in determining vehicle handling qualities. The second is a pilot-in-the-loop analysis procedure that considers several closed-loop system characteristics. Volume 1 consists of the development and application of these two analysis methods.
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Methods of analysis. 133.5 Section 133.5 Food and... CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture, milkfat, and phosphatase levels in cheeses will be determined by the following methods of analysis from...
7 CFR 58.812 - Methods of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Methods of sample analysis. 58.812 Section 58.812... Procedures § 58.812 Methods of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL, as issued by the USDA, Agricultural...
7 CFR 58.245 - Method of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Method of sample analysis. 58.245 Section 58.245... Procedures § 58.245 Method of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL as issued by the USDA, Agricultural Marketing...
Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana
2014-02-01
To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.
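The paired t-test comparison used in the abstract is straightforward to sketch; the readings below are invented stand-ins for the γ-oryzanol determinations, not the study's data.

```python
import math

def paired_t(x, y):
    """Paired t-statistic and degrees of freedom for two measurement
    methods applied to the same samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1

# Invented densitometric vs. image-analysis readings (not the paper's
# gamma-oryzanol data) for six oil samples:
densito = [2.51, 2.62, 2.48, 2.55, 2.60, 2.49]
image   = [2.49, 2.64, 2.47, 2.57, 2.58, 2.50]
t, df = paired_t(densito, image)
# |t| below the two-sided 5% critical value t(0.975, df=5) ~= 2.571
# would indicate no significant difference between the methods.
```

The test is paired, rather than two-sample, because both methods quantify the very same extracts, so sample-to-sample variation cancels in the differences.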
Who's in and why? A typology of stakeholder analysis methods for natural resource management.
Reed, Mark S; Graves, Anil; Dandy, Norman; Posthumus, Helena; Hubacek, Klaus; Morris, Joe; Prell, Christina; Quinn, Claire H; Stringer, Lindsay C
2009-04-01
Stakeholder analysis means many things to different people. Various methods and approaches have been developed in different fields for different purposes, leading to confusion over the concept and practice of stakeholder analysis. This paper asks how and why stakeholder analysis should be conducted for participatory natural resource management research. This is achieved by reviewing the development of stakeholder analysis in business management, development and natural resource management. The normative and instrumental theoretical basis for stakeholder analysis is discussed, and a stakeholder analysis typology is proposed. This consists of methods for: i) identifying stakeholders; ii) differentiating between and categorising stakeholders; and iii) investigating relationships between stakeholders. The range of methods that can be used to carry out each type of analysis is reviewed. These methods and approaches are then illustrated through a series of case studies funded through the Rural Economy and Land Use (RELU) programme. These case studies show the wide range of participatory and non-participatory methods that can be used, and discuss some of the challenges and limitations of existing methods for stakeholder analysis. The case studies also propose new tools and combinations of methods that can more effectively identify and categorise stakeholders and help understand their inter-relationships.
Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low, with the laboratory analysis methods contributing slightly more to the difference than the field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream; the difference is smaller between grab samples analyzed for TSS and the fine fraction of SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
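A site-specific relation of the kind described above is often fitted as a power law between TSS and SSC. The sketch below uses made-up paired measurements, not data from the study, to show a log-log least-squares fit of such a rating:

```python
import numpy as np

# hypothetical paired measurements at one site (mg/L); not data from the study
tss = np.array([12.0, 25.0, 40.0, 80.0, 150.0, 300.0])   # grab/TSS results
ssc = np.array([18.0, 35.0, 60.0, 130.0, 260.0, 520.0])  # EWDI/SSC results

# fit a power-law rating ssc = a * tss^b by least squares in log space
b, log_a = np.polyfit(np.log(tss), np.log(ssc), 1)
predict = lambda t: np.exp(log_a) * np.asarray(t) ** b

print(round(b, 2))  # a slope near 1 indicates a roughly proportional bias
```

A relation like this lets TSS records collected under the older protocol be translated into SSC estimates at that site, subject to the usual caveats about extrapolation.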
Rapid Radiochemical Method for Radium-226 in Building ...
Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Radium-226 in building materials Method Selected for: SAM lists this method for qualitative analysis of radium-226 in concrete or brick building materials Summary of subject analytical method which will be posted to the SAM website to allow access to the method.
Rapid Radiochemical Method for Americium-241 in Building ...
Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Americium-241 in building materials Method Selected for: SAM lists this method for qualitative analysis of americium-241 in concrete or brick building materials. Summary of subject analytical method which will be posted to the SAM website to allow access to the method.
Adjusting for multiple prognostic factors in the analysis of randomised trials
2013-01-01
Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios.
Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes, however treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
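The distinction between adjusting for main effects and adjusting for full strata can be illustrated with a small simulation. The sketch below uses synthetic data with a continuous outcome for simplicity (not the trial data or outcome types from the paper) and contrasts the two regression adjustments; with no interaction between the prognostic factors, both recover the true treatment effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
f1 = rng.integers(0, 2, n)            # prognostic factor 1
f2 = rng.integers(0, 2, n)            # prognostic factor 2
treat = rng.integers(0, 2, n)         # randomised treatment assignment
y = 1.0 * treat + 0.8 * f1 + 0.5 * f2 + rng.normal(size=n)  # true effect = 1

def ols_effect(design):
    """OLS treatment-effect estimate with the given extra columns."""
    X = np.column_stack([np.ones(n), treat] + design)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                    # coefficient on treatment

adj_main = ols_effect([f1, f2])              # main effects only
adj_strata = ols_effect([f1, f2, f1 * f2])   # full stratum adjustment
print(round(adj_main, 2), round(adj_strata, 2))
```

With strong factor interactions in the outcome model, the two estimates (and their error rates) would start to diverge, which is the regime the paper identifies as requiring stratified analysis.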
Multivariate Methods for Meta-Analysis of Genetic Association Studies.
Dimou, Niki L; Pantavou, Katerina G; Braliou, Georgia G; Bagos, Pantelis G
2018-01-01
Multivariate meta-analysis of genetic association studies and genome-wide association studies has received a remarkable attention as it improves the precision of the analysis. Here, we review, summarize and present in a unified framework methods for multivariate meta-analysis of genetic association studies and genome-wide association studies. Starting with the statistical methods used for robust analysis and genetic model selection, we present in brief univariate methods for meta-analysis and we then scrutinize multivariate methodologies. Multivariate models of meta-analysis for a single gene-disease association studies, including models for haplotype association studies, multiple linked polymorphisms and multiple outcomes are discussed. The popular Mendelian randomization approach and special cases of meta-analysis addressing issues such as the assumption of the mode of inheritance, deviation from Hardy-Weinberg Equilibrium and gene-environment interactions are also presented. All available methods are enriched with practical applications and methodologies that could be developed in the future are discussed. Links for all available software implementing multivariate meta-analysis methods are also provided.
Influence analysis in quantitative trait loci detection.
Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko
2014-07-01
This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods-the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of the individuals. These methods are shown useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi
2013-02-01
A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.
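The first stage of the approach, plain principal component analysis of a trajectory, can be sketched as follows. This is a minimal toy illustration (hypothetical function and data, not the authors' implementation or the lysozyme simulation), showing how the top principal components with the largest fluctuations are extracted before relaxation mode analysis would be applied to them:

```python
import numpy as np

def pca_modes(traj, n_components=3):
    """PCA of a trajectory: traj is (n_frames, n_coords).

    Returns the top principal axes and the trajectory projected onto them;
    the projections would then be fed to relaxation mode analysis.
    """
    centered = traj - traj.mean(axis=0)           # remove average structure
    cov = centered.T @ centered / len(traj)       # coordinate covariance
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(evals)[::-1][:n_components]
    axes = evecs[:, order]                        # largest-fluctuation axes
    return axes, centered @ axes

# toy trajectory: 200 frames, 6 coordinates, one dominant slow direction
rng = np.random.default_rng(0)
slow = np.cumsum(rng.normal(size=(200, 1)), axis=0)   # slow random walk
traj = slow @ rng.normal(size=(1, 6)) + 0.1 * rng.normal(size=(200, 6))
axes, proj = pca_modes(traj, n_components=2)
print(axes.shape, proj.shape)  # (6, 2) (200, 2)
```

Restricting the subsequent relaxation mode analysis to these few projections is what makes the combined method tractable for the many degrees of freedom of a protein.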
Rapid Radiochemical Method for Total Radiostrontium (Sr-90) ...
Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Beta counting Method Developed for: Strontium-89 and strontium-90 in building materials Method Selected for: SAM lists this method for qualitative analysis of strontium-89 and strontium-90 in concrete or brick building materials Summary of subject analytical method which will be posted to the SAM website to allow access to the method.
ERIC Educational Resources Information Center
Glass, Gene V.; And Others
Integrative analysis, or what is coming to be known as meta-analysis, is the integration of the findings of many empirical research studies of a topic. Meta-analysis differs from traditional narrative forms of research reviewing in that it is more quantitative and statistical. Thus, the methods of meta-analysis are merely statistical methods,…
Prognostic Analysis System and Methods of Operation
NASA Technical Reports Server (NTRS)
MacKey, Ryan M. E. (Inventor); Sneddon, Robert (Inventor)
2014-01-01
A prognostic analysis system and methods of operating the system are provided. In particular, a prognostic analysis system for the analysis of physical system health applicable to mechanical, electrical, chemical and optical systems and methods of operating the system are described herein.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage after these types of accidents. Crashworthiness analysis has been based on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis is in good agreement with the finite element simulation, which reveals that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 33rd Session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... the criteria appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 32nd session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for Codex with other...
Trojanowicz, Marek; Kolacinska, Kamila; Grate, Jay W.
2018-02-13
Here, the safety and security of nuclear power plant operations depend on the application of the most appropriate techniques and methods of chemical analysis, where modern flow analysis methods prevail. Nevertheless, the current status of the development of these methods is more limited than it might be expected based on their genuine advantages. The main aim of this paper is to review the automated flow analysis procedures developed with various detection methods for the nuclear energy industry. The flow analysis methods for the determination of radionuclides, that have been reported to date, are primarily focused on their environmental applications. The benefits of the application of flow methods in both monitoring of the nuclear wastes and process analysis of the primary circuit coolants of light water nuclear reactors will also be discussed. The application of either continuous flow methods (CFA) or injection methods (FIA, SIA) of the flow analysis with the β-radiometric detection shortens the analysis time and improves the precision of determination due to mechanization of certain time-consuming operations of the sample processing. Compared to the radiometric detection, the mass spectrometry (MS) detection enables one to perform multicomponent analyses as well as the determination of transuranic isotopes with much better limits of detection.
Trojanowicz, Marek; Kołacińska, Kamila; Grate, Jay W
2018-06-01
The safety and security of nuclear power plant operations depend on the application of the most appropriate techniques and methods of chemical analysis, where modern flow analysis methods prevail. Nevertheless, the current status of the development of these methods is more limited than it might be expected based on their genuine advantages. The main aim of this paper is to review the automated flow analysis procedures developed with various detection methods for the nuclear energy industry. The flow analysis methods for the determination of radionuclides, that have been reported to date, are primarily focused on their environmental applications. The benefits of the application of flow methods in both monitoring of the nuclear wastes and process analysis of the primary circuit coolants of light water nuclear reactors will also be discussed. The application of either continuous flow methods (CFA) or injection methods (FIA, SIA) of the flow analysis with the β-radiometric detection shortens the analysis time and improves the precision of determination due to mechanization of certain time-consuming operations of the sample processing. Compared to the radiometric detection, the mass spectrometry (MS) detection enables one to perform multicomponent analyses as well as the determination of transuranic isotopes with much better limits of detection. Copyright © 2018 Elsevier B.V. All rights reserved.
Global/local methods research using the CSM testbed
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. Hayden, Jr.; Thompson, Danniella M.
1990-01-01
Research activities in global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.
2001-10-25
Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique 18,5,17,6. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for
Waste Analysis Plan and Waste Characterization Survey, Barksdale AFB, Louisiana
1991-03-01
Review to assess if analysis is needed, any analyses that are to be provided by generators, and methods to be used to meet specific waste analysis ... sampling method, sampling frequency, parameters of analysis, SW-846 test methods, Department of Transportation (DOT) shipping name and hazard class ... Appendix B, Waste Analysis Plan Rationale: 1. Sampling method rationale: composite liquid ...
Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single discipline analysis), the method, as implemented here, may not show significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method with some reduction in the computational cost and without significant modifications to the analysis tools.
PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES BASED ON THE ΔB METHOD
NASA Astrophysics Data System (ADS)
Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao
Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes it easier to understand stability. The ΔB method (simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and amounts to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability computational software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. This paper also presents the transverse distributive method of restraining force used for planning ground stabilization works on the basis of the example analysis.
Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier
2013-01-01
To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation to detect glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and performed reliable visual fields every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, Guided Progression Analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. The kappa agreement coefficient between methods, and the sensitivity and specificity for each method using expert opinion as the gold standard, were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years but only 3 cases showed progression with all 3 methods. The kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. In a period of 3 years, both methods of GPA analysis offered high specificity; event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
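The mean-based second-moment baseline that the AMV method improves on can be sketched in a few lines. This is only the first-order linearization step, not the full AMV procedure from the paper, and it uses a hypothetical linear performance function, for which the approximation happens to be exact:

```python
import numpy as np

def mv_first_order(g, mu, sigma, h=1e-6):
    """Mean-value first-order second-moment approximation.

    Approximates the mean and standard deviation of g(X) for independent
    normal inputs X_i ~ N(mu_i, sigma_i^2) by linearizing g at the mean,
    with the gradient estimated by central finite differences.
    """
    mu = np.asarray(mu, float)
    g0 = g(mu)                                   # first-order mean: g(mu)
    grad = np.array([
        (g(mu + h * e) - g(mu - h * e)) / (2 * h)
        for e in np.eye(len(mu))
    ])
    return g0, np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))

# toy performance function: linear case, where the linearization is exact
g = lambda x: 3 * x[0] - 2 * x[1]
mean, sd = mv_first_order(g, mu=[1.0, 2.0], sigma=[0.1, 0.2])
print(mean, sd)  # -1.0 and sqrt((3*0.1)**2 + (2*0.2)**2) = 0.5
```

For the non-monotonic, nonlinear functions discussed in the paper, this first-order step is where the plain second-moment method breaks down, and the AMV correction (re-evaluating the exact function at approximate most-probable points) becomes necessary.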
Nilsson, Björn; Håkansson, Petra; Johansson, Mikael; Nelander, Sven; Fioretos, Thoas
2007-01-01
Ontological analysis facilitates the interpretation of microarray data. Here we describe new ontological analysis methods which, unlike existing approaches, are threshold-free and statistically powerful. We perform extensive evaluations and introduce a new concept, detection spectra, to characterize methods. We show that different ontological analysis methods exhibit distinct detection spectra, and that it is critical to account for this diversity. Our results argue strongly against the continued use of existing methods, and provide directions towards an enhanced approach. PMID:17488501
[Enzymatic analysis of the quality of foodstuffs].
Kolesnov, A Iu
1997-01-01
Enzymatic analysis is an independent and separate branch of enzymology and analytical chemistry. It has become one of the most important methodologies used in food analysis. Enzymatic analysis allows the quick, reliable determination of many food ingredients. Often these contents cannot be determined by conventional methods, or if methods are available, they are determined only with limited accuracy. Today, methods of enzymatic analysis are being increasingly used in the investigation of foodstuffs. Enzymatic measurement techniques are used in industry, scientific and food inspection laboratories for quality analysis. This article describes the requirements of an optimal analytical method: specificity, sample preparation, assay performance, precision, sensitivity, time requirement, analysis cost, safety of reagents.
Rapid Radiochemical Method for Isotopic Uranium in Building ...
Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Uranium-234, uranium-235, and uranium-238 in concrete and brick samples Method Selected for: SAM lists this method for qualitative analysis of uranium-234, uranium-235, and uranium-238 in concrete or brick building materials. Summary of subject analytical method which will be posted to the SAM website to allow access to the method.
Methods for Conducting Cognitive Task Analysis for a Decision Making Task.
1996-01-01
Cognitive task analysis (CTA) improves traditional task analysis procedures by analyzing the thought processes of performers while they complete a...for using these methods to conduct a CTA for domains which involve critical decision making tasks in naturalistic settings. The cognitive task analysis methods
A Method for Cognitive Task Analysis
1992-07-01
A method for cognitive task analysis is described based on the notion of 'generic tasks'. The method distinguishes three layers of analysis. At the...model for applied areas such as the development of knowledge-based systems and training, are discussed. Keywords: problem solving, cognitive task analysis, knowledge, strategies.
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F
2016-01-01
Abstract Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
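One of the compared estimators, the leave-one-out jackknife, is straightforward to sketch for the single-instrument ratio (Wald) IV estimate. The simulation below is a toy Mendelian-randomization-style setup (invented parameters, not the paper's simulation design):

```python
import numpy as np

def wald_iv(z, x, y):
    """Ratio (Wald) IV estimate: cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

def jackknife_se(z, x, y):
    """Leave-one-out jackknife standard error of the Wald IV estimate."""
    n = len(z)
    idx = np.arange(n)
    loo = np.array([wald_iv(z[idx != i], x[idx != i], y[idx != i])
                    for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

# simulated data: genotype z -> exposure x -> outcome y, true effect 0.5,
# with an unmeasured confounder u affecting both x and y
rng = np.random.default_rng(1)
n = 500
z = rng.binomial(2, 0.3, n).astype(float)   # genotype coded 0/1/2
u = rng.normal(size=n)                      # unmeasured confounder
x = 0.8 * z + u + rng.normal(size=n)
y = 0.5 * x + u + rng.normal(size=n)
est, se = wald_iv(z, x, y), jackknife_se(z, x, y)
print(round(est, 2), round(se, 3))
```

The delta-method and bootstrap variance estimators compared in the paper would be applied to the same ratio estimate; the jackknife shown here is the resampling method the authors found least biased.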
2006-01-01
ENVIRONMENTAL ANALYSIS. Analysis of Explosives in Soil Using Solid Phase Microextraction and Gas Chromatography. Howard T. Mayfield, Air Force Research... Abstract: Current methods for the analysis of explosives in soils utilize time-consuming sample preparation workups and extractions. The method detection... chromatography/mass spectrometry to provide a convenient and sensitive analysis method for explosives in soil. Keywords: explosives, TNT, solid phase
Nakagawa, Hiroko; Yuno, Tomoji; Itho, Kiichi
2009-03-01
Recently, a specific detection method for bacteria by flow cytometry using nucleic acid staining was developed as a function of an automated urine formed-elements analyzer for routine urine testing. Here, we performed a basic study of this bacteria analysis method and compared it with the conventional methods used to date: urine sediment analysis, urine Gram staining, and quantitative urine culture. As a result, the bacteria analysis by flow cytometry using nucleic acid staining showed excellent reproducibility and higher sensitivity than microscopic urinary sediment analysis. Based on ROC curve analysis with the urine culture method as the standard, a cut-off level of 120/microL was defined, with sensitivity = 85.7% and specificity = 88.2%. In the scattergram analysis, referenced against the urine culture method, in 90% of rod-positive samples 80% of the dots appeared within 30 degrees of the X axis. In addition, one case indicated that flow cytometric bacteria analysis and time-series scattergram analysis might help trace the progress of the causative bacteria, so this information could be clinically significant. Reporting bacteria information obtained by nucleic acid staining flow cytometry is expected to contribute to rapid diagnosis and treatment of urinary tract infections, as well as to more efficient screening in microbiology and clinical chemistry.
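Evaluating a cut-off of this kind against a reference standard reduces to counting true and false positives. The sketch below uses invented toy counts (not the study's data) to show the computation at a 120/microL cut-off:

```python
import numpy as np

def sens_spec(counts, truth, cutoff):
    """Sensitivity and specificity of 'counts >= cutoff' against a
    reference standard (e.g. a positive urine culture)."""
    pred = counts >= cutoff
    sens = float(np.mean(pred[truth]))     # true positives detected
    spec = float(np.mean(~pred[~truth]))   # true negatives rejected
    return sens, spec

# toy bacteria counts (/microL) and hypothetical culture results
counts = np.array([5, 40, 90, 130, 200, 400, 10, 150, 700, 60])
truth = np.array([0, 0, 0, 1, 1, 1, 0, 1, 1, 0], bool)
print(sens_spec(counts, truth, cutoff=120))  # (1.0, 1.0) on this toy set
```

Sweeping the cut-off over all observed count values and plotting sensitivity against 1 - specificity yields the ROC curve from which the study's cut-off was chosen.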
Finnveden, Göran; Björklund, Anna; Moberg, Asa; Ekvall, Tomas
2007-06-01
A large number of methods and approaches that can be used for supporting waste management decisions at different levels in society have been developed. In this paper an overview of methods is provided and preliminary guidelines for the choice of methods are presented. The methods introduced include: Environmental Impact Assessment, Strategic Environmental Assessment, Life Cycle Assessment, Cost-Benefit Analysis, Cost-effectiveness Analysis, Life-cycle Costing, Risk Assessment, Material Flow Accounting, Substance Flow Analysis, Energy Analysis, Exergy Analysis, Entropy Analysis, Environmental Management Systems, and Environmental Auditing. The characteristics used are the types of impacts included, the objects under study and whether the method is procedural or analytical. The different methods can be described as systems analysis methods. Waste management systems thinking is receiving increasing attention. This is, for example, evidenced by the suggested thematic strategy on waste by the European Commission where life-cycle analysis and life-cycle thinking get prominent positions. Indeed, life-cycle analyses have been shown to provide policy-relevant and consistent results. However, it is also clear that the studies will always be open to criticism since they are simplifications of reality and include uncertainties. This is something all systems analysis methods have in common. Assumptions can be challenged and it may be difficult to generalize from case studies to policies. This suggests that if decisions are going to be made, they are likely to be made on a less than perfect basis.
Analysis of the principal component algorithm in phase-shifting interferometry.
Vargas, J; Quiroga, J Antonio; Belenguer, T
2011-06-15
We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In the former work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
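As a self-contained illustration of the PCA idea behind this demodulation approach (a sketch in plain Python, not the authors' implementation; the frame count, equally spaced phase shifts, and the power-iteration eigensolver are assumptions), the temporal mean image is subtracted, the two dominant principal components of the small frame-covariance matrix are extracted, and the phase is recovered as the arctangent of the two quadrature signals:

```python
import math
import random

def pca_demodulate(frames):
    """Demodulate phase from M phase-shifted interferograms via PCA.

    frames: list of M flattened images I_k = a + b*cos(phi + delta_k).
    Returns a per-pixel phase estimate, recoverable up to a global sign
    and a constant offset (the usual PCA demodulation ambiguity).
    """
    M, N = len(frames), len(frames[0])
    # Remove the temporal mean image (cancels the background term a
    # exactly when the phase shifts are equally spaced over 2*pi).
    mean = [sum(f[p] for f in frames) / M for p in range(N)]
    X = [[f[p] - mean[p] for p in range(N)] for f in frames]
    # M x M frame covariance matrix (M is small, e.g. 4-8 frames).
    C = [[sum(X[i][p] * X[j][p] for p in range(N)) for j in range(M)]
         for i in range(M)]

    def power_iteration(A, iters=500):
        # Dominant eigenpair of a small symmetric matrix.
        v = [random.random() for _ in range(len(A))]
        for _ in range(iters):
            w = [sum(A[i][j] * v[j] for j in range(len(A))) for i in range(len(A))]
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(len(A)))
                  for i in range(len(A)))
        return lam, v

    random.seed(0)
    lam1, v1 = power_iteration(C)
    # Deflate to obtain the second principal direction.
    C2 = [[C[i][j] - lam1 * v1[i] * v1[j] for j in range(M)] for i in range(M)]
    lam2, v2 = power_iteration(C2)
    # Project the data onto the two directions -> quadrature signals.
    s1 = [sum(v1[k] * X[k][p] for k in range(M)) for p in range(N)]
    s2 = [sum(v2[k] * X[k][p] for k in range(M)) for p in range(N)]
    return [math.atan2(s2[p], s1[p]) for p in range(N)]
```

On noiseless synthetic fringes the mean-subtracted data are exactly rank two, so the two principal components span the cosine and sine quadratures and the arctangent recovers the phase map.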
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient for evaluating the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
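As an illustration of the variance-based first-order indices that the Sobol' method estimates (a minimal pure-Python sketch using the standard Saltelli A/B-matrix estimator, not the SAC-SMA study setup; the test function and sample size are assumptions):

```python
import random
import statistics

def sobol_first_order(f, d, n=40000, seed=1):
    """Estimate first-order Sobol' indices for f: [0,1]^d -> R using the
    Saltelli estimator: S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    var = statistics.pvariance(fA + fB)
    S = []
    for i in range(d):
        # AB_i: matrix A with column i replaced by column i of B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(fb * (fab - fa)
                     for fb, fab, fa in zip(fB, fABi, fA)) / n / var)
    return S
```

For an additive linear test function f(x) = x1 + 2*x2 with uniform inputs, the analytic indices are S1 = 0.2 and S2 = 0.8, which the Monte Carlo estimates approach as n grows.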
Code of Federal Regulations, 2012 CFR
2012-01-01
... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...
Code of Federal Regulations, 2014 CFR
2014-01-01
... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...
Code of Federal Regulations, 2013 CFR
2013-01-01
... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...
ERIC Educational Resources Information Center
Çokluk, Ömay; Koçak, Duygu
2016-01-01
In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
Construction and Analysis of Multi-Rate Partitioned Runge-Kutta Methods
2012-06-01
Construction and Analysis of Multi-Rate Partitioned Runge-Kutta Methods, by Patrick R. Mugg. Master's thesis, June 2012. Thesis Advisor: Francis Giraldo; Second Reader: Hong... The most widely known and used procedure for analyzing stability is the Von Neumann method; Von Neumann's stability analysis looks at...
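For reference, the single-rate building block behind such partitioned schemes is an ordinary Runge-Kutta step; a classical fourth-order step is sketched below (a generic textbook sketch, not the thesis's partitioned multi-rate formulation):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Partitioned and multi-rate variants split the right-hand side into fast and slow parts and advance each with its own tableau and step size; the stability of the combined scheme is then what a Von Neumann-type analysis examines.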
Relative contributions of three descriptive methods: implications for behavioral assessment.
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods (the ABC method, the conditional probability method, and the conditional and background probability method) to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations.
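The comparison of conditional and background probabilities can be sketched as follows (a hypothetical illustration with invented event times, window length, and session length, not the study's actual coding scheme):

```python
def conditional_probability(behavior_times, consequence_times, window=10.0):
    """P(consequence within `window` seconds after a behavior occurrence)."""
    hits = sum(
        any(0 <= c - b <= window for c in consequence_times)
        for b in behavior_times
    )
    return hits / len(behavior_times)

def background_probability(consequence_times, session_length,
                           window=10.0, step=1.0):
    """P(consequence within `window` seconds after an arbitrary moment),
    approximated by sliding a probe across the whole session."""
    probes = [i * step for i in range(int(session_length / step))]
    hits = sum(
        any(0 <= c - p <= window for c in consequence_times)
        for p in probes
    )
    return hits / len(probes)
```

A contingency is suggested when the conditional probability clearly exceeds the background probability; when the two are similar, the consequence may simply be occurring independently of the behavior.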
NASA Astrophysics Data System (ADS)
Zabolotna, Natalia I.; Radchenko, Kostiantyn O.; Karas, Oleksandr V.
2018-01-01
A method for diagnosing breast fibroadenoma is proposed, based on statistical analysis (determination and analysis of statistical moments of the 1st-4th order) of polarization images of the imaginary Jones-matrix elements of optically thin (attenuation coefficient τ <= 0.1) blood plasma films, followed by intelligent differentiation using "fuzzy" logic and discriminant analysis. The accuracy of differentiating blood plasma samples into the "norm" and breast "fibroadenoma" groups was 82.7% with linear discriminant analysis and 95.3% with the "fuzzy" logic method. These results confirm the potentially high reliability of differentiation by "fuzzy" analysis.
Estimation of low back moments from video analysis: a validation study.
Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H
2011-09-02
This study aimed to develop, compare, and validate two versions of a video analysis method for the assessment of low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.
Text analysis methods, text analysis apparatuses, and articles of manufacture
Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M
2014-10-28
Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, the principal component analysis is a well-known method for analyzing the static properties of fluctuations of structures obtained by a simulation and classifying the structures into some groups. On the other hand, the relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.
Robitaille, Line; Hoffer, L John
2016-04-21
In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods have never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy set up, simplicity and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it was rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.
Bayesian data analysis in population ecology: motivations, methods, and benefits
Dorazio, Robert
2016-01-01
During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.
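A minimal example of the conjugate Bayesian computation such analyses build on (a sketch assuming a beta-binomial model for, say, a detection or survival probability; the prior parameters and data below are invented, and the credible interval is computed by simple grid integration rather than a special-function library):

```python
import math

def beta_binomial_update(a, b, successes, trials):
    """Conjugate Bayesian update: Beta(a, b) prior on a probability,
    binomial likelihood -> Beta(a + s, b + n - s) posterior."""
    a_post = a + successes
    b_post = b + (trials - successes)
    mean = a_post / (a_post + b_post)
    return a_post, b_post, mean

def beta_pdf(x, a, b):
    # Beta density via log-gamma to avoid overflow.
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(logc + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def credible_interval(a, b, level=0.95, grid=20000):
    """Central credible interval from a discretized posterior CDF."""
    xs = [(i + 0.5) / grid for i in range(grid)]
    pdf = [beta_pdf(x, a, b) for x in xs]
    total = sum(pdf)
    cdf, lo, hi = 0.0, None, None
    for x, p in zip(xs, pdf):
        cdf += p / total
        if lo is None and cdf >= (1 - level) / 2:
            lo = x
        if hi is None and cdf >= 1 - (1 - level) / 2:
            hi = x
    return lo, hi
```

Unlike a frequentist point estimate with a confidence interval, the posterior here is a full distribution over the parameter, and the credible interval has a direct probability interpretation.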
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
Division Algebras, Supersymmetry and Higher Gauge Theory
NASA Astrophysics Data System (ADS)
Huerta, John Gmerek
2011-12-01
Starting from the four normed division algebras (the real numbers, complex numbers, quaternions and octonions, with dimensions k = 1, 2, 4 and 8, respectively), a systematic procedure gives a 3-cocycle on the Poincare Lie superalgebra in dimensions k + 2 = 3, 4, 6 and 10. A related procedure gives a 4-cocycle on the Poincare Lie superalgebra in dimensions k + 3 = 4, 5, 7 and 11. The existence of these cocycles follows from certain spinor identities that hold only in these dimensions, and which are closely related to the existence of superstring and super-Yang-Mills theory in dimensions k + 2, and super-2-brane theory in dimensions k + 3. In general, an (n+1)-cocycle on a Lie superalgebra yields a 'Lie n-superalgebra': that is, roughly speaking, an n-term chain complex equipped with a bracket satisfying the axioms of a Lie superalgebra up to chain homotopy. We thus obtain Lie 2-superalgebras extending the Poincare superalgebra in dimensions 3, 4, 6, and 10, and Lie 3-superalgebras extending the Poincare superalgebra in dimensions 4, 5, 7 and 11. As shown in Sati, Schreiber and Stasheff's work on generalized connections valued in Lie n-superalgebras, Lie 2-superalgebra connections describe the parallel transport of strings, while Lie 3-superalgebra connections describe the parallel transport of 2-branes. Moreover, in the octonionic case, these connections concisely summarize the fields appearing in 10- and 11-dimensional supergravity. Generically, integrating a Lie n-superalgebra yields a Lie n-supergroup that is hugely infinite-dimensional. However, when the Lie n-superalgebra is obtained from an (n + 1)-cocycle on a nilpotent Lie superalgebra, there is a geometric procedure to integrate the cocycle to one on the corresponding nilpotent Lie supergroup. In general, a smooth (n+1)-cocycle on a supergroup yields a 'Lie n-supergroup': that is, a weak n-group internal to supermanifolds.
Using our geometric procedure to integrate the 3-cocycle in dimensions 3, 4, 6 and 10, we obtain a Lie 2-supergroup extending the Poincare supergroup in those dimensions, and similarly integrating the 4-cocycle in dimensions 4, 5, 7 and 11, we obtain a Lie 3-supergroup extending the Poincare supergroup in those dimensions.
Three-dimensional Stress Analysis Using the Boundary Element Method
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Banerjee, P. K.
1984-01-01
The boundary element method is to be extended (as part of the NASA Inelastic Analysis Methods program) to the three-dimensional stress analysis of gas turbine engine hot section components. The analytical basis of the method (as developed in elasticity) is outlined, its numerical implementation is summarized, and the approaches to be followed in extending the method to include inelastic material response are indicated.
Quad-Tree Visual-Calculus Analysis of Satellite Coverage
NASA Technical Reports Server (NTRS)
Lo, Martin W.; Hockney, George; Kwan, Bruce
2003-01-01
An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.
2004-08-01
ethnography, phenomenological study, grounded theory study, and content analysis. Outline: I. Qualitative Research Methods: A. The Historical Method; ... Phenomenological Study; Grounded Theory Study; Content Analysis. II. Quantitative Research Methods: A. The Historical Method; B. General Qualitative...
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for the definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
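The idea of propagating a parameter distribution, rather than an interval, through a response metric can be sketched with plain Monte Carlo (a generic illustration using the percent overshoot of a standard second-order system; the damping-ratio distribution is an assumption, not the missile model from the study):

```python
import math
import random
import statistics

def overshoot(zeta):
    """Step-response percent overshoot (as a fraction) of a standard
    underdamped second-order system with damping ratio zeta."""
    return math.exp(-math.pi * zeta / math.sqrt(1 - zeta * zeta))

def propagate(n=50000, seed=2):
    """Monte Carlo propagation of an assumed Normal(0.5, 0.05) damping
    ratio through the overshoot metric; returns (mean, std) of overshoot."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        z = rng.gauss(0.5, 0.05)
        if 0.0 < z < 1.0:  # keep the damping ratio physical
            samples.append(overshoot(z))
    return statistics.mean(samples), statistics.stdev(samples)
```

An interval method would report only the extreme overshoots over the damping-ratio bounds; the probabilistic propagation additionally tells you how likely each level of overshoot actually is.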
Systematic text condensation: a strategy for qualitative analysis.
Malterud, Kirsti
2012-12-01
To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and discuss this approach compared with related strategies. Giorgi's psychological phenomenological analysis is the point of departure and inspiration for systematic text condensation. The basic elements of Giorgi's method and the elaboration of these in systematic text condensation are presented, followed by a detailed description of procedures for analysis according to systematic text condensation. Finally, similarities and differences compared with other frequently applied methods for qualitative analysis are identified, as the foundation of a discussion of strengths and limitations of systematic text condensation. Systematic text condensation is a descriptive and explorative method for thematic cross-case analysis of different types of qualitative data, such as interview studies, observational studies, and analysis of written texts. The method represents a pragmatic approach, although inspired by phenomenological ideas, and various theoretical frameworks can be applied. The procedure consists of the following steps: 1) total impression - from chaos to themes; 2) identifying and sorting meaning units - from themes to codes; 3) condensation - from code to meaning; 4) synthesizing - from condensation to descriptions and concepts. Similarities and differences comparing systematic text condensation with other frequently applied qualitative methods regarding thematic analysis, theoretical methodological framework, analysis procedures, and taxonomy are discussed. Systematic text condensation is a strategy for analysis developed from traditions shared by most of the methods for analysis of qualitative data. The method offers the novice researcher a process of intersubjectivity, reflexivity, and feasibility, while maintaining a responsible level of methodological rigour.
Further evidence for the increased power of LOD scores compared with nonparametric methods.
Durner, M; Vieland, V J; Greenberg, D A
1999-01-01
In genetic analysis of diseases in which the underlying model is unknown, "model-free" methods such as affected sib pair (ASP) tests are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods will outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models in fact represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data by using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI), (b) ASP methods, and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the type I error increase due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.
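For concreteness, the two-point LOD score underlying these comparisons is the base-10 log of the likelihood ratio of a recombination fraction theta against free recombination (theta = 0.5); a standard textbook form is sketched here for phase-known recombinant counts (the counts in the test are invented):

```python
import math

def lod_score(recombinants, meioses, theta):
    """Two-point LOD score: log10 likelihood ratio of recombination
    fraction `theta` versus free recombination (theta = 0.5), for
    phase-known data with r recombinants in n informative meioses."""
    r, n = recombinants, meioses
    loglik = r * math.log10(theta) + (n - r) * math.log10(1 - theta)
    return loglik - n * math.log10(0.5)

def max_lod(recombinants, meioses):
    """Maximize the LOD over a grid of theta values in (0, 0.5);
    for phase-known data the MLE is simply r/n."""
    return max(
        (lod_score(recombinants, meioses, t / 1000), t / 1000)
        for t in range(1, 500)
    )  # returns (max LOD, theta_hat)
```

A maximum LOD of 3 or more is the conventional threshold for declaring linkage; performing the analysis twice under dominant and recessive MOIs, as in the record above, simply maximizes this score under two assumed models and corrects for the extra test.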
Static aeroelastic analysis and tailoring of a single-element racing car wing
NASA Astrophysics Data System (ADS)
Sadd, Christopher James
This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-Averaged Navier-Stokes CFD analysis method with a Finite Element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and Finite Element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts that the downforce-increasing wing has a downforce of C_L = -1.377 in comparison to C_L = -1.265 for the original wing. The computational analysis predicts that the drag-reducing wing has a drag of C_D = 0.115 in comparison to C_D = 0.143 for the original wing.
Comparative analysis of methods and sources of financing of the transport organizations activity
NASA Astrophysics Data System (ADS)
Gorshkov, Roman
2017-10-01
The article analyses methods of financing transport organizations under conditions of limited investment resources. A comparative analysis of these methods is carried out, and a classification of investments, together with the methods and sources of financial support for projects implemented to date, is presented. In order to select the optimal sources of financing for these projects, various methods of financial management and financial support for the activities of a transport organization were analyzed from the perspective of their advantages and limitations. The result of the study is a set of recommendations on selecting optimal sources and methods of financing for transport organizations.
Comparison of histomorphometrical data obtained with two different image analysis methods.
Ballerini, Lucia; Franke-Stenport, Victoria; Borgefors, Gunilla; Johansson, Carina B
2007-08-01
A common way to determine tissue acceptance of biomaterials is to perform histomorphometrical analysis on histologically stained sections from retrieved samples with surrounding tissue, using various methods. The time- and money-consuming methods and techniques used are often "in-house standards". We address light microscopic investigations of bone tissue reactions on un-decalcified cut and ground sections of threaded implants. In order to screen sections and generate results faster, the aim of this pilot project was to compare results generated with the in-house standard visual image analysis tool (i.e., quantifications and judgements done by the naked eye) with a custom-made automatic image analysis program. The histomorphometrical bone area measurements revealed no significant differences between the methods, but the results for the bony contacts varied significantly. The raw results were in relative agreement, i.e., the values from the two methods were proportional to each other: low bony contact values in the visual method corresponded to low values with the automatic method. With similar-resolution images and further improvements of the automatic method this difference should become insignificant. A great advantage of the new automatic image analysis method is that it saves time: analysis time can be significantly reduced.
Thermal image analysis using the serpentine method
NASA Astrophysics Data System (ADS)
Koprowski, Robert; Wilczyński, Sławomir
2018-03-01
Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known and new methods of image analysis and processing proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic, and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained from the watershed method and the hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis make it possible to determine the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information on thermal imaging and more. Since it allows for image segmentation and the designation of contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
Systems and methods for detection of blowout precursors in combustors
Lieuwen, Tim C.; Nair, Suraj
2006-08-15
The present invention comprises systems and methods for detecting flame blowout precursors in combustors. The blowout precursor detection system comprises a combustor, a pressure measuring device, and a blowout precursor detection unit. A combustion controller may also be used to control combustor parameters. The methods of the present invention comprise receiving pressure data measured by an acoustic pressure measuring device, performing one or a combination of spectral analysis, statistical analysis, and wavelet analysis on the received pressure data, and determining the existence of a blowout precursor based on such analyses. The spectral analysis, statistical analysis, and wavelet analysis further comprise their respective sub-methods to determine the existence of blowout precursors.
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method to solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
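A minimal numpy sketch of the load-dependent Ritz recurrence (a static solve against the load, then repeated K⁻¹M application with mass-orthogonalization) is shown below. The dense inverse is for illustration only; a real implementation would factorize K once and back-substitute.

```python
import numpy as np

def load_dependent_ritz(K, M, f, n_vec):
    """Load-dependent Ritz vectors: x1 = K^-1 f, then x_i = K^-1 M x_{i-1},
    each M-orthogonalized against earlier vectors and mass-normalized."""
    Kinv = np.linalg.inv(K)                  # factorize K in practice
    x = Kinv @ f
    vecs = [x / np.sqrt(x @ M @ x)]
    for _ in range(n_vec - 1):
        x = Kinv @ (M @ vecs[-1])
        for v in vecs:                       # Gram-Schmidt in the M inner product
            x = x - (v @ M @ x) * v
        vecs.append(x / np.sqrt(x @ M @ x))
    return np.column_stack(vecs)
```

Because the basis is seeded by the actual spatial load distribution, a handful of these vectors often captures the response that would otherwise require many exact eigenvectors.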
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
Huang, Yichun; Ding, Weiwei; Zhang, Zhuomin; Li, Gongke
2013-07-01
This paper summarizes the recent developments of the rapid detection methods for food security, such as sensors, optical techniques, portable spectral analysis, enzyme-linked immunosorbent assay, portable gas chromatograph, etc. Additionally, the applications of these rapid detection methods coupled with sample pretreatment techniques in real food security analysis are reviewed. The coupling technique has the potential to provide references to establish the selective, precise and quantitative rapid detection methods in food security analysis.
Analysis and application of Fourier transform spectroscopy in atmospheric remote sensing
NASA Technical Reports Server (NTRS)
Park, J. H.
1984-01-01
An analysis method for Fourier transform spectroscopy is summarized with applications to various types of distortion in atmospheric absorption spectra. This analysis method includes the fast Fourier transform method for simulating the interferometric spectrum and the nonlinear least-squares method for retrieving the information from a measured spectrum. It is shown that spectral distortions can be simulated quite well and that the correct information can be retrieved from a distorted spectrum by this analysis technique.
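The forward-simulation step can be sketched in a few lines: an interferogram synthesized as a sum of cosines is transformed back into a spectrum with an FFT. The line positions and amplitudes are arbitrary illustrative values.

```python
import numpy as np

# synthesize an interferogram for two spectral lines, then recover the
# spectrum by FFT (bin indices 50 and 120 are arbitrary line positions)
N = 1024
n = np.arange(N)                              # optical path difference samples
igram = np.cos(2 * np.pi * 50 * n / N) + 0.5 * np.cos(2 * np.pi * 120 * n / N)
spectrum = np.abs(np.fft.rfft(igram)) / (N / 2)
```

For the retrieval direction the paper pairs this with nonlinear least squares; in modern Python, scipy.optimize.least_squares would be the natural tool for fitting line parameters to a measured, possibly distorted, spectrum.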
Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P; Cowell, Andrew J; Gregory, Michelle L
Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture are described according to some aspects. In one aspect, a hypothesis analysis method includes providing a hypothesis, providing an indicator which at least one of supports and refutes the hypothesis, using the indicator, associating evidence with the hypothesis, weighting the association of the evidence with the hypothesis, and using the weighting, providing information regarding the accuracy of the hypothesis.
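The weighting idea can be caricatured in a few lines of Python. The score function, its [-1, 1] range, and the (weight, polarity) evidence representation are invented for illustration and are not the patented method.

```python
def hypothesis_score(evidence):
    """Weighted support for a hypothesis from (weight, polarity) pairs,
    polarity +1 if the indicator supports it and -1 if it refutes it.
    Returns a value in [-1, 1] (illustrative scoring, not the patent's)."""
    total = sum(w for w, _ in evidence)
    if total == 0:
        return 0.0
    return sum(w * s for w, s in evidence) / total

score = hypothesis_score([(2.0, +1), (1.0, -1)])   # net support
```

Strong supporting evidence pushes the score toward +1, refuting evidence toward -1, giving the analyst a rough accuracy indicator for the hypothesis.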
Rapid Method for Sodium Hydroxide/Sodium Peroxide Fusion ...
Technical Fact Sheet
Purpose: Qualitative analysis
Technique: Alpha spectrometry
Method Developed for: Plutonium-238 and plutonium-239 in water and air filters
Method Selected for: SAM lists this method as a pre-treatment technique supporting analysis of refractory radioisotopic forms of plutonium in drinking water and air filters using the following qualitative techniques:
• Rapid methods for acid or fusion digestion
• Rapid Radiochemical Method for Plutonium-238 and Plutonium-239/240 in Building Materials for Environmental Remediation Following Radiological Incidents
A summary of the analytical method will be posted to the SAM website to allow access to the method.
A Case Study of a Mixed Methods Study Engaged in Integrated Data Analysis
ERIC Educational Resources Information Center
Schiazza, Daniela Marie
2013-01-01
The nascent field of mixed methods research has yet to develop a cohesive framework of guidelines and procedures for mixed methods data analysis (Greene, 2008). To support the field's development of analytical frameworks, this case study reflects on the development and implementation of a mixed methods study engaged in integrated data analysis.…
Integrative Analysis of “-Omics” Data Using Penalty Functions
Zhao, Qing; Shi, Xingjie; Huang, Jian; Liu, Jin; Li, Yang; Ma, Shuangge
2014-01-01
In the analysis of omics data, integrative analysis provides an effective way of pooling information across multiple datasets or multiple correlated responses, and can be more effective than single-dataset (response) analysis. Multiple families of integrative analysis methods have been proposed in the literature. The current review focuses on the penalization methods. Special attention is paid to sparse meta-analysis methods that pool summary statistics across datasets, and integrative analysis methods that pool raw data across datasets. We discuss their formulation and rationale. Beyond “standard” penalized selection, we also review contrasted penalization and Laplacian penalization which accommodate finer data structures. The computational aspects, including computational algorithms and tuning parameter selection, are examined. This review concludes with possible limitations and extensions. PMID:25691921
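As a concrete instance of the penalization machinery the review surveys, here is a bare-bones lasso fit by cyclic coordinate descent on a single (stacked) dataset, minimizing (1/2n)‖y − Xw‖² + λ‖w‖₁. The data are synthetic, and this is the generic estimator, not any of the contrasted or Laplacian variants discussed in the review.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent; columns of X are assumed
    standardized so that X[:, j] @ X[:, j] / n == 1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # partial residual
            w[j] = soft_threshold(X[:, j] @ r / n, lam)
    return w
```

Integrative variants extend this by sharing the penalty (or the sparsity pattern) across datasets rather than fitting each one in isolation.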
Power flow as a complement to statistical energy analysis and finite element analysis
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1987-01-01
Present methods of analysis of the structural response and the structure-borne transmission of vibrational energy use either finite element (FE) techniques or statistical energy analysis (SEA) methods. FE methods are a very useful tool at low frequencies, where the number of resonances involved in the analysis is rather small. SEA methods, on the other hand, can predict with acceptable accuracy the response and energy transmission between coupled structures at relatively high frequencies, where the structural modal density is high and a statistical approach is the appropriate solution. In the mid-frequency range, a relatively large number of resonances exist, which makes the finite element method too costly, while SEA methods can only predict an average response level. In this mid-frequency range a possible alternative is to use power flow techniques, in which the input and flow of vibrational energy to excited and coupled structural components are expressed in terms of input and transfer mobilities. The power flow technique can be extended from low to high frequencies and can be checked against established FE models at low frequencies and SEA models at high frequencies to verify the method. This method of structural analysis using power flow and mobility methods, and its integration with SEA and FE analysis, is applied to the case of two thin beams joined together at right angles.
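The mobility formulation is easy to demonstrate on a single-degree-of-freedom example: the time-averaged power input by a harmonic force F is P = ½|F|²·Re(Y), with Y(ω) = iω/(k − mω² + iωc) the input mobility. The parameter values below are assumptions for illustration.

```python
import numpy as np

m, k, c = 1.0, 1.0e4, 2.0                 # mass, stiffness, damping (assumed)
F = 1.0                                   # harmonic force amplitude
w = np.linspace(1.0, 300.0, 3000)         # angular frequency grid [rad/s]
Y = 1j * w / (k - m * w**2 + 1j * c * w)  # input mobility v/F
P = 0.5 * F**2 * np.real(Y)               # time-averaged input power
```

The input power peaks at the resonance ω = √(k/m) = 100 rad/s, where Re(Y) = 1/c; for coupled structures the same idea applies with transfer mobilities in place of the input mobility.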
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method's fold change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed.
The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next-generation sequencing RNA-seq data analysis.
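The distribution-free flavor of this resampling idea is captured by the classic two-sample permutation test sketched below; it is a generic illustration of resampling-based testing, not the paper's actual Resampling-based empirical Bayes procedure.

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    """Permutation p-value for a difference in group means; makes no
    normality assumption, in the spirit of resampling-based testing."""
    rng = np.random.default_rng(seed)
    obs = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        d = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        count += d >= obs
    return (count + 1) / (n_perm + 1)       # add-one for a valid p-value
```

The empirical Bayes layer in the paper additionally shrinks the gene-wise variance estimates toward a common prior before resampling, stabilizing tests for genes with few replicates.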
Application of computational aerodynamics methods to the design and analysis of transport aircraft
NASA Technical Reports Server (NTRS)
Da Costa, A. L.
1978-01-01
The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft are established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. The capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method are demonstrated by comparison of computed results to experimental data.
Kittell, David E; Mares, Jesus O; Son, Steven F
2015-04-01
Two time-frequency analysis methods based on the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used to determine time-resolved detonation velocities with microwave interferometry (MI). The results were directly compared to well-established analysis techniques consisting of a peak-picking routine as well as a phase unwrapping method (i.e., quadrature analysis). The comparison is conducted on experimental data consisting of transient detonation phenomena observed in triaminotrinitrobenzene and ammonium nitrate-urea explosives, representing high and low quality MI signals, respectively. Time-frequency analysis proved much more capable of extracting useful and highly resolved velocity information from low quality signals than the phase unwrapping and peak-picking methods. Additionally, control of the time-frequency methods is mainly constrained to a single parameter which allows for a highly unbiased analysis method to extract velocity information. In contrast, the phase unwrapping technique introduces user based variability while the peak-picking technique does not achieve a highly resolved velocity result. Both STFT and CWT methods are proposed as improved additions to the analysis methods applied to MI detonation experiments, and may be useful in similar applications.
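A minimal STFT version of this analysis can be sketched directly: the dominant beat frequency f in each window maps to a velocity via v = f·λ/2, where λ is the microwave wavelength in the explosive. The sample rate, wavelength, and window length below are assumed values for illustration.

```python
import numpy as np

def stft_velocity(signal, fs, wavelength, nperseg=256):
    """Time-resolved velocity from a microwave interferometry signal:
    the dominant STFT frequency f maps to velocity v = f * wavelength / 2."""
    hop = nperseg // 2
    window = np.hanning(nperseg)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    times, vels = [], []
    for start in range(0, len(signal) - nperseg + 1, hop):
        seg = signal[start:start + nperseg] * window
        f_peak = freqs[np.argmax(np.abs(np.fft.rfft(seg)))]
        times.append((start + nperseg / 2) / fs)
        vels.append(f_peak * wavelength / 2.0)
    return np.array(times), np.array(vels)

fs, lam_mw = 1.0e6, 0.03                    # sample rate [Hz], wavelength [m] (assumed)
t = np.arange(4096) / fs
sig = np.cos(2 * np.pi * 50.0e3 * t)        # constant 50 kHz beat frequency
times, vels = stft_velocity(sig, fs, lam_mw)
```

Picking the spectrogram peak in each window is what makes the method robust to the low-quality signals where phase unwrapping fails; the single window-length parameter is the one control mentioned in the abstract.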
McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin
2007-05-09
A rapid and efficient oil-in-water microemulsion liquid chromatographic (MELC) method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.
Local Analysis of Shock Capturing Using Discontinuous Galerkin Methodology
NASA Technical Reports Server (NTRS)
Atkins, H. L.
1997-01-01
The compact form of the discontinuous Galerkin method allows for a detailed local analysis of the method in the neighborhood of the shock for a non-linear model problem. Insight gained from the analysis leads to new flux formulas that are stable and that preserve the compactness of the method. Although developed for a model equation, the flux formulas are applicable to systems such as the Euler equations. This article presents the analysis for methods with a degree up to 5. The analysis is accompanied by supporting numerical experiments using Burgers' equation and the Euler equations.
Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul
2016-01-01
Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared: saponification followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, saponification followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as extracting solvent. Qualitative and quantitative analysis of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavelength region of 1200-800 cm-1, with the best results obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same wavelength region showed that Bligh & Dyer was the most suitable extraction method, with the highest determination coefficient (R2) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) values.
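The PCA step in such a workflow reduces, in its simplest form, to an SVD of the mean-centered spectra; this sketch is generic and not the authors' exact preprocessing (which would also restrict the data to the 1200-800 cm-1 window).

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD; rows of X are spectra, columns are wavenumber channels."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components]
    explained_var = S**2 / (len(X) - 1)
    return scores, loadings, explained_var
```

Plotting the first two score columns separates lard-containing from lard-free samples when the spectral region is discriminating; the quantitative PLS step would then regress lard concentration on the same spectra.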
ERIC Educational Resources Information Center
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse
2015-01-01
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
Methods for Analysis and Simulation of Ballistic Impact
2017-04-01
ARL-RP-0597 ● Apr 2017 ● US Army Research Laboratory. Methods for Analysis and Simulation of Ballistic Impact, by John D Clayton, Weapons and Materials Research Directorate, ARL. … analytical, and numerical methods of ballistics research. Similar lengthy references dealing with pertinent aspects include [8, 9].
Coformer screening using thermal analysis based on binary phase diagrams.
Yamashita, Hiroyuki; Hirakura, Yutaka; Yuda, Masamichi; Terada, Katsuhide
2014-08-01
The advent of cocrystals has demonstrated a growing need for efficient and comprehensive coformer screening in the search for better development forms, including salt forms. Here, we investigated a coformer screening system for salts and cocrystals based on binary phase diagrams using thermal analysis and examined the effectiveness of the method. Indomethacin and tenoxicam were used as models of active pharmaceutical ingredients (APIs). Physical mixtures of an API and 42 kinds of coformers were analyzed using differential scanning calorimetry (DSC) and X-ray DSC. We also conducted coformer screening using a conventional slurry method and compared these results with those from the thermal analysis method and previous studies. Compared with the slurry method, the thermal analysis method was a high-performance screening system, particularly for APIs with low solubility and/or a propensity to form solvates. However, the method faces hurdles in screening coformers combined with an API when there is kinetic hindrance to salt or cocrystal formation during heating, or degradation near the metastable eutectic temperature. The thermal analysis and slurry methods are therefore considered complementary for coformer screening. The feasibility of the thermal analysis method in drug discovery practice is ensured given its small scale and high throughput.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, making it an ideal image source for change detection. Existing methods directly analyze all the regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel framework for saliency-guided change detection based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels into changed or unchanged classes. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
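The backbone of the pipeline (log-ratio difference image followed by two-class clustering) can be sketched in a few lines; this simplified version omits the saliency-guided candidate selection and the PCA feature step that the paper adds on top.

```python
import numpy as np

def change_map(img1, img2, n_iter=20):
    """Log-ratio difference image + 2-means clustering (simplified sketch
    of the pipeline; the paper adds saliency-guided candidate selection)."""
    di = np.abs(np.log((img1 + 1.0) / (img2 + 1.0)))   # log-ratio operator
    x = di.ravel()
    c = np.array([x.min(), x.max()])                   # initial cluster centers
    for _ in range(n_iter):                            # Lloyd's k-means, k=2
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    changed = c.argmax()                               # high log-ratio = changed
    return (labels == changed).reshape(di.shape)
```

The log-ratio operator compresses the multiplicative speckle of SAR intensities, which is why it is preferred over a plain difference before clustering.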
An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method
NASA Astrophysics Data System (ADS)
Tang, J.
2012-01-01
Multiple signal classification (MUSIC) algorithms are introduced for estimating the variation period of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution, using analog signals, are included. From the literature, we collected extensive observation data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 are obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. There exist two major periods, (3.33±0.08) years and (1.24±0.01) years, for all bands. The period estimates based on the MUSIC spectral analysis method are compared with those based on the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that requires only a small data length and can be used to detect the variation periods of weak signals.
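A minimal MUSIC pseudospectrum for a single real sinusoid (two complex exponentials, hence a 2-dimensional signal subspace) is sketched below; the covariance size, frequency grid, and test frequency are illustrative choices, and real light curves would additionally need handling of irregular sampling.

```python
import numpy as np

def music_spectrum(x, n_sinusoids, m, freqs):
    """MUSIC pseudospectrum: eigendecompose the sample covariance of
    length-m snapshots and scan steering vectors against the noise subspace."""
    X = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = (X.conj().T @ X) / X.shape[0]              # sample covariance
    w, V = np.linalg.eigh(R)                       # eigenvalues ascending
    En = V[:, : m - 2 * n_sinusoids]               # noise subspace (2 per real sinusoid)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))   # steering vectors
    return 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

n = np.arange(400)
x = np.cos(2 * np.pi * 0.12 * n)                   # normalized frequency 0.12
freqs = np.linspace(0.01, 0.49, 481)
P = music_spectrum(x, n_sinusoids=1, m=20, freqs=freqs)
```

Because the peak location is read off the noise-subspace orthogonality rather than an FFT bin, the resolution is not limited by the data length, which is the "super-resolution" property exploited in the abstract.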
Analysis of biomolecular solvation sites by 3D-RISM theory.
Sindhikara, Daniel J; Hirata, Fumio
2013-06-06
We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without either necessity of simulation or limits of solvent sampling. Our analysis of these distributions extracts highest likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease where excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active site solvation but also for virtual screening methods and experimental refinement.
The SNPforID Assay as a Supplementary Method in Kinship and Trace Analysis
Schwark, Thorsten; Meyer, Patrick; Harder, Melanie; Modrow, Jan-Hendrick; von Wurmb-Schwark, Nicole
2012-01-01
Objective Short tandem repeat (STR) analysis using commercial multiplex PCR kits is the method of choice for kinship testing and trace analysis. However, under certain circumstances (deficiency testing, mutations, minute DNA amounts), STRs alone may not suffice. Methods We present a 50-plex single nucleotide polymorphism (SNP) assay based on the SNPs chosen by the SNPforID consortium as an additional method for paternity and for trace analysis. The new assay was applied to selected routine paternity and trace cases from our laboratory. Results and Conclusions Our investigation shows that the new SNP multiplex assay is a valuable method to supplement STR analysis, and is a powerful means to solve complicated genetic analyses. PMID:22851934
The Use of Object-Oriented Analysis Methods in Surety Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.
1999-05-01
Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.
Sampling and analysis of hexavalent chromium during exposure to chromic acid mist and welding fumes.
Blomquist, G; Nilsson, C A; Nygren, O
1983-12-01
Sampling and analysis of hexavalent chromium during exposure to chromic acid mist and welding fumes. Scand J Work Environ Health 9 (1983) 489-495. In view of the serious health effects of hexavalent chromium, the problems involved in its sampling and analysis in workroom air have been the subject of much concern. In this paper, the stability problems arising from the reduction of hexavalent to trivalent chromium during sampling, sample storage, and analysis are discussed. Replacement of sulfuric acid by a sodium acetate buffer (pH 4) as a leaching solution prior to analysis with the diphenylcarbazide (DPC) method is suggested and is demonstrated to be necessary in order to avoid reduction. Field samples were taken from two different industrial processes: manual metal arc welding on stainless steel without shield gas, and chromium plating. A comparison was made of the DPC method, acidic dissolution with atomic absorption spectrophotometric (AAS) analysis, and the carbonate method. For chromic acid mist, the DPC method and AAS analysis were shown to give the same results. In the analysis of welding fumes, the modified DPC method gave the same results as the laborious and less sensitive carbonate method.
Gait Analysis Using Wearable Sensors
Tao, Weijun; Liu, Tao; Zheng, Rencheng; Feng, Hutian
2012-01-01
Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors are divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and analysis methods, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications. PMID:22438763
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with a boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbations of the design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
2016-01-01
Abstract Microarray gene expression data sets are jointly analyzed to increase statistical power. They could either be merged together or analyzed by meta-analysis. For a given ensemble of data sets, it cannot be foreseen which of these paradigms, merging or meta-analysis, works better. In this article, three joint analysis methods, Z-score normalization, ComBat and the inverse normal method (meta-analysis), were selected for survival prognosis and risk assessment of breast cancer patients. The methods were applied to eight microarray gene expression data sets, totaling 1324 patients with two clinical endpoints, overall survival and relapse-free survival. The performance derived from the joint analysis methods was evaluated using Cox regression for survival analysis and independent validation used as bias estimation. Overall, Z-score normalization had a better performance than ComBat and meta-analysis. A higher Area Under the Receiver Operating Characteristic curve and hazard ratio were also obtained when independent validation was used as bias estimation. With a lower time and memory complexity, Z-score normalization is a simple method for joint analysis of microarray gene expression data sets. The derived findings suggest further assessment of this method in future survival prediction and cancer classification applications. PMID:26504096
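Z-score normalization for merging expression data sets is simple enough to state exactly: standardize each gene within each data set, then concatenate samples. Alignment of gene rows across data sets is assumed here.

```python
import numpy as np

def zscore_merge(datasets):
    """Z-score normalization per gene within each dataset, then sample-wise
    merge. Each dataset: genes x samples array, gene rows aligned across sets."""
    normalized = []
    for X in datasets:
        mu = X.mean(axis=1, keepdims=True)
        sd = X.std(axis=1, keepdims=True)
        normalized.append((X - mu) / sd)
    return np.concatenate(normalized, axis=1)
```

Standardizing within each data set before merging removes platform- and batch-level location/scale differences, which is the property that lets the merged matrix feed directly into a downstream Cox regression.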
Meta-analysis and The Cochrane Collaboration: 20 years of the Cochrane Statistical Methods Group
2013-01-01
The Statistical Methods Group has played a pivotal role in The Cochrane Collaboration over the past 20 years. The Statistical Methods Group has determined the direction of statistical methods used within Cochrane reviews, developed guidance for these methods, provided training, and continued to discuss and consider new and controversial issues in meta-analysis. The contribution of Statistical Methods Group members to the meta-analysis literature has been extensive and has helped to shape the wider meta-analysis landscape. In this paper, marking the 20th anniversary of The Cochrane Collaboration, we reflect on the history of the Statistical Methods Group, beginning in 1993 with the identification of aspects of statistical synthesis for which consensus was lacking about the best approach. We highlight some landmark methodological developments that Statistical Methods Group members have contributed to in the field of meta-analysis. We discuss how the Group implements and disseminates statistical methods within The Cochrane Collaboration. Finally, we consider the importance of robust statistical methodology for Cochrane systematic reviews, note research gaps, and reflect on the challenges that the Statistical Methods Group faces in its future direction. PMID:24280020
Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultra pure NaCl and non-fluoridated commercial salt samples and Ca and Mg were added to fluoride samples in order to assess fluoride recoveries using modifications to the methods. Results Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and other ions in the salt. Conclusion Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217
Uncovering the requirements of cognitive work.
Roth, Emilie M
2008-06-01
In this article, the author provides an overview of cognitive analysis methods and how they can be used to inform system analysis and design. Human factors has seen a shift toward modeling and support of cognitively intensive work (e.g., military command and control, medical planning and decision making, supervisory control of automated systems). Cognitive task analysis and cognitive work analysis methods extend traditional task analysis techniques to uncover the knowledge and thought processes that underlie performance in cognitively complex settings. The author reviews the multidisciplinary roots of cognitive analysis and the variety of cognitive task analysis and cognitive work analysis methods that have emerged. Cognitive analysis methods have been used successfully to guide system design, as well as development of function allocation, team structure, and training, so as to enhance performance and reduce the potential for error. A comprehensive characterization of cognitive work requires two mutually informing analyses: (a) examination of domain characteristics and constraints that define cognitive requirements and challenges and (b) examination of practitioner knowledge and strategies that underlie both expert and error-vulnerable performance. A variety of specific methods can be adapted to achieve these aims within the pragmatic constraints of particular projects. Cognitive analysis methods can be used effectively to anticipate cognitive performance problems and specify ways to improve individual and team cognitive performance (be it through new forms of training, user interfaces, or decision aids).
Rapid Method for Sodium Hydroxide Fusion of Concrete and ...
Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Americium-241, plutonium-238, plutonium-239, radium-226, strontium-90, uranium-234, uranium-235 and uranium-238 in concrete and brick samples Method Selected for: SAM lists this method for qualitative analysis of americium-241, plutonium-238, plutonium-239, radium-226, strontium-90, uranium-234, uranium-235 and uranium-238 in concrete or brick building materials. A summary of the subject analytical method will be posted to the SAM website to allow access to the method.
On the Analysis of Output Information of S-tree Method
NASA Astrophysics Data System (ADS)
Bekaryan, Karen M.; Melkonyan, Anahit A.
2007-08-01
One of the most popular and effective methods of analyzing the hierarchical structure of N-body gravitating systems is the method of S-tree diagrams. Apart from many interesting peculiarities, the method, unfortunately, is not free of disadvantages, the most important of which is the extreme complexity of analyzing the output information. To solve this problem, a number of methods have been suggested. From our point of view, the most effective approach is to apply all these methods simultaneously. This allows one to obtain a more complete and objective «picture» of the final distribution.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
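The implicit/explicit trade-off discussed in these two reports can be illustrated on a scalar decay equation, a stand-in for a single thermal mode; the rate constant and step size below are illustrative choices, not values from the reports.

```python
def step_explicit(T, lam, dt):
    # forward Euler for dT/dt = -lam*T: stable only for dt < 2/lam
    return T * (1.0 - lam * dt)

def step_implicit(T, lam, dt):
    # backward Euler: unconditionally stable, at the cost of a solve per step
    return T / (1.0 + lam * dt)

lam = 10.0
dt = 0.3              # deliberately exceeds the explicit critical step 2/lam = 0.2
Te = Ti = 1.0         # same initial temperature for both schemes
for _ in range(50):
    Te = step_explicit(Te, lam, dt)
    Ti = step_implicit(Ti, lam, dt)
```

With this step size the explicit iterate blows up while the implicit one decays to zero, which is why mixed methods assign the stiff (thermal) partition to the implicit scheme and reserve the cheaper explicit scheme for the rest.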
Probabilistic structural analysis methods for space transportation propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.
1991-01-01
Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.
Application of the pulsed fast/thermal neutron method for soil elemental analysis
USDA-ARS?s Scientific Manuscript database
Soil science is a research field where physics concepts and experimental methods are widely used, particularly in agro-chemistry and soil elemental analysis. Different methods of analysis are currently available. The evolution of nuclear physics (methodology and instrumentation) combined with the ava...
SWECS tower dynamics analysis methods and results
NASA Technical Reports Server (NTRS)
Wright, A. D.; Sexton, J. H.; Butterfield, C. P.; Thresher, R. M.
1981-01-01
Several different tower dynamics analysis methods and computer codes were used to determine the natural frequencies and mode shapes of both guyed and freestanding wind turbine towers. These analysis methods are described and the results for two types of towers, a guyed tower and a freestanding tower, are shown. The advantages and disadvantages in the use of and the accuracy of each method are also described.
NASA Technical Reports Server (NTRS)
Darras, R.
1979-01-01
The various types of nuclear chemical analysis methods are discussed. The possibilities of analysis through activation and through direct observation of nuclear reactions are described. Such methods make it possible to analyze trace elements and impurities with selectivity, accuracy, and a high degree of sensitivity. Such methods are used in measuring major elements present in materials which are available for analysis only in small quantities. These methods are well suited to surface analyses and to the determination of concentration gradients, provided the nature and energy of the incident particles are chosen judiciously. Typical examples for steels, pure iron and refractory metals are illustrated.
A Century of Enzyme Kinetic Analysis, 1913 to 2013
Johnson, Kenneth A.
2013-01-01
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. PMID:23850893
Probabilistic boundary element method
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Raveendra, S. T.
1989-01-01
The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM) which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results with respect to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite element codes, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible, and therefore volume modeling is generally required.
What Touched Your Heart? Collaborative Story Analysis Emerging From an Apsáalooke Cultural Context.
Hallett, John; Held, Suzanne; McCormick, Alma Knows His Gun; Simonds, Vanessa; Real Bird, Sloane; Martin, Christine; Simpson, Colleen; Schure, Mark; Turnsplenty, Nicole; Trottier, Coleen
2017-07-01
Community-based participatory research and decolonizing research share some recommendations for best practices for conducting research. One commonality is partnering on all stages of research; co-developing methods of data analysis is one stage with a deficit of partnering examples. We present a novel community-based and developed method for analyzing qualitative data within an Indigenous health study and explain incompatibilities of existing methods for our purposes and community needs. We describe how we explored available literature, received counsel from community Elders and experts in the field, and collaboratively developed a data analysis method consonant with community values. The method of analysis, in which interview/story remained intact, team members received story, made meaning through discussion, and generated a conceptual framework to inform intervention development, is detailed. We offer the development process and method as an example for researchers working with communities who want to keep stories intact during qualitative data analysis.
Development of an SPE/CE method for analyzing HAAs
Zhang, L.; Capel, P.D.; Hozalski, R.M.
2007-01-01
The haloacetic acid (HAA) analysis methods approved by the US Environmental Protection Agency involve extraction and derivatization of HAAs (typically to their methyl ester form) and analysis by gas chromatography (GC) with electron capture detection (ECD). Concerns associated with these methods include the time and effort of the derivatization process, use of potentially hazardous chemicals or conditions during methylation, poor recoveries because of low extraction efficiencies for some HAAs or matrix effects from sulfate, and loss of tribromoacetic acid because of decarboxylation. The HAA analysis method introduced here uses solid-phase extraction (SPE) followed by capillary electrophoresis (CE) analysis. The method is accurate, reproducible, sensitive, relatively safe, and easy to perform, and avoids the use of large amounts of solvent for liquid-liquid extraction and the potential hazards and hassles of derivatization. The cost of analyzing HAAs using this method should be lower than the currently approved methods, and utilities with a GC/ECD can perform the analysis in-house.
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of design sensitivity analysis are used to carry out design optimization of a built-up structure.
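As a toy illustration of design sensitivity analysis (not the adjoint-variable, shape-domain formulation of the paper itself), the analytic sensitivity of a two-spring assembly's tip displacement with respect to one stiffness can be checked against a central finite difference; all values below are hypothetical.

```python
def tip_displacement(F, k1, k2):
    # two springs in series under load F: u = F * (1/k1 + 1/k2)
    return F * (1.0 / k1 + 1.0 / k2)

def sensitivity_analytic(F, k1):
    # direct differentiation of u with respect to k1: du/dk1 = -F / k1**2
    return -F / k1 ** 2

def sensitivity_fd(F, k1, k2, h=1e-6):
    # central finite difference as an independent check
    return (tip_displacement(F, k1 + h, k2)
            - tip_displacement(F, k1 - h, k2)) / (2.0 * h)

F, k1, k2 = 10.0, 100.0, 200.0
ds_exact = sensitivity_analytic(F, k1)
ds_fd = sensitivity_fd(F, k1, k2)
```

The analytic value (-0.001 here) matches the finite difference to many digits; the appeal of adjoint methods is that they deliver such derivatives for many design variables at roughly the cost of one extra analysis.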
Compendium of Methods for Applying Measured Data to Vibration and Acoustic Problems
1985-10-01
Topics include statistical energy analysis, finite element models, and transfer functions; the report summarizes procedures for the modal analysis method (Section 8.3) and for the statistical energy analysis method (Section 8.4).
Influence of ECG sampling rate in fetal heart rate variability analysis.
De Jonckheere, J; Garabedian, C; Charlier, P; Champion, C; Servan-Schreiber, E; Storme, L; Debarge, V; Jeanne, M; Logier, R
2017-07-01
Fetal hypoxia results in fetal blood acidosis (pH < 7.10). In such a situation, the fetus develops several adaptation mechanisms regulated by the autonomic nervous system. Many studies have demonstrated significant changes in heart rate variability in hypoxic fetuses, so fetal heart rate variability analysis could be valuable for predicting fetal hypoxia. Commonly used fetal heart rate variability analysis methods have been shown to be sensitive to the ECG signal sampling rate: a low sampling rate can introduce variability in heart beat detection, which alters the heart rate variability estimate. In this paper, we introduce an original fetal heart rate variability analysis method. We hypothesize that this method will be less sensitive to changes in ECG sampling frequency than common heart rate variability analysis methods. We then compared the results of this new heart rate variability analysis method at two different sampling frequencies (250 and 1000 Hz).
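The sampling-rate effect described above can be demonstrated with one common HRV statistic, RMSSD (the paper's own method is not specified here): snapping beat times to a coarser ECG sampling grid perturbs the RR intervals and hence the variability estimate. The beat times below are synthetic, built from a hypothetical RR pattern.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(len(rr_ms) - 1)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def quantize_beats(beat_times_s, fs_hz):
    """Snap beat times to the ECG sampling grid, then rebuild RR intervals in ms."""
    snapped = [round(t * fs_hz) / fs_hz for t in beat_times_s]
    return [(snapped[i + 1] - snapped[i]) * 1000.0
            for i in range(len(snapped) - 1)]

# synthetic beat times with small RR variability (hypothetical data)
beats = [0.0]
rr_pattern = [0.430, 0.435, 0.428, 0.433, 0.431, 0.437, 0.429, 0.434]
for rr in rr_pattern * 4:
    beats.append(beats[-1] + rr)

hrv_1000 = rmssd(quantize_beats(beats, 1000.0))  # 1 ms grid: RR preserved
hrv_250 = rmssd(quantize_beats(beats, 250.0))    # 4 ms grid: RR distorted
```

The 4 ms resolution at 250 Hz introduces per-beat timing errors of the same order as the true beat-to-beat differences, which is exactly the bias the paper's method aims to reduce.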
A scoping review of spatial cluster analysis techniques for point-event data.
Fritz, Charles E; Schuurman, Nadine; Robertson, Colin; Lear, Scott
2013-05-01
Spatial cluster analysis is a uniquely interdisciplinary endeavour, and so it is important to communicate and disseminate ideas, innovations, best practices and challenges across practitioners, applied epidemiology researchers and spatial statisticians. In this research we conducted a scoping review to systematically search peer-reviewed journal databases for research that has employed spatial cluster analysis methods on individual-level, address location, or x and y coordinate derived data. To illustrate the thematic issues raised by our results, methods were tested using a dataset where known clusters existed. Point pattern methods, spatial clustering and cluster detection tests, and a locally weighted spatial regression model were most commonly used for individual-level, address location data (n = 29). The spatial scan statistic was the most popular method for address location data (n = 19). Six themes were identified relating to the application of spatial cluster analysis methods and subsequent analyses, which we recommend researchers consider: exploratory analysis, visualization, spatial resolution, aetiology, scale and spatial weights. It is our intention that researchers seeking direction for using spatial cluster analysis methods consider the caveats and strengths of each approach, but also explore the numerous other methods available for this type of analysis. Applied spatial epidemiology researchers and practitioners should give special consideration to applying multiple tests to a dataset. Future research should focus on developing frameworks for selecting appropriate methods and the corresponding spatial weighting schemes.
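One classic point-pattern method of the kind surveyed above, the Clark-Evans nearest-neighbour ratio, can be sketched as follows; the point coordinates are synthetic, and edge effects are ignored for brevity.

```python
import math
import random

def clark_evans_ratio(points, area):
    """Clark-Evans ratio R = observed mean nearest-neighbour distance /
    expected distance under complete spatial randomness.
    R ~ 1: random; R < 1: clustered; R > 1: dispersed."""
    n = len(points)
    nn = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nn.append(d)
    observed = sum(nn) / n
    expected = 0.5 / math.sqrt(n / area)  # CSR expectation for density n/area
    return observed / expected

random.seed(1)
# clustered pattern: two tight clumps inside a unit square (synthetic data)
clustered = (
    [(0.2 + random.gauss(0, 0.01), 0.2 + random.gauss(0, 0.01)) for _ in range(25)]
    + [(0.8 + random.gauss(0, 0.01), 0.8 + random.gauss(0, 0.01)) for _ in range(25)]
)
R = clark_evans_ratio(clustered, area=1.0)
```

For this deliberately clumped pattern R comes out well below 1, flagging clustering; a production analysis would add edge correction and a significance test, as the reviewed literature discusses.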
An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks
NASA Astrophysics Data System (ADS)
Zhao, Peng-yuan; Huang, Xiao-ping
2018-03-01
Errors arise when the fatigue damage of details in liquid cargo tanks is calculated using the traditional spectral analysis method, which is based on a linear system, because of the nonlinear relationship between the dynamic stress and the ship acceleration. An improved spectral analysis method for the assessment of the fatigue damage in a detail of a liquid cargo tank is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves at different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of the harmonic components with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated based on the PSD of stress. Taking an independent tank in an LNG carrier as an example, the accuracy of the improved spectral analysis method is shown to be much better than that of the traditional spectral analysis method by comparing the calculated damage results with those of the time domain method. The proposed spectral analysis method is more accurate in calculating the fatigue damage in details of ship liquid cargo tanks.
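The amplitude-combination step described above can be sketched as follows: harmonic stress components at the same frequency are summed before the one-sided PSD is formed. The component list and frequency resolution are illustrative assumptions, not values from the paper.

```python
from collections import defaultdict

def stress_psd(components, df):
    """Combine harmonic stress components (freq, cos_amp, sin_amp) by frequency,
    then form a one-sided PSD estimate S(f) = A^2 / (2*df)."""
    acc = defaultdict(lambda: [0.0, 0.0])
    for f, a, b in components:
        acc[f][0] += a   # cosine amplitudes add at a common frequency
        acc[f][1] += b   # sine amplitudes add at a common frequency
    psd = {}
    for f, (a, b) in acc.items():
        amp2 = a * a + b * b          # squared amplitude of the combined harmonic
        psd[f] = amp2 / (2.0 * df)    # one-sided PSD over a bin of width df
    return psd

# two stress harmonics at 0.5 Hz (from two wave components) plus one at 1.0 Hz
components = [(0.5, 3.0, 0.0), (0.5, 1.0, 0.0), (1.0, 0.0, 2.0)]
psd = stress_psd(components, df=0.1)
```

Because the stress harmonics are combined before squaring, the nonlinear dependence of stress on each wave component is preserved in the resulting PSD, which a purely linear transfer-function approach would miss.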
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method and the other a symbolic analysis method, into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
Analysis of Endocrine Disrupting Pesticides by Capillary GC with Mass Spectrometric Detection
Matisová, Eva; Hrouzková, Svetlana
2012-01-01
Endocrine disrupting chemicals, among them many pesticides, alter the normal functioning of the endocrine system of both wildlife and humans at very low concentration levels. Therefore, the importance of method development for their analysis in food and the environment is increasing. This also covers contributions in the field of ultra-trace analysis of multicomponent mixtures of organic pollutants in complex matrices. Accordingly, conventional capillary gas chromatography (CGC) and fast CGC with mass spectrometric (MS) detection have acquired real importance in the analysis of endocrine disrupting pesticide (EDP) residues. This paper provides an overview of GC methods, including sample preparation steps, for analysis of EDPs in a variety of matrices at ultra-trace concentration levels. Emphasis is put on the separation method, the mode of MS detection and ionization, and the obtained limits of detection and quantification. Analysis time is one of the most important aspects that should be considered in the choice of analytical methods for routine analysis. Therefore, the benefits of the developed fast GC methods are important. PMID:23202677
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman; Jeffrey C. Joe
2005-09-01
An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.
NASA Technical Reports Server (NTRS)
Eggleston, John M; Mathews, Charles W
1954-01-01
In the process of analyzing the longitudinal frequency-response characteristics of aircraft, information on some of the methods of analysis has been obtained by the Langley Aeronautical Laboratory of the National Advisory Committee for Aeronautics. In the investigation of these methods, the practical applications and limitations were stressed. In general, the methods considered may be classed as: (1) analysis of sinusoidal response, (2) analysis of transient response as to harmonic content through determination of the Fourier integral by manual or machine methods, and (3) analysis of the transient through the use of least-squares solutions of the coefficients of an assumed equation for either the transient time response or frequency response (sometimes referred to as curve-fitting methods). (author)
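Method (2) above, harmonic analysis of a transient via the Fourier integral, can be sketched by numerically integrating a transient response against e^{-iωt}; here a first-order system with a known transfer function serves as the check. All constants are illustrative and not from the report.

```python
import cmath
import math

def fourier_integral(signal, dt, omega):
    """Rectangle-rule approximation of F(omega) = integral of x(t)*e^{-i*omega*t} dt."""
    return sum(x * cmath.exp(-1j * omega * k * dt)
               for k, x in enumerate(signal)) * dt

# first-order system dy/dt = -a*y + a*u; its impulse response is h(t) = a*e^{-a*t}
# and its exact frequency response is H(omega) = a / (a + i*omega)
a, dt, n = 2.0, 0.001, 20000          # 20 s record, long enough for the transient to die out
impulse_response = [a * math.exp(-a * k * dt) for k in range(n)]

omega = 3.0                            # rad/s, an arbitrary evaluation frequency
H = fourier_integral(impulse_response, dt, omega)
H_exact = a / (a + 1j * omega)
```

The numerically integrated transient reproduces the analytic frequency response to about the step-size error, illustrating why the transient must be recorded until it has effectively decayed before the integral is truncated.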
Emiralioğlu, Nagehan; Özçelik, Uğur; Yalçın, Ebru; Doğru, Deniz; Kiper, Nural
2016-01-01
Sweat test with the Gibson-Cooke (GC) method is the diagnostic gold standard for cystic fibrosis (CF). Recently, alternative methods have been introduced to simplify both the collection and analysis of sweat samples. Our aim was to compare sweat chloride values obtained by the GC method with other sweat test methods in patients diagnosed with CF and in patients in whom CF had been ruled out. We wanted to determine if the other sweat test methods could reliably identify patients with CF and differentiate them from healthy subjects. Chloride concentration was measured with the GC method, a chloride meter and a sweat test analysis system; conductivity was also determined with the sweat test analysis system. Forty-eight patients with CF and 82 patients without CF underwent the sweat test, showing median sweat chloride values of 98.9 mEq/L with the GC method, 101 mmol/L with the chloride meter, and 87.8 mmol/L with the sweat test analysis system. In the non-CF group, median sweat chloride values were 16.8 mEq/L with the GC method, 10.5 mmol/L with the chloride meter, and 15.6 mmol/L with the sweat test analysis system. The median conductivity value was 107.3 mmol/L in the CF group and 32.1 mmol/L in the non-CF group. There was a statistically significant, strong positive correlation between the GC method and the other sweat test methods (r=0.85) in all subjects. Sweat chloride concentration and conductivity by the other sweat test methods highly correlate with the GC method. We think that the other sweat test equipment can be used as reliably as the classic GC method to diagnose or exclude CF.
NASA Astrophysics Data System (ADS)
Yoo, Byungjin; Hirata, Katsuhiro; Oonishi, Atsurou
In this study, a coupled analysis method for flat panel speakers driven by a giant magnetostrictive material (GMM) based actuator was developed. The sound field produced by a flat panel speaker that is driven by a GMM actuator depends on the vibration of the flat panel; this vibration is a result of the magnetostriction property of the GMM. In this case, to predict the sound pressure level (SPL) in the audio-frequency range, it is necessary to take into account not only the magnetostriction property of the GMM but also the effect of eddy currents and the vibration characteristics of the actuator and the flat panel. In this paper, a coupled electromagnetic-structural-acoustic analysis method is presented; this method was developed using the finite element method (FEM). The analysis method is used to predict the performance of a flat panel speaker in the audio-frequency range. The validity of the analysis method is verified by comparison with measurements of a prototype speaker.
Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review
NASA Astrophysics Data System (ADS)
Liu, Wen-Lan; Xu, Yun-Dou; Yao, Jian-Tao; Zhao, Yong-Sheng
2017-11-01
The force analysis of overconstrained PMs is relatively complex and difficult, and the methods for it have long been a research hotspot. However, few papers analyze the characteristics and application scopes of the various methods, which makes it difficult for researchers and engineers to master and adopt them properly. A review of the methods for force analysis of both passive and active overconstrained PMs is presented. The existing force analysis methods for these two kinds of overconstrained PMs are classified according to their main ideas. Each category is briefly demonstrated and evaluated with respect to the amount of calculation, the comprehensiveness with which limb deformation is considered, and the existence of explicit expressions for the solutions, which provides an important reference for researchers and engineers seeking a suitable method. The similarities and differences between the statically indeterminate problem of passive overconstrained PMs and that of active overconstrained PMs are discussed, and a universal method for these two kinds of overconstrained PMs is pointed out. The existing deficiencies and development directions of force analysis methods for overconstrained systems are indicated based on this overview.
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Methods of analysis. 133.5 Section 133.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Methods of analysis. 133.5 Section 133.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Methods of analysis. 133.5 Section 133.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...
Chapter A5. Section 6.1.F. Wastewater, Pharmaceutical, and Antibiotic Compounds
Lewis, Michael Edward; Zaugg, Steven D.
2003-01-01
The USGS differentiates between samples collected for analysis of wastewater compounds and those collected for analysis of pharmaceutical and antibiotic compounds, based on the analytical schedule for the laboratory method. Currently, only the wastewater laboratory method for field-filtered samples (SH1433) is an approved, routine (production) method. (The unfiltered wastewater method LC 8033 also is available but requires a proposal for custom analysis.) At this time, analysis of samples for pharmaceutical and antibiotic compounds is confined to research studies and is available only on a custom basis.
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by the Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show the new method offers much better temporal and frequency resolutions.
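The Hilbert step of HSA can be sketched via the discrete analytic signal (the EMD sifting stage is omitted); a naive O(N^2) DFT is used here so the example stays dependency-free, and the tone parameters are illustrative.

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the DFT: zero the negative frequencies, double the
    positive ones, invert. Naive O(N^2) DFT for illustration; use an FFT in practice."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if 0 < k < n / 2:
            X[k] *= 2.0        # positive frequencies doubled
        elif k > n / 2:
            X[k] = 0.0         # negative frequencies removed
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# single tone with an integer number of cycles: the instantaneous amplitude
# |analytic signal| should be ~1 at every sample
n = 64
x = [math.cos(2 * math.pi * 4 * t / n) for t in range(n)]
envelope = [abs(z) for z in analytic_signal(x)]
```

From the analytic signal one reads off instantaneous amplitude and phase; differentiating the phase gives the instantaneous frequency that the Hilbert spectrum plots against time.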
Theoretical analysis of HVAC duct hanger systems
NASA Technical Reports Server (NTRS)
Miller, R. D.
1987-01-01
Several methods are presented which, together, may be used in the analysis of duct hanger systems over a wide range of frequencies. The finite element method (FEM) and component mode synthesis (CMS) method are used for low- to mid-frequency range computations and have been shown to yield reasonably close results. The statistical energy analysis (SEA) method yields predictions which agree with the CMS results for the 800 to 1000 Hz range provided that a sufficient number of modes participate. The CMS approach has been shown to yield valuable insight into the mid-frequency range of the analysis. It has been demonstrated that it is possible to conduct an analysis of a duct/hanger system in a cost-effective way for a wide frequency range, using several methods which overlap for several frequency bands.
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
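The probabilistic ingredient can be illustrated with a plain Monte Carlo estimate of failure probability for an assumed limit state g = R - S; this toy sketch stands in for, and is far cruder than, the advanced mean value and adaptive importance sampling algorithms in NESSUS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed limit state g = R - S: capacity minus load (illustrative, not NESSUS)
R = rng.normal(loc=10.0, scale=1.0, size=n)   # resistance
S = rng.normal(loc=6.0, scale=1.5, size=n)    # load effect
g = R - S

pf = float(np.mean(g < 0.0))                  # Monte Carlo failure probability
beta = float(np.mean(g) / np.std(g))          # crude reliability index
print(round(beta, 1))                          # ≈ 2.2 for these assumed distributions
```

Importance sampling and mean-value methods exist precisely because plain Monte Carlo needs very many samples when pf is small.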
Multivariate analysis in thoracic research.
Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego
2015-03-01
Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods were developed to analyze large databases and increasingly complex data. Since modeling is the best way to represent our knowledge of reality, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze the different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each should be employed according to the type of variables to be analyzed: dependence, interdependence and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that we can use.
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
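For intuition about the criterion being generalized, the classical two-block canonical correlations can be computed from a thin QR factorization of each centered block followed by an SVD of the product of the orthogonal factors. A numpy sketch on synthetic data, not the RGCCA algorithm itself:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations of two data blocks via thin QR + SVD."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
z = rng.normal(size=200)                        # shared latent variable
X = np.column_stack([z + 0.01 * rng.normal(size=200), rng.normal(size=200)])
Y = np.column_stack([z + 0.01 * rng.normal(size=200), rng.normal(size=200)])

rho = canonical_correlations(X, Y)
print(rho[0] > 0.99)                            # True: the shared latent is found
```

RGCCA extends this maximization to three or more blocks with regularization on the block weights.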
Preliminary analysis techniques for ring and stringer stiffened cylindrical shells
NASA Technical Reports Server (NTRS)
Graham, J.
1993-01-01
This report outlines methods of analysis for the buckling of thin-walled circumferentially and longitudinally stiffened cylindrical shells. Methods of analysis for the various failure modes are presented in one cohesive package. Where applicable, more than one method of analysis for a failure mode is presented along with standard practices. The results of this report are primarily intended for use in launch vehicle design in the elastic range. A Microsoft Excel worksheet with accompanying macros has been developed to automate the analysis procedures.
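For the unstiffened (monocoque) limit, the classical elastic critical stress of an axially compressed thin cylinder is sigma_cr = Et/(R*sqrt(3(1 - nu^2))), usually reduced by an empirical knockdown factor. A sketch with assumed values; the report's stiffened-shell procedures are considerably more involved:

```python
import math

def classical_axial_buckling_stress(E, t, R, nu=0.3):
    """Classical critical stress for an axially compressed thin monocoque cylinder."""
    return E * t / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))

# Illustrative aluminum shell; values are assumptions, not from the report
E = 70e9      # Pa
t = 0.002     # m
R = 1.0       # m
sigma_cr = classical_axial_buckling_stress(E, t, R)
knockdown = 0.65                             # assumed empirical correlation factor
print(round(knockdown * sigma_cr / 1e6, 1))  # 55.1 (MPa, illustrative)
```

Ring and stringer stiffening raises this allowable by adding smeared or discrete stiffness terms, which is where the worksheet's failure-mode checks come in.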
1985-03-01
distribution. Samples of suspended particulates will also be collected for later image and elemental analysis. Method of analysis for particle...will be flow injection analysis. This method will allow rapid, continuous analysis of seawater nutrients. Measurements will be made at one minute...5 m intervals) as well as from the underway pumping system. Method of pigment analysis for porphyrin and carotenoid pigments will be separation by
Methodology Series Module 10: Qualitative Health Research
Setia, Maninder Singh
2017-01-01
Although quantitative designs are commonly used in clinical research, some studies require qualitative methods. These designs are different from quantitative methods; thus, researchers should be aware of data collection methods and analyses for qualitative research. Qualitative methods are particularly useful to understand patient experiences with the treatment or new methods of management or to explore issues in detail. These methods are useful in social and behavioral research. In qualitative research, often, the main focus is to understand the issue in detail rather than generalizability; thus, the sampling methods commonly used are purposive sampling; quota sampling; and snowball sampling (for hard to reach groups). Data can be collected using in-depth interviews (IDIs) or focus group discussions (FGDs). IDI is a one-to-one interview with the participant. FGD is a method of group interview or discussion, in which more than one participant is interviewed at the same time and is usually led by a facilitator. The commonly used methods for data analysis are: thematic analysis; grounded theory analysis; and framework analysis. Qualitative data collection and analysis require special expertise. Hence, if the reader plans to conduct qualitative research, they should team up with a qualitative researcher. PMID:28794545
Atomistic cluster alignment method for local order mining in liquids and glasses
NASA Astrophysics Data System (ADS)
Fang, X. W.; Wang, C. Z.; Yao, Y. X.; Ding, Z. J.; Ho, K. M.
2010-11-01
An atomistic cluster alignment method is developed to identify and characterize the local atomic structural order in liquids and glasses. With the “order mining” idea for structurally disordered systems, the method can detect the presence of any type of local order in the system and can quantify the structural similarity between a given set of templates and the aligned clusters in a systematic and unbiased manner. Moreover, population analysis can also be carried out for various types of clusters in the system. The advantages of the method in comparison with other previously developed analysis methods are illustrated by performing the structural analysis for four prototype systems (i.e., pure Al, pure Zr, Zr35Cu65, and Zr36Ni64). The results show that the cluster alignment method can identify various types of short-range orders (SROs) in these systems correctly, while some of these SROs are difficult to capture by most of the currently available analysis methods (e.g., the Voronoi tessellation method). Such a full three-dimensional atomistic analysis method is generic and can be applied to describe the magnitude and nature of noncrystalline ordering in many disordered systems.
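The geometric core of cluster alignment can be illustrated with the Kabsch algorithm, which finds the rigid rotation minimizing RMSD between two centered point clouds; the paper's template scoring and population analysis build on top of such a step. A numpy sketch:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two point clouds after optimal rigid alignment (Kabsch)."""
    P = P - P.mean(axis=0)                     # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid improper reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

rng = np.random.default_rng(2)
P = rng.normal(size=(20, 3))                   # a hypothetical 20-atom cluster
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])      # rotated and translated copy
print(kabsch_rmsd(P, Q) < 1e-8)                # True: same cluster shape
```

In an order-mining setting the residual RMSD against each template quantifies how similar a sampled cluster is to that motif.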
Graf, Tyler N; Cech, Nadja B; Polyak, Stephen J; Oberlies, Nicholas H
2016-07-15
Validated methods are needed for the analysis of natural product secondary metabolites. These methods are particularly important to translate in vitro observations to in vivo studies. Herein, a method is reported for the analysis of the key secondary metabolites, a series of flavonolignans and a flavonoid, from an extract prepared from the seeds of milk thistle [Silybum marianum (L.) Gaertn. (Asteraceae)]. This report represents the first UHPLC MS-MS method validated for quantitative analysis of these compounds. The method takes advantage of the excellent resolution achievable with UHPLC to provide a complete analysis in less than 7 min. The method is validated using both UV and MS detectors, making it applicable in laboratories with different types of analytical instrumentation available. Lower limits of quantitation achieved with this method range from 0.0400 μM to 0.160 μM with UV and from 0.0800 μM to 0.160 μM with MS. The new method is employed to evaluate variability in constituent composition in various commercial S. marianum extracts, and to show that storage of the milk thistle compounds in DMSO leads to degradation. Copyright © 2016 Elsevier B.V. All rights reserved.
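Lower limits of detection and quantitation are commonly estimated from a calibration curve as 3.3·sigma/S and 10·sigma/S (ICH-style), where sigma is the residual standard deviation and S the slope. A sketch with illustrative numbers, not data from the milk thistle study:

```python
import numpy as np

# Hypothetical calibration data: concentration (uM) vs. detector response
conc = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
resp = np.array([0.52, 1.01, 2.05, 4.98, 10.1, 19.9])

slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sigma = np.std(resid, ddof=2)        # residual standard deviation of the fit

lod = 3.3 * sigma / slope            # ICH-style limit of detection
loq = 10.0 * sigma / slope           # ICH-style limit of quantitation
print(lod < loq)                     # True by construction
```

Signal-to-noise approaches (e.g., S/N = 10 for LOQ) are an alternative convention; the paper's validated limits would come from its own acceptance criteria.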
Hauber, A Brett; González, Juan Marcos; Groothuis-Oudshoorn, Catharina G M; Prior, Thomas; Marshall, Deborah A; Cunningham, Charles; IJzerman, Maarten J; Bridges, John F P
2016-06-01
Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has aided in improving the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to conducting statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format-conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
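The conditional logit model mentioned above assigns each alternative in a choice set the probability exp(x_i'beta) / sum_j exp(x_j'beta). A minimal sketch with hypothetical attributes and preference weights; the estimation step, fitting beta by maximum likelihood, is omitted:

```python
import numpy as np

def choice_probabilities(X, beta):
    """Conditional-logit choice probabilities for one choice set."""
    v = X @ beta                      # systematic utility of each alternative
    v = v - v.max()                   # stabilize the exponentials
    ev = np.exp(v)
    return ev / ev.sum()

# Hypothetical choice set: 3 alternatives with attributes [cost, efficacy]
X = np.array([[10.0, 0.6],
              [20.0, 0.9],
              [ 0.0, 0.0]])           # opt-out alternative
beta = np.array([-0.1, 3.0])          # assumed preference weights
p = choice_probabilities(X, beta)
print(round(float(p.sum()), 6))       # prints 1.0
```

The alternative analysis methods the report surveys (mixed logit, latent class) relax conditional logit's assumption that preferences are homogeneous across respondents.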
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired by Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). Firstly, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods based on geometric features and morphological features were proposed. The paper puts forward a retinal abnormality grading decision-making method, which was used in the actual analysis and evaluation of multiple OCT images, and shows the detailed analysis process for four retinal OCT images with different degrees of abnormality. The final grading results verified that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status. This paper focuses on an automatic retinal status analysis method based on feature extraction and quantitative grading in OCT images; the method can obtain the parameters and features associated with retinal morphology. Quantitative analysis and evaluation of these features, combined with the reference model, can realize abnormality judgment for the target image and provide a reference for disease diagnosis.
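The reported sensitivity and specificity follow directly from a confusion matrix: TP/(TP+FN) and TN/(TN+FP). A sketch on toy labels, not the paper's 150 test images:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary labels (1 = abnormal)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)      # abnormal correctly flagged
    fn = np.sum(y_true & ~y_pred)     # abnormal missed
    tn = np.sum(~y_true & ~y_pred)    # normal correctly passed
    fp = np.sum(~y_true & y_pred)     # normal falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels for illustration only
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(float(sens), 2), round(float(spec), 2))  # prints 0.8 0.8
```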
Analysis of Urinary Metabolites of Nerve and Blister Chemical Warfare Agents
2014-08-01
of CWAs. The analysis methods use UHPLC-MS/MS in Multiple Reaction Monitoring (MRM) mode to enhance the selectivity and sensitivity of the method...Chromatography Mass Spectrometry LOD Limit Of Detection LOQ Limit of Quantitation MRM Multiple Reaction Monitoring MSMS Tandem mass...urine [1]. Those analysis methods use UHPLC-MS/MS in Multiple Reaction Monitoring (MRM) mode to enhance the selectivity and sensitivity of the method
Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2015-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.
Variability of Currents in Great South Channel and Over Georges Bank: Observation and Modeling
1992-06-01
Rizzoli motivated me to study the driving mechanism of stratified tidal rectification using diagnostic analysis methods. Conversations with Glen...drifter trajectories in the 1988 and 1989 surveys give further encouragement that the analysis method yields an accurate picture of the nontidal flow...harmonic truncation method. Scaling analysis argues that this method is not appropriate for a step topography because it is valid only when the
Nonlinear multivariate and time series analysis by neural network methods
NASA Astrophysics Data System (ADS)
Hsieh, William W.
2004-03-01
Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño-Southern Oscillation and the stratospheric quasi-biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real-world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.
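The linear baseline of this hierarchy is easy to state: PCA projects centered data onto the leading right singular vectors, and NLPCA replaces this linear map with a neural network. A numpy sketch of the linear case on synthetic data with one dominant oscillatory mode:

```python
import numpy as np

def pca(X, k):
    """Linear PCA via SVD; NLPCA would replace this linear map with a neural net."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                     # principal component time series
    var_explained = s[:k] ** 2 / np.sum(s ** 2)
    return scores, var_explained

rng = np.random.default_rng(3)
t = np.linspace(0, 20 * np.pi, 500)
signal = np.sin(t)                              # one shared oscillatory mode
X = np.outer(signal, [2.0, 1.0, 0.5]) + 0.1 * rng.normal(size=(500, 3))

scores, ve = pca(X, 2)
print(ve[0] > 0.9)                              # True: one mode dominates
```

The scatter the abstract warns about appears when the underlying oscillation is nonlinear: linear PCA then splits it across several modes, which nonlinear variants can merge back into one.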
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338
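One simple instance of a noise-based analytical threshold is the mean plus k standard deviations of background read counts from negative controls; the paper's actual noise model is more detailed. A sketch with an assumed Poisson background:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical background (noise) read counts from negative-control runs
noise_reads = rng.poisson(lam=8.0, size=500)

mu = noise_reads.mean()
sd = noise_reads.std(ddof=1)
k = 3.29                               # assumed one-sided coverage factor
threshold = mu + k * sd                # reads below this are treated as noise

signal_reads = np.array([4, 7, 120, 250, 9, 310])
called = signal_reads > threshold      # which observations exceed the threshold
print(called.tolist())                 # [False, False, True, True, False, True]
```

The coverage factor k trades false-positive calls against sensitivity, which is exactly the limitation-defining decision the abstract describes.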
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Advancing our thinking in presence-only and used-available analysis.
Warton, David; Aarts, Geert
2013-11-01
1. The problems of analysing used-available data and presence-only data are equivalent, and this paper uses this equivalence as a platform for exploring opportunities for advancing analysis methodology. 2. We suggest some potential methodological advances in used-available analysis, made possible via lessons learnt in the presence-only literature, for example, using modern methods to improve predictive performance. We also consider the converse - potential advances in presence-only analysis inspired by used-available methodology. 3. Notwithstanding these potential advances in methodology, perhaps a greater opportunity is in advancing our thinking about how to apply a given method to a particular data set. 4. It is shown by example that strikingly different results can be achieved for a single data set by applying a given method of analysis in different ways - hence having chosen a method of analysis, the next step of working out how to apply it is critical to performance. 5. We review some key issues to consider in deciding how to apply an analysis method: apply the method in a manner that reflects the study design; consider data properties; and use diagnostic tools to assess how reasonable a given analysis is for the data at hand. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
Integrative Analysis of Prognosis Data on Multiple Cancer Subtypes
Liu, Jin; Huang, Jian; Zhang, Yawei; Lan, Qing; Rothman, Nathaniel; Zheng, Tongzhang; Ma, Shuangge
2014-01-01
In cancer research, profiling studies have been extensively conducted, searching for genes/SNPs associated with prognosis. Cancer is diverse. Examining the similarity and difference in the genetic basis of multiple subtypes of the same cancer can lead to a better understanding of their connections and distinctions. Classic meta-analysis methods analyze each subtype separately and then compare analysis results across subtypes. Integrative analysis methods, in contrast, analyze the raw data on multiple subtypes simultaneously and can outperform meta-analysis methods. In this study, prognosis data on multiple subtypes of the same cancer are analyzed. An AFT (accelerated failure time) model is adopted to describe survival. The genetic basis of multiple subtypes is described using the heterogeneity model, which allows a gene/SNP to be associated with prognosis of some subtypes but not others. A compound penalization method is developed to identify genes that contain important SNPs associated with prognosis. The proposed method has an intuitive formulation and is realized using an iterative algorithm. Asymptotic properties are rigorously established. Simulation shows that the proposed method has satisfactory performance and outperforms a penalization-based meta-analysis method and a regularized thresholding method. An NHL (non-Hodgkin lymphoma) prognosis study with SNP measurements is analyzed. Genes associated with the three major subtypes, namely DLBCL, FL, and CLL/SLL, are identified. The proposed method identifies genes that are different from alternatives and have important implications and satisfactory prediction performance. PMID:24766212
Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W
1988-08-01
The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 +/- 2.7 years) and 9 healthy children (mean age = 8.9 +/- 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined by intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
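The contrast between analytic gradients and brute-force finite differences can be shown on a stand-in objective (not the scramjet thrust functional): central differences recover the analytic gradient to roughly O(h^2), but require two flow solutions per design variable, which is what the quasi-analytical approach avoids.

```python
import numpy as np

def f(x):
    """Stand-in objective function (illustrative, not the CFD functional)."""
    return x[0] ** 2 + np.sin(x[1]) * x[0]

def grad_analytic(x):
    """Hand-derived gradient of f."""
    return np.array([2 * x[0] + np.sin(x[1]), x[0] * np.cos(x[1])])

def grad_fd(x, h=1e-6):
    """Brute-force central-difference gradient: two evaluations per variable."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.5, 0.8])
err = np.max(np.abs(grad_analytic(x) - grad_fd(x)))
print(err < 1e-6)                              # True
```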
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.
Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin
2018-05-01
Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using three methods, simultaneously and independently by two raters. The analysis of interrater reliability and viability was performed using the intraclass correlation coefficient and Bland Altman Plot Analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis were 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland Altman Plot Analysis showed agreement between the comparisons of the methods and the presence of systematic bias was not observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently from the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flaps' viability.
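Bland-Altman analysis summarizes agreement by the mean difference (bias) and 95% limits of agreement, bias ± 1.96 times the standard deviation of the differences. A sketch with hypothetical rater scores, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two raters or methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = float(diff.mean())                  # systematic offset between raters
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, bias - half_width, bias + half_width

# Hypothetical % viable tissue scored by two raters (illustrative values)
rater1 = [62.1, 70.4, 55.0, 80.2, 66.3, 73.8]
rater2 = [61.8, 71.0, 54.6, 79.9, 66.9, 73.5]
bias, lower, upper = bland_altman(rater1, rater2)
print(lower < bias < upper)                    # True
```

A bias near zero with narrow limits, and no trend of the differences against the means, is the "absence of systematic bias" the abstract refers to.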
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method, and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had good recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore, the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However, the classic method would be the method of choice for the determination of the different lipid classes.
This presentation describes EPA Method 537 for the analysis of 14 perfluorinated alkyl acids in drinking water as well as the challenges associated with preparing a laboratory for analysis using Method 537.
RESEARCH METHOD FOR SAMPLING AND ANALYSIS OF FIBROUS AMPHIBOLE IN VERMICULITE ATTIC INSULATION
NRMRL hosted a meeting on July 17-18, 2003 entitled, "Analytical Method for Bulk Analysis of Vermiculite." The purpose of this effort was to produce an interim research method for use by U.S. EPA's Office of Research and Development (ORD) for the analysis of bulk vermiculite for...
Turbulence excited frequency domain damping measurement and truncation effects
NASA Technical Reports Server (NTRS)
Soovere, J.
1976-01-01
Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.
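A common frequency-domain damping estimate is the half-power bandwidth method: for a lightly damped mode, zeta is approximately (f2 - f1)/(2 fn) at the half-power (-3 dB) points. A sketch on a synthetic single-degree-of-freedom FRF, not the Laplace/Fourier procedures of the report:

```python
import numpy as np

zeta_true, fn = 0.02, 10.0                       # assumed modal damping and frequency
f = np.linspace(8.0, 12.0, 400001)
r = f / fn
mag = 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta_true * r) ** 2)  # SDOF FRF magnitude

peak = mag.max()
band = np.where(mag >= peak / np.sqrt(2.0))[0]   # indices above the half-power level
f1, f2 = f[band[0]], f[band[-1]]
zeta_est = (f2 - f1) / (f1 + f2)                 # equals (f2 - f1)/(2 fn) since f1 + f2 ≈ 2 fn
print(abs(zeta_est - zeta_true) < 0.002)         # True
```

Truncating the record in time smears this bandwidth, which is one way truncation inflates measured damping as the abstract discusses.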
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2013 CFR
2013-04-01
21 Food and Drugs 2 2013-04-01 Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
A Practical Method of Policy Analysis by Simulating Policy Options
ERIC Educational Resources Information Center
Phelps, James L.
2011-01-01
This article focuses on a method of policy analysis that has evolved from the previous articles in this issue. The first section, "Toward a Theory of Educational Production," identifies concepts from science and achievement production to be incorporated into this policy analysis method. Building on Kuhn's (1970) discussion regarding paradigms, the…
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessmen...
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
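As a rough sketch of the idea (not the PSAM codes themselves), a cumulative probability of exceedance can be estimated by Monte Carlo sampling of stochastic loads and geometric tolerances; the load and tolerance distributions below are invented for illustration.

```python
import bisect
import random

random.seed(42)

def simulate_max_stress(n_draws=20000):
    """Monte Carlo draws of a toy response variable: stress = load / area,
    with a stochastic load and a toleranced cross-sectional area."""
    draws = []
    for _ in range(n_draws):
        load = random.gauss(100.0, 15.0)   # applied load (kN), invented distribution
        area = random.gauss(2.0, 0.05)     # cross-section (cm^2) with tolerance
        draws.append(load / area)
    return sorted(draws)

stresses = simulate_max_stress()

def prob_exceedance(threshold):
    """Empirical P(stress > threshold): one point of the exceedance distribution."""
    return 1.0 - bisect.bisect_right(stresses, threshold) / len(stresses)

p_60 = prob_exceedance(60.0)
```

Sweeping the threshold traces out the exceedance curve; repeating the whole simulation gives the confidence bounds mentioned in the abstract.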
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.
1988-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.
1990-01-01
An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.
Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco
2018-01-01
One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine the small amounts of DNA (around 100 pg) that may be present in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool makes it possible to identify the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated critical control points of the method procedure, we conclude that the analytical methodology with the lowest risk of performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events caused by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization.
However, that technique is challenging to implement in a quality control laboratory: it is laborious, time-consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and to assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool makes it possible to identify the method procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology for residual cellular DNA analysis. © PDA, Inc. 2018.
Heading in the right direction: thermodynamics-based network analysis and pathway engineering.
Ataman, Meric; Hatzimanikatis, Vassily
2015-12-01
Thermodynamics-based network analysis, through the introduction of thermodynamic constraints into metabolic models, allows a deeper analysis of metabolism and guides pathway engineering. The number and the areas of application of thermodynamics-based network analysis methods have been increasing in the last ten years. We review recent applications of these methods, identify the areas to which such analysis can contribute significantly, and outline the needs for future development. We find that organisms with multiple compartments and extremophiles present challenges for modeling and thermodynamics-based flux analysis. The evolution of current and new methods must also address the issues of multiple alternative flux directionalities and the uncertainties and partial information from analytical methods. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Szafranko, E.
2017-08-01
When planning a building structure, dilemmas arise as to which construction and material solutions are feasible; the decisions are not always obvious. A procedure for selecting the variant that will best satisfy the expectations of the investor and future users of a structure must be founded on mathematical methods. The following deserve special attention: the MCE methods, Hierarchical Analysis Methods and Weighting Methods. Another interesting solution, particularly useful for evaluations that take into account negative values, is the Indicator Method. MCE methods are relatively popular owing to the simplicity of the calculations and the ease of interpreting the results. Once the input data have been prepared properly, these methods enable the user to compare the variants on the same level. In a situation where an analysis involves a large number of data, it is more convenient to divide them into groups according to main criteria and subcriteria. This option is provided by hierarchical analysis methods, which are based on ordered sets of criteria evaluated in groups. In some cases, this approach yields results that are superior and easier to read. If an analysis encompasses direct and indirect effects, an Indicator Method is a justified choice for selecting the right solution. The Indicator Method is different in character and relies on weights and assessments of effects; it allows the user to evaluate the analyzed variants effectively. This article explains the methodology of conducting a multi-criteria analysis, showing its advantages and disadvantages. An example of calculations contained in the article shows what problems can be encountered when assessing various solutions regarding building materials and structures. For comparison, an analysis based on graphical methods developed by the author is presented.
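A weighted-sum scoring of the kind used in simple MCE methods can be sketched as follows; the criteria, weights, and variant scores are hypothetical.

```python
# Hypothetical decision criteria with weights summing to 1
criteria_weights = {"cost": 0.4, "durability": 0.35, "aesthetics": 0.25}

# Scores of three invented material/structural variants on a common 1-10 scale
variants = {
    "steel frame":  {"cost": 6, "durability": 9, "aesthetics": 7},
    "timber frame": {"cost": 8, "durability": 6, "aesthetics": 9},
    "masonry":      {"cost": 5, "durability": 8, "aesthetics": 6},
}

def weighted_score(scores):
    """Aggregate a variant's criterion scores into one weighted sum."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Variants ranked from best to worst aggregate score
ranking = sorted(variants, key=lambda v: weighted_score(variants[v]), reverse=True)
```

Hierarchical analysis methods differ mainly in how the weights are derived (from grouped criteria and subcriteria) rather than in this final aggregation step.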
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
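The two-dimensional Monte Carlo idea (separate sampling of shared and unshared dose-estimation errors) can be sketched as follows. The lognormal error magnitudes and dose values are invented for illustration, not taken from the Kazakhstan cohort.

```python
import math
import random

random.seed(7)

def dose_vector_realizations(best_estimates, n_real=500,
                             shared_gsd=1.3, unshared_gsd=1.2):
    """Generate multiple realizations of a dose vector: each realization gets
    one shared multiplicative error common to all subjects (e.g. a source-term
    bias) plus an independent unshared error per subject."""
    realizations = []
    for _ in range(n_real):
        shared = random.lognormvariate(0.0, math.log(shared_gsd))
        realizations.append(
            [d * shared * random.lognormvariate(0.0, math.log(unshared_gsd))
             for d in best_estimates])
    return realizations

dose_vectors = dose_vector_realizations([10.0, 25.0, 40.0])
```

A risk model would then be fitted against many such vectors rather than against a single best-estimate dose per person, which is what lets the Bayesian model averaging account for the error structure.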
Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously and we also examine a challenging new dataset that is highly heterogenous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin
2013-01-01
Background: The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal parameter outcomes that exhibit self-similarity over long time scales, following a power law in the frequency spectrum S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques, with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of the range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: Our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method provides reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
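The spectral index β is commonly estimated as the negative slope of log power versus log frequency; a minimal sketch of that basic estimator (the averaged wavelet coefficient method favored in the paper is more involved), checked on an ideal power-law spectrum:

```python
import math

def spectral_beta(freqs, powers):
    """Least-squares slope of log(power) vs log(frequency); for an
    S(f) ~ 1/f^beta spectrum this recovers beta (sign flipped)."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in powers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Ideal 1/f^0.8 spectrum: the estimator should recover beta = 0.8 exactly.
freqs = [k / 256 for k in range(1, 129)]
powers = [f ** -0.8 for f in freqs]
beta = spectral_beta(freqs, powers)
```

With real, finite, noisy periodograms the fitted slope has exactly the estimation-variance problems the paper quantifies, which motivates the wavelet-based alternative.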
Mubayi, Anuj; Castillo-Chavez, Carlos
2018-01-01
Background: When attempting to statistically distinguish between a null and an alternative hypothesis, many researchers in the life and social sciences turn to binned statistical analysis methods, or methods based simply on the moments of a distribution (such as the mean and variance). These methods have the advantage of simplicity of implementation and of explanation. However, when null and alternative hypotheses manifest themselves in subtle differences in patterns in the data, binned analysis methods may be insensitive to these differences, and researchers may erroneously fail to reject the null hypothesis when more sensitive statistical analysis methods would produce a different result. Here, with a focus on two recent conflicting studies of contagion in mass killings as instructive examples, we discuss how the use of unbinned likelihood methods makes optimal use of the information in the data; a fact that has long been known in statistical theory, but perhaps is not as widely appreciated among general researchers in the life and social sciences. Methods: In 2015, Towers et al. published a paper that quantified the long-suspected contagion effect in mass killings. However, in 2017, Lankford & Tomek subsequently published a paper, based upon the same data, that claimed to contradict the results of the earlier study. The former used unbinned likelihood methods, and the latter used binned methods and comparison of distribution moments. Using these analyses, we also discuss how visualization of the data can aid in determining the most appropriate statistical analysis methods to distinguish between a null and an alternative hypothesis.
We also discuss the importance of assessing the robustness of analysis results to the methodological assumptions made (for example, arbitrary choices of the number of bins and bin widths when using binned methods); an issue that is widely overlooked in the literature but is critical to analysis reproducibility and robustness. Conclusions: When an analysis cannot distinguish between a null and an alternative hypothesis, care must be taken to ensure that the analysis methodology itself maximizes the use of the information in the data that can distinguish between the two hypotheses. The use of binned methods by Lankford & Tomek (2017), who examined how many mass killings fell within a 14-day window of a previous mass killing, substantially reduced the sensitivity of their analysis to contagion effects. The unbinned likelihood methods used by Towers et al. (2015) did not suffer from this problem. While a binned analysis might be favorable for simplicity and clarity of presentation, unbinned likelihood methods are preferable when effects might be somewhat subtle. PMID:29742115
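The information loss from binning can be illustrated with a toy example: for exponentially distributed waiting times between events, the unbinned maximum likelihood estimate uses every observed time exactly, while a binned analogue reduces each observation to whether it fell within a 14-day window. The simulation parameters are invented.

```python
import math
import random

random.seed(1)
true_rate = 0.05                      # events per day (invented)
waits = [random.expovariate(true_rate) for _ in range(2000)]

# Unbinned maximum likelihood for an exponential rate: n / sum of waits.
mle_rate = len(waits) / sum(waits)

# Binned analogue of "within 14 days of the previous event": only the
# fraction of waits below 14 days enters the estimate, via
# P(wait <= 14) = 1 - exp(-14 * rate).
p14 = sum(w <= 14.0 for w in waits) / len(waits)
binned_rate = -math.log(1.0 - p14) / 14.0
```

Both estimators are consistent here, but the binned one discards all within-window detail and is generally less efficient, which is the core of the argument above.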
Advanced stress analysis methods applicable to turbine engine structures
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1985-01-01
Advanced stress analysis methods applicable to turbine engine structures are investigated. The construction of special elements containing traction-free circular boundaries is investigated. New versions of the mixed variational principle and of hybrid stress elements are formulated. A method is established for the suppression of kinematic deformation modes. SemiLoof plate and shell elements are constructed by the assumed-stress hybrid method. An elastic-plastic analysis is conducted by viscoplasticity theory using the mechanical subelement model.
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories, in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
Evaluation of a cost-effective loads approach. [shock spectra/impedance method for Viking Orbiter
NASA Technical Reports Server (NTRS)
Garba, J. A.; Wada, B. K.; Bamford, R.; Trubert, M. R.
1976-01-01
A shock spectra/impedance method for loads predictions is used to estimate member loads for the Viking Orbiter, a 7800-lb interplanetary spacecraft that has been designed using transient loads analysis techniques. The transient loads analysis approach leads to a lightweight structure but requires complex and costly analyses. To reduce complexity and cost, a shock spectra/impedance method is currently being used to design the Mariner Jupiter Saturn spacecraft. This method has the advantage of using low-cost in-house loads analysis techniques and typically results in more conservative structural loads. The method is evaluated by comparing the increase in Viking member loads to the loads obtained by the transient loads analysis approach. An estimate of the weight penalty incurred by using this method is presented. The paper also compares the calculated flight loads from the transient loads analyses and the shock spectra/impedance method to measured flight data.
Templeton
Algal biomass analysis methods and applications of these methods to different processes are discussed, including an internally funded research project to develop microalgal compositional analysis methods, with mass and component balances closed around pretreatment, saccharification, and fermentation unit operations.
Coding and Commonality Analysis: Non-ANOVA Methods for Analyzing Data from Experiments.
ERIC Educational Resources Information Center
Thompson, Bruce
The advantages and disadvantages of three analytic methods used to analyze experimental data in educational research are discussed. The same hypothetical data set is used with all methods for a direct comparison. The Analysis of Variance (ANOVA) method and its several analogs are collectively labeled OVA methods and are evaluated. Regression…
ERIC Educational Resources Information Center
Hale, Robert L.; Dougherty, Donna
1988-01-01
Compared the efficacy of two methods of cluster analysis, the unweighted pair-group method with arithmetic averages (UPGMA) and Ward's method, for students grouped on intelligence, achievement, and social adjustment by both clustering methods. Found UPGMA more efficacious based on output, on cophenetic correlation coefficients generated by each…
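A minimal sketch of UPGMA agglomeration (Ward's method differs only in the inter-cluster distance criterion it minimizes at each merge); the small distance matrix is invented:

```python
def upgma_merges(dist, n):
    """UPGMA agglomeration: repeatedly merge the two clusters with the
    smallest average inter-cluster distance.  dist: {(i, j): d} for i < j
    over n leaves.  Returns the sequence of merged cluster-id pairs."""
    clusters = {i: [i] for i in range(n)}
    merges = []

    def d(a, b):
        """Average pairwise distance between the members of clusters a and b."""
        ma, mb = clusters[a], clusters[b]
        return (sum(dist[min(x, y), max(x, y)] for x in ma for y in mb)
                / (len(ma) * len(mb)))

    next_id = n
    while len(clusters) > 1:
        keys = sorted(clusters)
        a, b = min(((x, y) for i, x in enumerate(keys) for y in keys[i + 1:]),
                   key=lambda p: d(*p))
        merges.append((a, b))
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        next_id += 1
    return merges

# Two tight pairs, (0,1) and (2,3), far from each other
dist = {(0, 1): 1.0, (0, 2): 8.0, (0, 3): 9.0,
        (1, 2): 8.5, (1, 3): 9.5, (2, 3): 1.2}
merges = upgma_merges(dist, 4)
```

Here both pairs are merged before the two pair-clusters are joined, which is the dendrogram structure a cophenetic correlation would then be computed against.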
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-01-01
The set of documents discusses the new draft methods (EPA method 551, EPA method 552) for the analysis of disinfection byproducts contained in drinking water. The methods use the techniques of liquid/liquid extraction and gas chromatography with electron capture detection.
Development of a probabilistic analysis methodology for structural reliability estimation
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.
1991-01-01
The novel probabilistic analysis method presented for the assessment of structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
Contents include: Attack Trees for Modeling and Analysis; 2.8 Misuse and Abuse Cases; 2.9 Formal Methods; 2.9.1 Software Cost Reduction; 2.9.2 Common... modern or efficient techniques. • Requirements analysis typically is either not performed at all (identified requirements are directly specified without any analysis or modeling) or is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements.
Steroid hormones in environmental matrices: extraction method comparison.
Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon
2017-11-09
The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids, using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical method developed in-house for the extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.
Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.
Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H
2016-06-01
Background: Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or at the data analysis phase. Aim of the Review: To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for proper application in pharmacoepidemiology. Methods/Results: Methods to control for unmeasured confounding in the design phase of a study are case-only designs (e.g., case-crossover, case-time-control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include the negative control method, the perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy; this group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion: As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for the interpretation of study results.
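As one concrete example from the list, external adjustment corrects an observed relative risk using outside information on a binary unmeasured confounder; a sketch of the standard bias-factor form of that correction, with hypothetical numbers:

```python
def externally_adjusted_rr(rr_observed, rr_conf_disease,
                           p_conf_exposed, p_conf_unexposed):
    """Divide an observed relative risk by the bias factor implied by a
    binary unmeasured confounder, given external estimates of its
    prevalence among the exposed/unexposed and its disease relative risk."""
    bias = ((rr_conf_disease * p_conf_exposed + 1.0 - p_conf_exposed)
            / (rr_conf_disease * p_conf_unexposed + 1.0 - p_conf_unexposed))
    return rr_observed / bias

# Hypothetical scenario: smoking as the unmeasured confounder, with a
# disease relative risk of 2.0 and prevalence 40% (exposed) vs 20% (unexposed)
adj = externally_adjusted_rr(1.5, 2.0, 0.40, 0.20)
```

When the confounder is equally prevalent in both groups the bias factor is 1 and the observed estimate is returned unchanged, which is a quick sanity check on the formula.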
Buckling analysis and test correlation of hat stiffened panels for hypersonic vehicles
NASA Technical Reports Server (NTRS)
Percy, Wendy C.; Fields, Roger A.
1990-01-01
The paper discusses the design, analysis, and test of hat stiffened panels subjected to a variety of thermal and mechanical load conditions. The panels were designed using data from structural optimization computer codes and finite element analysis. Test methods included the grid shadow moire method and a single gage force stiffness method. The agreement between the test data and analysis provides confidence in the methods that are currently being used to design structures for hypersonic vehicles. The agreement also indicates that post buckled strength may potentially be used to reduce the vehicle weight.
Brenn, T; Arnesen, E
1985-01-01
For comparative evaluation, discriminant analysis, logistic regression and Cox's model were used to select risk factors for total and coronary deaths among 6595 men aged 20-49 followed for 9 years. Groups with mortality between 5 and 93 per 1000 were considered. Discriminant analysis selected variable sets only marginally different from the logistic and Cox methods which always selected the same sets. A time-saving option, offered for both the logistic and Cox selection, showed no advantage compared with discriminant analysis. Analysing more than 3800 subjects, the logistic and Cox methods consumed, respectively, 80 and 10 times more computer time than discriminant analysis. When including the same set of variables in non-stepwise analyses, all methods estimated coefficients that in most cases were almost identical. In conclusion, discriminant analysis is advocated for preliminary or stepwise analysis, otherwise Cox's method should be used.
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
Conducting qualitative research in mental health: Thematic and content analyses.
Crowe, Marie; Inder, Maree; Porter, Richard
2015-07-01
The objective of this paper is to describe two methods of qualitative analysis - thematic analysis and content analysis - and to examine their use in a mental health context. A description of the processes of thematic analysis and content analysis is provided. These processes are then illustrated by conducting two analyses of the same qualitative data. Transcripts of qualitative interviews are analysed using each method to illustrate these processes. The illustration of the processes highlights the different outcomes from the same set of data. Thematic and content analyses are qualitative methods that serve different research purposes. Thematic analysis provides an interpretation of participants' meanings, while content analysis is a direct representation of participants' responses. These methods provide two ways of understanding meanings and experiences and provide important knowledge in a mental health context. © The Royal Australian and New Zealand College of Psychiatrists 2015.
Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The BLSA method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation to meet certain phase-margin requirements. The analysis of metrics-driven adaptive control is evaluated for a second-order system representing the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics and time delay. The effect of the analysis time window for BLSA is also evaluated with respect to the stability margin criteria.
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion of the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical, Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
Advantages of Social Network Analysis in Educational Research
ERIC Educational Resources Information Center
Ushakov, K. M.; Kukso, K. N.
2015-01-01
Currently, one of the main tools for large-scale studies of schools is statistical analysis. Although it is the most common method and offers the greatest opportunities for analysis, there are other quantitative methods for studying schools, such as network analysis. We discuss the potential advantages that network analysis has for educational…
Exploratory factor analysis in Rehabilitation Psychology: a content analysis.
Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N
2014-11-01
Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
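As an aside on one of the factor retention methods mentioned above, here is a hypothetical sketch of parallel analysis on synthetic two-factor data: factors are retained while their eigenvalues exceed the mean eigenvalues of random data of the same shape. All data and loadings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 6
# Two latent factors, each loading 0.9 on three observed variables.
load = np.zeros((p, 2))
load[:3, 0] = load[3:, 1] = 0.9
f = rng.normal(size=(n, 2))
X = f @ load.T + np.sqrt(1 - 0.81) * rng.normal(size=(n, p))

# Observed eigenvalues vs. mean eigenvalues of 200 random datasets.
eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
rand = np.mean(
    [np.sort(np.linalg.eigvalsh(
        np.corrcoef(rng.normal(size=(n, p)), rowvar=False)))[::-1]
     for _ in range(200)], axis=0)
retained = int(np.sum(eig > rand))   # factors to keep
print(retained)
```

Unlike the eigenvalue-greater-than-one rule, this criterion adapts to sample size, which is one reason it is recommended among the retention methods the survey counted.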
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. Moreover, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Leonard, Annemarie K; Loughran, Elizabeth A; Klymenko, Yuliya; Liu, Yueying; Kim, Oleg; Asem, Marwa; McAbee, Kevin; Ravosa, Matthew J; Stack, M Sharon
2018-01-01
This chapter highlights methods for visualization and analysis of extracellular matrix (ECM) proteins, with particular emphasis on collagen type I, the most abundant protein in mammals. Protocols described range from advanced imaging of complex in vivo matrices to simple biochemical analysis of individual ECM proteins. The first section of this chapter describes common methods to image ECM components and includes protocols for second harmonic generation, scanning electron microscopy, and several histological methods of ECM localization and degradation analysis, including immunohistochemistry, Trichrome staining, and in situ zymography. The second section of this chapter details both a common transwell invasion assay and a novel live imaging method to investigate cellular behavior with respect to collagen and other ECM proteins of interest. The final section consists of common electrophoresis-based biochemical methods that are used in analysis of ECM proteins. Use of the methods described herein will enable researchers to gain a greater understanding of the role of ECM structure and degradation in development and matrix-related diseases such as cancer and connective tissue disorders. © 2018 Elsevier Inc. All rights reserved.
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
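The "reduced representation" idea above can be sketched with a handful of assumed summary features (standing in for the study's library of thousands of methods) and nearest-neighbour retrieval in feature space; the feature choices here are illustrative, not the authors'.

```python
import numpy as np

def features(ts):
    """Reduced representation: a few summary statistics of one series."""
    ac1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]   # lag-1 autocorrelation
    return np.array([ts.std(), ac1, np.abs(np.diff(ts)).mean()])

rng = np.random.default_rng(4)
t = np.linspace(0, 8 * np.pi, 500)
series = [np.sin(t + rng.uniform(0, np.pi)) for _ in range(5)] + \
         [rng.normal(size=500) for _ in range(5)]   # 5 sines, 5 noise series
F = np.array([features(s) for s in series])

# Retrieve the nearest neighbour (excluding self) of the first sine
# in feature space: it should be another sine, not a noise series.
dist = np.linalg.norm(F - F[0], axis=1)
dist[0] = np.inf
nearest = int(np.argmin(dist))
print(nearest)
```

The same feature vectors support the clustering, retrieval, and classification tasks the abstract describes, because series with similar dynamics land near each other in feature space.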
Highly comparative time-series analysis: the empirical structure of time series and their methods
Fulcher, Ben D.; Little, Max A.; Jones, Nick S.
2013-01-01
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344
Decerns: A framework for multi-criteria decision analysis
Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; ...
2015-02-27
A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical risk-management problems is introduced. The Decerns framework contains a library of modules that are the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods and original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
Tan, Ming T; Liu, Jian-ping; Lao, Lixing
2012-08-01
Recently, the proper use of statistical methods in traditional Chinese medicine (TCM) randomized controlled trials (RCTs) has received increased attention. Statistical inference based on hypothesis testing is the foundation of clinical trials and evidence-based medicine. In this article, the authors described the methodological differences between literature published in Chinese and Western journals in the design and analysis of acupuncture RCTs and the application of basic statistical principles. In China, qualitative analysis methods have been widely used in acupuncture and TCM clinical trials, while between-group quantitative analysis methods on clinical symptom scores are commonly used in the West. The evidence for and against these analytical differences was discussed based on data from RCTs assessing acupuncture for pain relief. The authors concluded that although both methods have their unique advantages, quantitative analysis should be used as the primary analysis, while qualitative analysis can serve as a secondary criterion. The purpose of this paper is to inspire further discussion of such special issues in clinical research design and thus contribute to the increased scientific rigor of TCM research.
What Touched Your Heart? Collaborative Story Analysis Emerging From an Apsáalooke Cultural Context
Hallett, John; Held, Suzanne; McCormick, Alma Knows His Gun; Simonds, Vanessa; Bird, Sloane Real; Martin, Christine; Simpson, Colleen; Schure, Mark; Turnsplenty, Nicole; Trottier, Coleen
2017-01-01
Community-based participatory research and decolonizing research share some recommendations for best practices for conducting research. One commonality is partnering on all stages of research; co-developing methods of data analysis is one stage with a deficit of partnering examples. We present a novel community-based and developed method for analyzing qualitative data within an Indigenous health study and explain incompatibilities of existing methods for our purposes and community needs. We describe how we explored available literature, received counsel from community Elders and experts in the field, and collaboratively developed a data analysis method consonant with community values. The method of analysis, in which interview/story remained intact, team members received story, made meaning through discussion, and generated a conceptual framework to inform intervention development, is detailed. We offer the development process and method as an example for researchers working with communities who want to keep stories intact during qualitative data analysis. PMID:27659019
NASA Astrophysics Data System (ADS)
Sun, K.; Cheng, D. B.; He, J. J.; Zhao, Y. L.
2018-02-01
Collapse gully erosion is a specific type of soil erosion in the red soil region of southern China, and early warning and prevention of its occurrence are very important. Based on the idea of risk assessment, this research, taking Guangdong province as an example, adopts information acquisition analysis and logistic regression analysis to examine the feasibility of collapse gully erosion risk assessment at the regional scale and to compare the applicability of the different risk assessment methods. The results show that in Guangdong province, the risk of collapse gully erosion is high in the northeastern and western areas, and relatively low in the southwestern and central parts. The comparison of the different risk assessment methods also indicated that the risk distribution patterns from the different methods were basically consistent. However, the accuracy of the risk map from the information acquisition analysis method was slightly better than that from the logistic regression analysis method.
Analysis of beryllium and depleted uranium: An overview of detection methods in aerosols and soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camins, I.; Shinn, J.H.
We conducted a survey of commercially available methods for analysis of beryllium and depleted uranium in aerosols and soils to find a reliable, cost-effective, and sufficiently precise method for researchers involved in environmental testing at the Yuma Proving Ground, Yuma, Arizona. Criteria used for evaluation include cost, method of analysis, specificity, sensitivity, reproducibility, applicability, and commercial availability. We found that atomic absorption spectrometry with graphite furnace meets these criteria for testing samples for beryllium. We found that this method can also be used to test samples for depleted uranium. However, atomic absorption with graphite furnace is not as sensitive a measurement method for depleted uranium as it is for beryllium, so we recommend that quality control of depleted uranium analysis be maintained by testing 10 of every 1000 samples by neutron activation analysis. We also evaluated 45 companies and institutions that provide analyses of beryllium and depleted uranium. 5 refs., 1 tab.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
Experience report: Using formal methods for requirements analysis of critical spacecraft software
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.; Ampo, Yoko
1994-01-01
Formal specification and analysis of requirements continues to gain support as a method for producing more reliable software. However, the introduction of formal methods to a large software project is difficult, due in part to the unfamiliarity of the specification languages and the lack of graphics. This paper reports results of an investigation into the effectiveness of formal methods as an aid to the requirements analysis of critical, system-level fault-protection software on a spacecraft currently under development. Our experience indicates that formal specification and analysis can enhance the accuracy of the requirements and add assurance prior to design development in this domain. The work described here is part of a larger, NASA-funded research project whose purpose is to use formal-methods techniques to improve the quality of software in space applications. The demonstration project described here is part of the effort to evaluate experimentally the effectiveness of supplementing traditional engineering approaches to requirements specification with the more rigorous specification and analysis available with formal methods.
Measurement of edge residual stresses in glass by the phase-shifting method
NASA Astrophysics Data System (ADS)
Ajovalasit, A.; Petrucci, G.; Scafidi, M.
2011-05-01
Control and measurement of residual stress in glass is of great importance in the industrial field. Since glass is a birefringent material, the residual stress analysis is based mainly on the photoelastic method. This paper considers two methods of automated analysis of membrane residual stress in glass sheets, based on the phase-shifting concept in monochromatic light. In particular these methods are the automated versions of goniometric compensation methods of Tardy and Sénarmont. The proposed methods can effectively replace manual methods of compensation (goniometric compensation of Tardy and Sénarmont, Babinet and Babinet-Soleil compensators) provided by current standards on the analysis of residual stresses in glasses.
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.
METHOD OF CHEMICAL ANALYSIS FOR OIL SHALE WASTES
Several methods of chemical analysis are described for oil shale wastewaters and retort gases. These methods are designed to support the field testing of various pollution control systems. As such, emphasis has been placed on methods which are rapid and sufficiently rugged to per...
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly its vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code, the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve computation rates as high as the vectorized direct solvers, but are best suited to well-conditioned problems that require fewer iterations to converge to the solution.
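The two solver families can be sketched in a platform-neutral way (this is generic NumPy, not the Cray-2 vectorized code): a direct Cholesky-based solve, and a conjugate-gradient iteration for symmetric positive-definite systems.

```python
import numpy as np

def solve_direct(A, b):
    """Direct method: Cholesky factorization, then two triangular solves."""
    L = np.linalg.cholesky(A)
    y = np.linalg.solve(L, b)        # forward substitution (dense solve here)
    return np.linalg.solve(L.T, y)   # back substitution

def solve_cg(A, b, tol=1e-10, maxiter=1000):
    """Iterative method: conjugate gradients; cost per step is one matvec."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rr = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(6)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)        # well-conditioned SPD test matrix
b = rng.normal(size=50)
print(np.allclose(solve_direct(A, b), solve_cg(A, b)))
```

The trade-off the abstract reports falls out of the structure: the direct factorization does a fixed amount of highly vectorizable work, while the iterative method's cost scales with the iteration count, so it wins only when conditioning keeps that count low.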
An Excel‐based implementation of the spectral method of action potential alternans analysis
Pearman, Charles M.
2014-01-01
Abstract Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro‐arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T‐wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. PMID:25501439
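A minimal sketch of the spectral idea described above, assuming a k-score-style statistic at 0.5 cycles/beat (the Excel/VBA tool itself is not reproduced; series and thresholds are illustrative): classical 2:1 alternans concentrates power in the highest frequency bin of the beat-to-beat series.

```python
import numpy as np

def alternans_k(apd):
    """Power at 0.5 cycles/beat compared with a neighbouring noise band."""
    x = np.asarray(apd, float) - np.mean(apd)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)  # beat-frequency spectrum
    alt = spec[-1]                     # bin at 0.5 cycles/beat (even N)
    noise = spec[len(spec) // 2:-1]    # noise band just below 0.5
    return (alt - noise.mean()) / noise.std()

rng = np.random.default_rng(7)
beats = 128
# Synthetic APD series: 200 ms baseline, +/-5 ms alternation, 1 ms noise.
apd = 200 + 5 * (-1.0) ** np.arange(beats) + rng.normal(0, 1, beats)
k = alternans_k(apd)
print(k > 10)
```

Higher-order periodicities (e.g., 3:1 or 4:1 behaviour) would appear at 1/3 or 1/4 cycles/beat in the same spectrum, which is the extension the tool implements.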
Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan
2017-12-27
Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, e.g., by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
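One of the suggested checks, running the same analysis on simulated null data to confirm chance-level results, might look like the following sketch (a generic scikit-learn decoding stand-in, not the authors' pipeline; dimensions are invented).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(9)
X = rng.normal(size=(200, 50))       # simulated null data: no signal at all
y = np.repeat([0, 1], 100)           # two balanced "conditions"

# Same cross-validated decoding analysis as would be run on real data.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv).mean()
print(round(float(acc), 2))
```

If this accuracy came out systematically above (or, as in the counterbalancing pitfall, below) chance, the analysis pipeline itself would be introducing structure, which is exactly what the Same Analysis Approach is designed to expose before real data are interpreted.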
Method for combined biometric and chemical analysis of human fingerprints.
Staymates, Jessica L; Orandi, Shahram; Staymates, Matthew E; Gillen, Greg
This paper describes a method for combining direct chemical analysis of latent fingerprints with subsequent biometric analysis within a single sample. The method described here uses ion mobility spectrometry (IMS) as a chemical detection method for explosives and narcotics trace contamination. A collection swab coated with a high-temperature adhesive has been developed to lift latent fingerprints from various surfaces. The swab is then directly inserted into an IMS instrument for a quick chemical analysis. After the IMS analysis, the lifted print remains intact for subsequent biometric scanning and analysis using matching algorithms. Several samples of explosive-laden fingerprints were successfully lifted and the explosives detected with IMS. Following explosive detection, the lifted fingerprints remained of sufficient quality for positive match scores using a prepared gallery consisting of 60 fingerprints. Based on our results (n = 1200), there was no significant decrease in the quality of the lifted print post IMS analysis. In fact, for a small subset of lifted prints, the quality was improved after IMS analysis. The described method can be readily applied to domestic criminal investigations, transportation security, terrorist and bombing threats, and military in-theatre settings.
A Review on the Nonlinear Dynamical System Analysis of Electrocardiogram Signal
Mohapatra, Biswajit
2018-01-01
Electrocardiogram (ECG) signal analysis has received special attention of the researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the autonomic nervous system activity in a noninvasive manner. Analysis of the ECG signals has been explored using both linear and nonlinear methods. However, the nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis. PMID:29854361
Nonlinear analysis of structures. [within framework of finite element method
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.
1974-01-01
The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is on nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity, is included, and applications are presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) a membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) a general three-dimensional analysis, and (5) analysis of laminated composites. Applications of the methods are made to a number of sample structures. Correlation with available analytic or experimental data ranges from good to excellent.
Paul C. Van Deusen; Linda S. Heath
2010-01-01
Weighted estimation methods for analysis of mapped plot forest inventory data are discussed. The appropriate weighting scheme can vary depending on the type of analysis and graphical display. Both statistical issues and user expectations need to be considered in these methods. A weighting scheme is proposed that balances statistical considerations and the logical...
Decision Making Methods in Space Economics and Systems Engineering
NASA Technical Reports Server (NTRS)
Shishko, Robert
2006-01-01
This viewgraph presentation reviews various methods of decision making and the impact that they have on space economics and systems engineering. Some of the methods discussed are: Present Value and Internal Rate of Return (IRR); Cost-Benefit Analysis; Real Options; Cost-Effectiveness Analysis; Cost-Utility Analysis; Multi-Attribute Utility Theory (MAUT); and Analytic Hierarchy Process (AHP).
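Several of the listed techniques rest on discounted cash flow arithmetic: present value discounts each period's cash flow by the chosen rate, and the internal rate of return is the rate at which the net present value is zero. An illustrative sketch, not taken from the presentation (the cash flow figures and the bisection root-finder are invented for the example):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at period t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))


def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal rate of return via bisection: the rate where NPV = 0.

    Assumes a conventional stream (one sign change), so NPV is
    decreasing in the rate over the bracket.
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, an outlay of 100 returning 60 in each of two periods has an IRR of roughly 13 percent.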
USDA-ARS?s Scientific Manuscript database
A method is demonstrated for analysis of vitamin D-fortified dietary supplements that eliminates virtually all chemical pretreatment prior to analysis, and is referred to as a ‘dilute and shoot’ method. Three mass spectrometers, in parallel, plus a UV detector, an evaporative light scattering detec...
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
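Of the retention rules compared, the Kaiser rule and Horn's parallel analysis are easy to state algorithmically: the Kaiser rule retains factors whose correlation-matrix eigenvalues exceed 1, while parallel analysis retains observed eigenvalues that exceed the corresponding mean eigenvalues of random data of the same shape. A hedged sketch of parallel analysis (the simulation count and seed are arbitrary choices, not parameters from the study):

```python
import numpy as np


def parallel_analysis(data, n_sim=200, seed=0):
    """Horn's parallel analysis on an (n_observations, n_variables) array.

    Returns the number of factors whose observed correlation-matrix
    eigenvalues exceed the mean eigenvalues of simulated random
    normal data of the same shape.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sim):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sim
    return int(np.sum(obs > rand))
```

The Kaiser rule is the special case of comparing the observed eigenvalues against a flat threshold of 1 instead of the simulated baseline.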
Benefit Analysis of Proposed Information Systems
1991-03-01
... be evaluated in such an analysis. Different benefit comparison and user satisfaction methods are reviewed for their particular advantages and disadvantages. A discussion is given on ... determine the alternative that is the most advantageous to the government. Secondly, which benefit analysis methods are capable of calculating a ...