Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
Piecewise polynomial representations of genomic tracks.
Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz
2012-01-01
Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
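As a concrete, hedged illustration of the storage idea only, not of the authors' locsmoc software, the sketch below compresses a per-base coverage vector into (start, end, value) pieces and evaluates the resulting piecewise-constant curve at arbitrary coordinates; all names and numbers are illustrative.

```python
import numpy as np

def to_piecewise_constant(coverage):
    """Compress a per-base coverage vector into (start, end, value) pieces
    by merging runs of equal values (half-open, 0-based coordinates)."""
    pieces = []
    start = 0
    for i in range(1, len(coverage) + 1):
        if i == len(coverage) or coverage[i] != coverage[start]:
            pieces.append((start, i, coverage[start]))
            start = i
    return pieces

def evaluate(pieces, positions):
    """Look up the piecewise-constant value at each genomic position."""
    starts = np.array([p[0] for p in pieces])
    values = np.array([p[2] for p in pieces], dtype=float)
    idx = np.searchsorted(starts, positions, side="right") - 1
    return values[idx]

coverage = [0, 0, 3, 3, 3, 5, 5, 0, 0, 2]
pieces = to_piecewise_constant(coverage)
print(pieces)                          # [(0, 2, 0), (2, 5, 3), (5, 7, 5), (7, 9, 0), (9, 10, 2)]
print(evaluate(pieces, [1, 4, 6, 9]))  # [0. 3. 5. 2.]
```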
NASA Astrophysics Data System (ADS)
Beretta, Elena; Micheletti, Stefano; Perotto, Simona; Santacesaria, Matteo
2018-01-01
In this paper, we develop a shape optimization-based algorithm for the electrical impedance tomography (EIT) problem of determining a piecewise constant conductivity on a polygonal partition from boundary measurements. The key tool is to use a distributed shape derivative of a suitable cost functional with respect to movements of the partition. Numerical simulations showing the robustness and accuracy of the method are presented for simulated test cases in two dimensions.
NASA Astrophysics Data System (ADS)
Zhang, Zhengfang; Chen, Weifeng
2018-05-01
Maximization of the smallest eigenfrequency of the linearized elasticity system with area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially-observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
2008-06-01
Geometry Interpolation: the function space V_p^H consists of discontinuous piecewise polynomials. This work used a polynomial basis for V_p^H such ... between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and a multi-dimensional setting. Before continuing with the ... inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number of M∞ = 0.95.
Interface with weakly singular points always scatter
NASA Astrophysics Data System (ADS)
Li, Long; Hu, Guanghui; Yang, Jiansheng
2018-07-01
Assume that a bounded scatterer is embedded into an infinite homogeneous isotropic background medium in two dimensions. The refractive index function is supposed to be piecewise constant. If the scattering interface contains a weakly singular point, we prove that the scattered field cannot vanish identically. This implies the absence of non-scattering energies for piecewise analytic interfaces with one singular point. Local uniqueness is obtained for shape identification problems in inverse medium scattering with a single far-field pattern.
Supplemental Analysis on Compressed Sensing Based Interior Tomography
Yu, Hengyong; Yang, Jiansheng; Jiang, Ming; Wang, Ge
2010-01-01
Recently, in the compressed sensing framework we proved that an interior ROI can be exactly reconstructed via the total variation minimization if the ROI is piecewise constant. In the proofs, we implicitly utilized the property that if an artifact image assumes a constant value within the ROI then this constant must be zero. Here we prove this property in the space of square integrable functions. PMID:19717891
NASA Astrophysics Data System (ADS)
Orozco Cortés, Luis Fernando; Fernández García, Nicolás
2014-05-01
A method to obtain the general solution of any piecewise constant potential is presented; this is achieved by analyzing the transfer matrices at each cutoff. The resonance phenomenon, together with the supersymmetric quantum mechanics technique, allows us to construct a wide family of complex potentials which can be used as theoretical models for optical systems. The method is applied to the particular case in which the potential function has six cutoff points.
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
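To make the piecewise-convexity statement concrete, the hedged sketch below (not the paper's code; the data are illustrative) considers a single rectifier neuron on 1-D inputs: once the set of samples with positive pre-activation is fixed, the squared-error loss is a convex quadratic in (w, b), so candidate local optima can be enumerated by solving one convex subproblem per admissible active set.

```python
import numpy as np

# Toy 1-D dataset: target of a single rectifier neuron f(x) = max(0, w*x + b).
x = np.array([-2.0, -1.0, 0.5, 1.0, 2.0])
y = np.array([ 0.0,  0.0, 0.2, 1.0, 3.0])

def loss(w, b):
    return np.sum((np.maximum(0.0, w * x + b) - y) ** 2)

# For w > 0 the active set {i : w*x_i + b > 0} is always an upper tail of the
# sorted inputs, so the (w, b) plane splits into finitely many regions. On each
# region the loss is a convex quadratic: the fit error over the active tail
# plus the constant sum of y_i^2 over the inactive samples.
order = np.argsort(x)
xs, ys = x[order], y[order]

candidates = []
for k in range(len(xs) + 1):          # tail xs[k:] is assumed active
    xa, ya = xs[k:], ys[k:]
    if len(xa) == 0:
        continue
    A = np.column_stack([xa, np.ones_like(xa)])
    (w, b), *_ = np.linalg.lstsq(A, ya, rcond=None)   # convex subproblem
    active = w * xs + b > 0
    consistent = np.array_equal(np.flatnonzero(active), np.arange(k, len(xs)))
    if consistent and w > 0:
        candidates.append((float(w), float(b), float(loss(w, b))))

for w, b, L in candidates:
    print(f"candidate local optimum: w={w:+.3f}  b={b:+.3f}  loss={L:.4f}")
```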
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors to mass-event venues passing through checkpoints. The mathematical model is based on a non-stationary queuing system (NQS) in which the time dependence of the request arrival rate is described by a given function. This function was chosen so that its properties resemble the real arrival rates of visitors coming to a stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated how the queue length and the waiting time before service (time in queue) depend on time for different input-rate laws. The time required to serve the entire queue and the number of visitors entering the stadium at the beginning of the match were also calculated. We also found how the macroscopic quantitative characteristics of the NQS depend on the number of averaging sections of the input rate.
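A minimal sketch of this kind of model, under purely illustrative assumptions (single checkpoint, exponential service, made-up piecewise-constant arrival rate), is given below: arrivals are generated by thinning and waiting times follow from the Lindley recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant arrival rate (visitors/min) on [0, 120] min before kickoff:
# a slow start, a rush, then a tail-off.  Values are purely illustrative.
breaks = np.array([0.0, 30.0, 90.0, 120.0])
rates  = np.array([0.5, 4.0, 1.5])          # rate on [breaks[i], breaks[i+1])

def rate(t):
    return rates[np.searchsorted(breaks, t, side="right") - 1]

# Thinning: simulate a homogeneous Poisson process at the maximum rate and
# accept each candidate arrival with probability rate(t)/rate_max.
T, rate_max = breaks[-1], rates.max()
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1.0 / rate_max)
    if t >= T:
        break
    if rng.random() < rate(t) / rate_max:
        arrivals.append(t)
arrivals = np.array(arrivals)

# One checkpoint (single server), exponential service, FIFO: Lindley recursion
# for the waiting time of each visitor.
service = rng.exponential(0.4, size=len(arrivals))   # mean 0.4 min per check
wait = np.zeros(len(arrivals))
for k in range(1, len(arrivals)):
    wait[k] = max(0.0, wait[k-1] + service[k-1] - (arrivals[k] - arrivals[k-1]))

print(f"{len(arrivals)} arrivals, mean wait {wait.mean():.2f} min, "
      f"max wait {wait.max():.2f} min")
```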
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
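A hedged sketch of the core representation (not the authors' implementation): a one-dimensional piecewise-constant value function stored as breakpoints and values, together with the pointwise maximum of two such functions, which is the operation that keeps Bellman backups in closed form in the piecewise-constant case.

```python
import bisect

class PWC:
    """Piecewise-constant function on [breaks[0], inf): values[i] holds on
    [breaks[i], breaks[i+1])."""
    def __init__(self, breaks, values):
        assert len(breaks) == len(values)
        self.breaks, self.values = list(breaks), list(values)

    def __call__(self, x):
        return self.values[bisect.bisect_right(self.breaks, x) - 1]

def pointwise_max(f, g):
    """Pointwise maximum of two PWC functions: the result is again PWC on the
    union of the two breakpoint sets (this is how maximizing over actions
    keeps the value function in closed form)."""
    breaks = sorted(set(f.breaks) | set(g.breaks))
    values = [max(f(b), g(b)) for b in breaks]
    # merge adjacent pieces that ended up with the same value
    merged_b, merged_v = [breaks[0]], [values[0]]
    for b, v in zip(breaks[1:], values[1:]):
        if v != merged_v[-1]:
            merged_b.append(b)
            merged_v.append(v)
    return PWC(merged_b, merged_v)

# Q-functions of two actions over a continuous "time remaining" dimension.
q_a = PWC([0.0, 2.0, 5.0], [1.0, 3.0, 0.5])
q_b = PWC([0.0, 4.0],      [2.0, 2.5])
v = pointwise_max(q_a, q_b)
print(list(zip(v.breaks, v.values)))   # [(0.0, 2.0), (2.0, 3.0), (5.0, 2.5)]
```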
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
NASA Astrophysics Data System (ADS)
Adrian, S. B.; Andriulli, F. P.; Eibert, T. F.
2017-02-01
A new hierarchical basis preconditioner for the electric field integral equation (EFIE) operator is introduced. In contrast to existing hierarchical basis preconditioners, it works on arbitrary meshes and preconditions both the vector and the scalar potential within the EFIE operator. This is obtained by taking into account that the vector and the scalar potential discretized with loop-star basis functions are related to the hypersingular and the single layer operator (i.e., the well known integral operators from acoustics). For the single layer operator discretized with piecewise constant functions, a hierarchical preconditioner can easily be constructed. Thus the strategy we propose in this work for preconditioning the EFIE is the transformation of the scalar and the vector potential into operators equivalent to the single layer operator and to its inverse. More specifically, when the scalar potential is discretized with star functions as source and testing functions, the resulting matrix is a single layer operator discretized with piecewise constant functions and multiplied left and right with two additional graph Laplacian matrices. By inverting these graph Laplacian matrices, the discretized single layer operator is obtained, which can be preconditioned with the hierarchical basis. Dually, when the vector potential is discretized with loop functions, the resulting matrix can be interpreted as a hypersingular operator discretized with piecewise linear functions. By leveraging on a scalar Calderón identity, we can interpret this operator as spectrally equivalent to the inverse single layer operator. Then we use a linear-in-complexity, closed-form inverse of the dual hierarchical basis to precondition the hypersingular operator. The numerical results show the effectiveness of the proposed preconditioner and the practical impact of theoretical developments in real case scenarios.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2015-01-01
Variable-Domain Displacement Transfer Functions were formulated for shape predictions of complex wing structures, for which surface strain-sensing stations must be properly distributed to avoid jointed junctures, and must be increased in the high strain gradient region. Each embedded beam (depth-wise cross section of structure along a surface strain-sensing line) was discretized into small variable domains. Thus, the surface strain distribution can be described with a piecewise linear or a piecewise nonlinear function. Through discretization, the embedded beam curvature equation can be piecewise integrated to obtain the Variable-Domain Displacement Transfer Functions (for each embedded beam), which are expressed in terms of geometrical parameters of the embedded beam and the surface strains along the strain-sensing line. By inputting the surface strain data into the Displacement Transfer Functions, slopes and deflections along each embedded beam can be calculated for mapping out overall structural deformed shapes. A long tapered cantilever tubular beam was chosen for shape prediction analysis. The input surface strains were analytically generated from finite-element analysis. The shape prediction accuracies of the Variable-Domain Displacement Transfer Functions were then determined in light of the finite-element generated slopes and deflections, and were found to be comparable to the accuracies of the constant-domain Displacement Transfer Functions.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients of an L_1 function f(x), based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 - x^2)^(μ - 1/2) for any constant μ ≥ 0, we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-28
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
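The central sampling step can be illustrated as follows (a hedged sketch, with illustrative knot values rather than output of the actual coupled V(t), H(t) integration): given piecewise-linear interpolation knots of the cumulative hazard H(t) = -log P(T > t), the dwell time is obtained by drawing an exponential threshold and inverting H by linear interpolation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_dwell_time(t_knots, H_knots, rng):
    """Draw T with survival P(T > t) = exp(-H(t)), where H is the piecewise-
    linear interpolant of (t_knots, H_knots), increasing, with H_knots[0] = 0."""
    target = -np.log(rng.random())            # exponential(1) threshold
    if target >= H_knots[-1]:                 # ran past the last knot:
        slope = (H_knots[-1] - H_knots[-2]) / (t_knots[-1] - t_knots[-2])
        return t_knots[-1] + (target - H_knots[-1]) / slope   # extrapolate
    j = np.searchsorted(H_knots, target)      # first knot with H >= target
    slope = (H_knots[j] - H_knots[j-1]) / (t_knots[j] - t_knots[j-1])
    return t_knots[j-1] + (target - H_knots[j-1]) / slope

# Knots of H(t) as they might come out of an ODE integration; illustrative only.
t_knots = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
H_knots = np.array([0.0, 0.2, 0.7, 1.8, 4.0])

samples = np.array([sample_dwell_time(t_knots, H_knots, rng) for _ in range(50000)])
# Sanity check: P(T > 1.0) should be close to exp(-H(1.0)) = exp(-0.7).
print(np.mean(samples > 1.0), np.exp(-0.7))
```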
Controllability of semi-infinite rod heating by a point source
NASA Astrophysics Data System (ADS)
Khurshudyan, A.
2018-04-01
The possibility of controlling the heating of a semi-infinite thin rod by a point source concentrated at an inner point of the rod is studied. Quadratic and piecewise constant solutions of the problem are derived, and the possibilities of solving the corresponding optimal control problems are indicated. Determining the parameters of the piecewise constant solution is reduced to a nonlinear programming problem. Numerical examples are considered.
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a common approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a single high-order piece or a 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that, in the least-squares solution, each entry of the intermediate matrix can be written as a sum of basic terms that can be pre-calculated into look-up tables. Since solving the system then reduces to looking up values in tables, the computation time barely differs regardless of the number of pivot points searched. Hence, we can carry out the most thorough pivot point search and find the optimal pivot that minimizes the MSE in near constant time. Experiments show that the proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search for 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
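The look-up-table idea can be sketched as follows (a simplified, hedged version without the continuity constraint at the pivot, and not the authors' implementation): the normal equations of each quadratic piece involve only per-codeword sums of powers and codeword-HDR products, so prefix-summed moment tables allow the two 3x3 systems for any candidate pivot to be assembled in O(1).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 8-bit LDR codewords s and HDR values h (toy relation + noise).
n = 200_000
s = rng.integers(0, 256, size=n)
h = 0.01 * s**2 + 0.5 * s + 20 + rng.normal(0, 5, size=n)

# Per-codeword aggregates, then prefix sums over the 256 codeword bins.
# Codewords are rescaled to [0, 1] only to keep the 3x3 systems well conditioned.
levels = np.arange(256) / 255.0
cnt  = np.bincount(s, minlength=256).astype(float)
hsum = np.bincount(s, weights=h, minlength=256)
hsq  = np.bincount(s, weights=h * h, minlength=256)
Sk = np.cumsum(cnt * levels ** np.arange(5)[:, None], axis=1)   # sum s^k, k=0..4
Tk = np.cumsum(hsum * levels ** np.arange(3)[:, None], axis=1)  # sum s^k*h, k=0..2
H2 = np.cumsum(hsq)                                             # sum h^2

def piece_sse(S, T, h2):
    """SSE of the least-squares quadratic fit h ~ c0 + c1*s + c2*s^2 on one
    piece, given only its moment sums S[k] = sum(s^k) and T[k] = sum(s^k*h)."""
    if S[0] < 3:                      # too few pixels: treat as "predict 0"
        return h2
    A = np.array([[S[0], S[1], S[2]],
                  [S[1], S[2], S[3]],
                  [S[2], S[3], S[4]]])
    c = np.linalg.solve(A + 1e-9 * np.eye(3), T)
    return h2 - 2.0 * c @ T + c @ A @ c

best = None
for p in range(1, 255):               # pivot: codewords <= p form the lower piece
    sse = (piece_sse(Sk[:, p], Tk[:, p], H2[p]) +
           piece_sse(Sk[:, -1] - Sk[:, p], Tk[:, -1] - Tk[:, p], H2[-1] - H2[p]))
    if best is None or sse < best[1]:
        best = (p, sse)
print("optimal pivot codeword:", best[0], " total SSE:", round(best[1], 1))
```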
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
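For context, a classic constant-time, constant-memory scheme in the same spirit (a swing-filter-style sketch, not the algorithm proposed in the paper) keeps only the segment anchor and a cone of admissible slopes, guaranteeing a maximum error per segment:

```python
def pla_stream(samples, eps):
    """Online piecewise linear approximation with max error <= eps per segment.
    O(1) time and O(1) memory per sample: keep only the segment anchor and the
    cone [lo, hi] of slopes consistent with all points seen in the segment."""
    segments = []
    anchor = None
    for t, y in samples:
        if anchor is None:
            anchor, last = (t, y), (t, y)
            lo, hi = float("-inf"), float("inf")
            continue
        t0, y0 = anchor
        lo_new = max(lo, (y - eps - y0) / (t - t0))
        hi_new = min(hi, (y + eps - y0) / (t - t0))
        if lo_new <= hi_new:                   # current segment still fits
            lo, hi = lo_new, hi_new
            last = (t, y)
        else:                                  # close segment, start a new one
            slope = 0.5 * (lo + hi) if hi < float("inf") else 0.0
            segments.append((t0, y0, last[0], slope))
            anchor = last
            t0, y0 = anchor
            lo = (y - eps - y0) / (t - t0)
            hi = (y + eps - y0) / (t - t0)
            last = (t, y)
    if anchor is not None and last != anchor:
        slope = 0.5 * (lo + hi) if hi < float("inf") else 0.0
        segments.append((anchor[0], anchor[1], last[0], slope))
    return segments

# Example: a noisy ramp followed by a plateau.
import random
random.seed(0)
data = [(t, (0.5 * t if t < 50 else 25.0) + random.uniform(-0.2, 0.2))
        for t in range(100)]
segs = pla_stream(data, eps=0.5)
print(len(segs), "segments:", [(round(a, 1), round(c, 1)) for a, _, c, _ in segs])
```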
Weak-noise limit of a piecewise-smooth stochastic differential equation.
Chen, Yaming; Baule, Adrian; Touchette, Hugo; Just, Wolfram
2013-11-01
We investigate the validity and accuracy of weak-noise (saddle-point or instanton) approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example a piecewise-constant SDE, which serves as a simple model of Brownian motion with solid friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided the singularity of the path integral associated with the nonsmooth SDE is treated with some heuristics. We also show that, as in the case of smooth SDEs, the deterministic paths of the noiseless system correctly describe the behavior of the nonsmooth SDE in the low-noise limit. Finally, we consider a smooth regularization of the piecewise-constant SDE and study to what extent this regularization can rectify some of the problems encountered when dealing with discontinuous drifts and singularities in SDEs.
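The model can also be simulated directly; the hedged sketch below uses Euler-Maruyama for dX = -mu*sign(X) dt + sqrt(2D) dW (Brownian motion with solid friction) and compares the empirical histogram with the standard stationary density p(x) = mu/(2D) exp(-mu|x|/D). This is only an illustration of the piecewise-constant-drift model, not the paper's weak-noise path-integral computation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Brownian motion with solid (dry) friction: dX = -mu*sign(X) dt + sqrt(2D) dW.
mu, D = 1.0, 0.25
dt, n_steps, n_paths = 1e-3, 5000, 2000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -mu * np.sign(x) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_paths)

# Stationary density of this piecewise-constant-drift SDE:
# p(x) = (mu / (2 D)) * exp(-mu*|x|/D); compare with the empirical histogram.
edges = np.linspace(-2, 2, 41)
hist, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p_exact = mu / (2 * D) * np.exp(-mu * np.abs(centers) / D)
print("max abs deviation from exact stationary density:",
      np.max(np.abs(hist - p_exact)).round(3))
```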
Unified halo-independent formalism from convex hulls for direct dark matter searches
NASA Astrophysics Data System (ADS)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2017-12-01
Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u⃗), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements and the maximum number of delta functions is (𝒩 - 1), where 𝒩 is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate and the maximum number of terms is 𝒩. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃_0^BF(v_min) (which is an integral of the speed distribution) with at most (𝒩 - 1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u⃗), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃^BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most 𝒩 downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃^BF(v_min, t) function (and a time-averaged η̃_0^BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina
2017-12-01
We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ ℝ^n when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for an average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with a comparison with the corresponding constant and piecewise constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
Filter-based multiscale entropy analysis of complex physiological time series.
Xu, Yuesheng; Zhao, Liang
2013-08-01
Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
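The reinterpretation refers to the standard MSE pipeline, which the hedged sketch below reproduces (not the FME/PLFME code itself): scale-tau coarse-graining is a piecewise-constant (boxcar-and-decimate) filter, followed by the sample entropy of the filtered series; parameter conventions vary.

```python
import numpy as np

def coarse_grain(x, tau):
    """Scale-tau coarse-graining used in MSE: average non-overlapping blocks,
    i.e. apply a piecewise-constant (boxcar) filter and decimate."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.15):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m and A
    pairs of length m+1 matching within tolerance r (Chebyshev distance).
    Here r is taken relative to the SD of the analyzed series; conventions vary."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.count_nonzero(d <= r)
        return c
    B, A = count_matches(m), count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

rng = np.random.default_rng(4)
white = rng.normal(size=2000)
walk = np.cumsum(rng.normal(size=2000))        # more "structured" series

for tau in (1, 2, 4, 8):
    print(tau,
          round(sample_entropy(coarse_grain(white, tau)), 3),
          round(sample_entropy(coarse_grain(walk, tau)), 3))
```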
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
Wu, Ailong; Liu, Ling; Huang, Tingwen; Zeng, Zhigang
2017-01-01
Neurodynamic systems are an emerging research field. To understand the essential motivational representations of neural activity, neurodynamics is an important question in cognitive system research. This paper investigates the Mittag-Leffler stability of a class of fractional-order neural networks in the presence of generalized piecewise constant arguments. To identify neural types of computational principles in mathematical and computational analysis, the existence and uniqueness of the solution of the neurodynamic system is the first prerequisite. We prove that existence and uniqueness of the solution of the network hold when some conditions are satisfied. In addition, a self-active neurodynamic system demands stable internal dynamical states (equilibria). The main emphasis is then on several sufficient conditions that guarantee a unique equilibrium point. Furthermore, to provide deeper explanations of the neurodynamic process, Mittag-Leffler stability is studied in detail. The established results are based on the theories of fractional differential equations and of differential equations with generalized piecewise constant arguments. The derived criteria improve and extend the existing related results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hadida, Jonathan; Desrosiers, Christian; Duong, Luc
2011-03-01
The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image guided surgery. Even though very robust and precise methods have been developed to help achieve a reliable segmentation (level sets, active contours, etc.), it remains very time-consuming both in terms of manual interactions and in terms of computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, hierarchical clustering is performed to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up structural knowledge about the image. Several interactive features for segmentation are presented, for instance association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
Mixed Legendre moments and discrete scattering cross sections for anisotropy representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calloo, A.; Vidal, J. F.; Le Tellier, R.
2012-07-01
This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections which are expanded on Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence, the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is developed using piecewise-constant functions, which better models the multigroup transfer cross section and prevents the occurrence of any negative value for it. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method to compare the conventional Legendre expansion to, and to determine its pertinence when applied to reactor physics calculations. (authors)
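The negativity problem is easy to reproduce on a toy kernel (a hedged illustration using a Henyey-Greenstein-like shape, not evaluated nuclear data): low-order Legendre truncations of a forward-peaked transfer cross section dip below zero, while a piecewise-constant (bin-averaged) representation stays nonnegative by construction.

```python
import numpy as np
from numpy.polynomial import legendre as L

g = 0.7
def sigma(mu):
    """Forward-peaked scattering kernel in mu = cos(theta) (illustrative
    Henyey-Greenstein-like shape)."""
    return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

mu = np.linspace(-1.0, 1.0, 2001)
nodes, weights = L.leggauss(200)

def legendre_truncated(order):
    """Truncated Legendre series with coefficients
    c_l = (2l+1)/2 * int_{-1}^{1} sigma(mu) P_l(mu) dmu (Gauss-Legendre)."""
    coeffs = np.array([(2 * l + 1) / 2.0 *
                       np.sum(weights * sigma(nodes) *
                              L.legval(nodes, np.eye(order + 1)[l]))
                       for l in range(order + 1)])
    return L.legval(mu, coeffs)

def piecewise_constant(nbins):
    """Bin-averaged (piecewise constant in mu) representation, approximated by
    sampling each bin: nonnegative whenever sigma is nonnegative."""
    edges = np.linspace(-1.0, 1.0, nbins + 1)
    idx = np.clip(np.searchsorted(edges, mu, side="right") - 1, 0, nbins - 1)
    averages = np.array([sigma(np.linspace(a, b, 200)).mean()
                         for a, b in zip(edges[:-1], edges[1:])])
    return averages[idx]

for order in (3, 5, 7):
    approx = legendre_truncated(order)
    print(f"P{order} expansion: min value {approx.min():+.4f} "
          f"({'negative!' if approx.min() < 0 else 'nonnegative'})")
print("piecewise constant (8 bins): min value "
      f"{piecewise_constant(8).min():+.4f}")
```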
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
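A small 1-D illustration of bounded-variation (total-variation) regularization recovering a blocky signal is sketched below; it uses a projected-gradient iteration on the dual problem rather than the primal-dual algorithms analyzed in the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def tv_denoise_1d(f, lam, n_iter=5000, tau=0.25):
    """Solve min_u 0.5*||u - f||^2 + lam * sum |u_{i+1} - u_i| by projected
    gradient on the dual: min_{|p|<=lam} 0.5*||D^T p - f||^2, with u = f - D^T p."""
    n = len(f)
    p = np.zeros(n - 1)
    def Dt(p):          # D^T p, where (D u)_i = u[i+1] - u[i]
        out = np.empty(n)
        out[0], out[-1] = -p[0], p[-1]
        out[1:-1] = p[:-1] - p[1:]
        return out
    for _ in range(n_iter):
        grad = np.diff(Dt(p) - f)      # gradient D (D^T p - f)
        p = np.clip(p - tau * grad, -lam, lam)
    return f - Dt(p)

# Blocky ground truth corrupted by strong noise.
truth = np.concatenate([np.full(100, 0.0), np.full(80, 2.0),
                        np.full(60, -1.0), np.full(110, 1.0)])
noisy = truth + rng.normal(0, 0.6, size=truth.size)
denoised = tv_denoise_1d(noisy, lam=5.0)

print("RMSE noisy   :", np.sqrt(np.mean((noisy - truth) ** 2)).round(3))
print("RMSE denoised:", np.sqrt(np.mean((denoised - truth) ** 2)).round(3))
```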
Fitting the constitution type Ia supernova data with the redshift-binned parametrization method
NASA Astrophysics Data System (ADS)
Huang, Qing-Guo; Li, Miao; Li, Xiao-Dong; Wang, Shuang
2009-10-01
In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ, where the equation of state (EOS) w and the energy density ρΛ of DE rapidly decrease as the redshift z increases. Inspired by this clue, we separate the redshifts into different bins, and discuss the models of a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ will also rapidly decrease as z increases, which is consistent with the result of the CPL model. Moreover, a step function model in which ρΛ rapidly decreases at redshift z ≈ 0.331 presents a significant improvement (Δχ² = -4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of the DE density of this model, and find that this model deviates from the cosmological constant Λ at the 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or more interestingly from the nature of DE itself. In addition, for models with the same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
Time-temperature effect in adhesively bonded joints
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1981-01-01
The viscoelastic analysis of an adhesively bonded lap joint was reconsidered. The adherends are approximated essentially as Reissner plates and the adhesive is linearly viscoelastic. Hereditary integrals are used to model the adhesive. A system of linear integro-differential equations for the shear and the tensile stress in the adhesive is used. The equations have constant coefficients and are solved by using Laplace transforms. It is shown that if the temperature variation in time can be approximated by a piecewise constant function, then the method of Laplace transforms can be used to solve the problem. A numerical example is given for a single lap joint under various loading conditions.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
In the formulations of earlier Displacement Transfer Functions for structure shape predictions, the surface strain distributions, along a strain-sensing line, were represented with piecewise linear functions. To improve the shape-prediction accuracies, Improved Displacement Transfer Functions were formulated using piecewise nonlinear strain representations. Through discretization of an embedded beam (depth-wise cross section of a structure along a strain-sensing line) into multiple small domains, piecewise nonlinear functions were used to describe the surface strain distributions along the discretized embedded beam. Such a piecewise approach enabled the piecewise integrations of the embedded beam curvature equations to yield slope and deflection equations in recursive forms. The resulting Improved Displacement Transfer Functions, written in summation forms, were expressed in terms of beam geometrical parameters and surface strains along the strain-sensing line. By feeding the surface strains into the Improved Displacement Transfer Functions, structural deflections could be calculated at multiple points for mapping out the overall structural deformed shapes for visual display. The shape-prediction accuracies of the Improved Displacement Transfer Functions were then examined in light of finite-element-calculated deflections using different tapered cantilever tubular beams. It was found that by using the piecewise nonlinear strain representations, the shape-prediction accuracies could be greatly improved, especially for highly-tapered cantilever tubular beams.
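The underlying relation that the transfer functions discretize can be checked on the simplest case (a hedged sketch, not the NASA formulation itself): for a beam of half-depth c the surface strain is eps(x) = c * w''(x), so integrating eps/c twice with clamped-end conditions recovers the deflection, here compared against the analytic tip-loaded uniform cantilever.

```python
import numpy as np

# Uniform cantilever of length L, half-depth c, tip load P: the exact surface
# strain is eps(x) = P*(L - x)*c/(E*I), and the exact deflection is
# w(x) = P*x^2*(3L - x)/(6*E*I).  We recover w from eps alone by integrating
# the curvature kappa(x) = eps(x)/c twice (clamped end: w(0) = w'(0) = 0).
L_beam, c, E, I, P = 2.0, 0.05, 70e9, 4e-6, 1000.0

x = np.linspace(0.0, L_beam, 41)                 # strain-sensing stations
eps = P * (L_beam - x) * c / (E * I)             # "measured" surface strains

kappa = eps / c                                  # curvature from strain
def cumtrapz(y, x):
    """Cumulative trapezoidal integral with value 0 at x[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

slope = cumtrapz(kappa, x)                       # w'(x), with w'(0) = 0
w = cumtrapz(slope, x)                           # w(x),  with w(0)  = 0

w_exact = P * x**2 * (3 * L_beam - x) / (6 * E * I)
print("tip deflection: integrated %.6f m, exact %.6f m" % (w[-1], w_exact[-1]))
```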
Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation
NASA Astrophysics Data System (ADS)
Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin
2018-04-01
Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
MODELING FUNCTIONALLY GRADED INTERPHASE REGIONS IN CARBON NANOTUBE REINFORCED COMPOSITES
NASA Technical Reports Server (NTRS)
Seidel, G. D.; Lagoudas, D. C.; Frankland, S. J. V.; Gates, T. S.
2006-01-01
A combination of micromechanics methods and molecular dynamics simulations is used to obtain the effective properties of carbon nanotube reinforced composites with functionally graded interphase regions. The multilayer composite cylinders method accounts for the effects of non-perfect load transfer in carbon nanotube reinforced polymer matrix composites using a piecewise functionally graded interphase. The functional form of the properties in the interphase region, as well as the interphase thickness, is derived from molecular dynamics simulations of carbon nanotubes in a polymer matrix. Results indicate that the functional form of the interphase can have a significant effect on all the effective elastic constants except for the effective axial modulus, for which no noticeable effects are evident.
Lyapunov vector function method in the motion stabilisation problem for nonholonomic mobile robot
NASA Astrophysics Data System (ADS)
Andreev, Aleksandr; Peregudova, Olga
2017-07-01
In this paper we propose a sampled-data control law for the stabilisation problem of nonstationary motion of a nonholonomic mobile robot. We assume that the robot moves on a horizontal surface without slipping. The dynamical model of a mobile robot is considered. The robot has one front free wheel and two rear wheels which are controlled by two independent electric motors. We assume that the controls are piecewise constant signals. Controller design relies on the backstepping procedure with the use of the Lyapunov vector-function method. Theoretical considerations are verified by numerical simulation.
Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients
Kolev, Tzanio V.; Xu, Jinchao; Zhu, Yunrong
2015-08-23
In this study, we extend some of the multilevel convergence results obtained by Xu and Zhu, to the case of second order linear reaction-diffusion equations. Specifically, we consider the multilevel preconditioners for solving the linear systems arising from the linear finite element approximation of the problem, where both diffusion and reaction coefficients are piecewise-constant functions. We discuss in detail the influence of both the discontinuous reaction and diffusion coefficients to the performance of the classical BPX and multigrid V-cycle preconditioner.
Robust and Quantized Wiener Filters for p-Point Spectral Classes.
1980-01-01
Report documentation page residue (AFOSR-TR-80-0425); School of Electrical Engineering, Philadelphia, PA 19104. Abstract fragment: In Section III, we show that a piecewise constant filter also possesses ... determining the optimum piecewise constant filter ... using a band-model for the PSD's ... Poor [3, 4] then considered ... for a particular class of ...
NASA Technical Reports Server (NTRS)
Childs, A. G.
1971-01-01
A discrete steepest ascent method that allows controls which are not piecewise constant (for example, it allows all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then for some undetermined reason, loses its exponential convergence rate. A comparison was also made for the algorithm of Canon, Cullum, and Polak using piecewise constant controls. This algorithm is very competitive with the continuous algorithm.
Ke, Jing; Dou, Hanfei; Zhang, Ximin; Uhagaze, Dushimabararezi Serge; Ding, Xiali; Dong, Yuming
2016-12-01
As the mono-sodium salt of alendronic acid, alendronate sodium exhibits multi-level ionization through the dissociation of its four hydroxyl groups. The dissociation constants of alendronate sodium were determined in this work by studying the piecewise linear relationship between the volume of titrant and the pH value, based on an acid-base potentiometric titration reaction. The distribution curves of alendronate sodium were drawn according to the determined pKa values. There were four dissociation constants of alendronate sodium (pKa1 = 2.43, pKa2 = 7.55, pKa3 = 10.80, pKa4 = 11.99) and 12 existing forms in different pH environments, of which 4 could be ignored.
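Given the reported pKa values, the macroscopic species distribution follows from a standard calculation (sketched below); note this tracks only the five macro-species of a tetraprotic acid, not the 12 forms counted in the paper, and it is not the titration-fitting procedure itself.

```python
import numpy as np

pKa = np.array([2.43, 7.55, 10.80, 11.99])     # values reported above
Ka = 10.0 ** (-pKa)

def species_fractions(pH):
    """Fractions of H4A, H3A-, H2A2-, HA3-, A4- at a given pH.
    alpha_j is proportional to prod(Ka[0:j]) / [H+]^j."""
    h = 10.0 ** (-pH)
    terms = np.array([np.prod(Ka[:j]) / h**j for j in range(5)])
    return terms / terms.sum()

for pH in (1.0, 4.0, 7.4, 9.0, 11.5, 13.0):
    fracs = species_fractions(pH)
    dominant = int(np.argmax(fracs))
    print(f"pH {pH:4.1f}: dominant species has {dominant} protons removed "
          f"(fraction {fracs[dominant]:.2f})")
```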
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehtikangas, O., E-mail: Ossi.Lehtikangas@uef.fi; Tarvainen, T.; Department of Computer Science, University College London, Gower Street, London WC1E 6BT
2015-02-01
The radiative transport equation can be used as a light transport model in a medium with scattering particles, such as biological tissues. In the radiative transport equation, the refractive index is assumed to be constant within the medium. However, in biomedical media, changes in the refractive index can occur between different tissue types. In this work, light propagation in a medium with piece-wise constant refractive index is considered. Light propagation in each sub-domain with a constant refractive index is modeled using the radiative transport equation and the equations are coupled using boundary conditions describing Fresnel reflection and refraction phenomena on the interfaces between the sub-domains. The resulting coupled system of radiative transport equations is numerically solved using a finite element method. The approach is tested with simulations. The results show that this coupled system describes light propagation accurately through comparison with the Monte Carlo method. It is also shown that neglecting the internal changes of the refractive index can lead to erroneous boundary measurements of scattered light.
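The interface condition referred to combines Snell's law with Fresnel coefficients; a hedged, textbook-level sketch of the unpolarized power reflectance used in such couplings (not the paper's finite element coupling code) is:

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized power reflectance at a planar interface from medium n1 to n2
    for incidence angle theta_i (radians). Returns 1.0 beyond the critical
    angle (total internal reflection)."""
    s = n1 / n2 * np.sin(theta_i)
    if s >= 1.0:
        return 1.0
    theta_t = np.arcsin(s)                           # Snell's law
    ci, ct = np.cos(theta_i), np.cos(theta_t)
    rs = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)   # s-polarization amplitude
    rp = (n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)   # p-polarization amplitude
    return 0.5 * (rs**2 + rp**2)

# Example: tissue-like interface, n1 = 1.4 (inside), n2 = 1.0 (outside).
for deg in (0, 20, 40, 45.6, 60):
    R = fresnel_reflectance(1.4, 1.0, np.deg2rad(deg))
    print(f"theta_i = {deg:5.1f} deg  ->  reflectance R = {R:.3f}")
```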
Computation of the anharmonic orbits in two piecewise monotonic maps with a single discontinuity
NASA Astrophysics Data System (ADS)
Li, Yurong; Du, Zhengdong
2017-02-01
In this paper, the bifurcation values for two typical piecewise monotonic maps with a single discontinuity are computed. The variation of the parameter of those maps leads to a sequence of border-collision and period-doubling bifurcations, generating a sequence of anharmonic orbits on the boundary of chaos. The border-collision and period-doubling bifurcation values are computed by the word-lifting technique and the Maple fsolve function or the Newton-Raphson method, respectively. The scaling factors which measure the convergent rates of the bifurcation values and the width of the stable periodic windows, respectively, are investigated. We found that these scaling factors depend on the parameters of the maps, implying that they are not universal. Moreover, if one side of the maps is linear, our numerical results suggest that those quantities converge increasingly. In particular, for the linear-quadratic case, they converge to one of the Feigenbaum constants, δ_F = 4.66920160⋯.
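The numerical recipe (Newton-Raphson on a periodic-orbit condition, then ratios of successive parameter spacings) is easiest to demonstrate on the smooth logistic map, whose scaling factor converges to the Feigenbaum constant quoted above; the hedged sketch below illustrates the technique only and is not a computation for the discontinuous maps studied in the paper.

```python
import numpy as np

def superstable_parameter(n_period, a0):
    """Newton-Raphson for the parameter a of the logistic map f(x) = a*x*(1-x)
    at which the critical point x = 1/2 is periodic with period n_period
    (a 'superstable' orbit).  The derivative w.r.t. a is obtained by iterating
    the variational recurrence alongside the orbit."""
    a = a0
    for _ in range(60):
        x, dx = 0.5, 0.0
        for _ in range(n_period):
            x, dx = a * x * (1 - x), x * (1 - x) + a * (1 - 2 * x) * dx
        step = (x - 0.5) / dx
        a -= step
        if abs(step) < 1e-14:
            break
    return a

# Superstable parameters A_k for period 2^k, with extrapolated initial guesses.
A = [2.0, 1.0 + np.sqrt(5.0)]            # exact values for periods 1 and 2
for k in range(2, 9):
    guess = A[-1] + (A[-1] - A[-2]) / 4.669
    A.append(superstable_parameter(2 ** k, guess))

# The scaling factor delta_k = (A_{k-1} - A_{k-2}) / (A_k - A_{k-1})
# approaches the Feigenbaum constant delta_F = 4.66920160...
for k in range(2, 9):
    delta = (A[k - 1] - A[k - 2]) / (A[k] - A[k - 1])
    print(f"k = {k}:  delta_k = {delta:.6f}")
```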
NASA Astrophysics Data System (ADS)
Bauer, Werner; Behrens, Jörn
2017-04-01
We present a locally conservative, low-order finite element (FE) discretization of the covariant 1D linear shallow-water equations written in split form (cf. [1]). The introduction of additional differential forms (DF) that build pairs with the original ones permits a splitting of these equations into topological momentum and continuity equations and metric-dependent closure equations that apply the Hodge-star. Our novel discretization framework conserves this geometrical structure; in particular, it provides for all DFs proper FE spaces such that the differential operators (here gradient and divergence) hold in strong form. The discrete topological equations simply follow by trivial projections onto piecewise constant FE spaces without need to partially integrate. The discrete Hodge-star operators, representing the discretized metric equations, are realized by nontrivial Galerkin projections (GP). Here they follow by projections onto either a piecewise constant (GP0) or a piecewise linear (GP1) space. Our framework thus provides essentially three different schemes with significantly different behavior. The split scheme using twice GP1 is unstable and shares the same discrete dispersion relation and similar second-order convergence rates as the conventional P1-P1 FE scheme that approximates both velocity and height variables by piecewise linear spaces. The split scheme that applies both GP1 and GP0 is stable and shares the dispersion relation of the conventional P1-P0 FE scheme that approximates the velocity by a piecewise linear and the height by a piecewise constant space with corresponding second- and first-order convergence rates. Exhibiting for both velocity and height fields second-order convergence rates, we might though consider the split GP1-GP0 scheme as a stable version of the conventional P1-P1 FE scheme. For the split scheme applying twice GP0, we are not aware of a corresponding conventional formulation to compare with. Though exhibiting larger absolute error values, it shows similar convergence rates as the other split schemes, but does not provide a satisfactory approximation of the dispersion relation as short waves are propagated much too fast. Despite this, the finding of this new scheme illustrates the potential of our discretization framework as a toolbox to find and to study new FE schemes based on new combinations of FE spaces. [1] Bauer, W. [2016], A new hierarchically-structured n-dimensional covariant form of rotating equations of geophysical fluid dynamics, GEM - International Journal on Geomathematics, 7(1), 31-101.
A simple finite element method for the Stokes equations
Mu, Lin; Ye, Xiu
2017-03-21
The goal of this paper is to introduce a simple finite element method for solving the Stokes equations. The method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric positive definite system with many fewer unknowns. The numerical experiments indicate that the method is accurate.
Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots
ERIC Educational Resources Information Center
Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.
2013-01-01
Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
Statistical methods for investigating quiescence and other temporal seismicity patterns
Matthews, M.V.; Reasenberg, P.A.
1988-01-01
We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
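A minimal simulation sketch of the seismicity model described above, i.e. a Poisson process whose intensity is piecewise constant and drops during a quiescent interval; the rates, interval boundaries, and function name below are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_pw_constant_poisson(breaks, rates):
        # Simulate event times of a Poisson process whose intensity is constant
        # on each interval [breaks[i], breaks[i+1]) with value rates[i].
        times = []
        for t0, t1, lam in zip(breaks[:-1], breaks[1:], rates):
            n = rng.poisson(lam * (t1 - t0))
            times.append(np.sort(rng.uniform(t0, t1, size=n)))
        return np.concatenate(times)

    # A "quiescent" catalogue: a background rate of 2 events/day that drops to
    # 0.3 events/day during days 100-150 (illustrative numbers only).
    events = simulate_pw_constant_poisson([0, 100, 150, 300], [2.0, 0.3, 2.0])
    print(len(events), "events simulated")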
NASA Astrophysics Data System (ADS)
Wang, Qingzhi; Tan, Guanzheng; He, Yong; Wu, Min
2017-10-01
This paper considers a stability analysis issue of piecewise non-linear systems and applies it to intermittent synchronisation of chaotic systems. First, based on piecewise Lyapunov function methods, more general and less conservative stability criteria of piecewise non-linear systems in periodic and aperiodic cases are presented, respectively. Next, intermittent synchronisation conditions of chaotic systems are derived which extend existing results. Finally, Chua's circuit is taken as an example to verify the validity of our methods.
H∞ control problem of linear periodic piecewise time-delay systems
NASA Astrophysics Data System (ADS)
Xie, Xiaochen; Lam, James; Li, Panshuo
2018-04-01
This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.
A tutorial on the piecewise regression approach applied to bedload transport data
Sandra E. Ryan; Laurie S. Porth
2007-01-01
This tutorial demonstrates the application of piecewise regression to bedload data to define a shift in phase of transport so that the reader may perform similar analyses on available data. The use of piecewise regression analysis implicitly recognizes different functions fit to bedload data over varying ranges of flow. The transition from primarily low rates of sand...
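A hedged sketch of the basic idea behind piecewise (segmented) regression with a single breakpoint, fitted here by a simple grid search over candidate breakpoints using a continuous broken-stick basis; the tutorial's own procedure and data differ, and the synthetic data below are purely illustrative.

    import numpy as np

    def fit_broken_stick(x, y, candidates):
        # Continuous two-segment (piecewise linear) regression: for each candidate
        # breakpoint c, fit y ~ 1 + x + max(x - c, 0) by least squares and keep
        # the breakpoint with the smallest residual sum of squares.
        best = None
        for c in candidates:
            X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ beta) ** 2)
            if best is None or sse < best[0]:
                best = (sse, c, beta)
        return best  # (sse, breakpoint, coefficients)

    # Synthetic "transport vs discharge" style data with a change in slope at x = 5.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 80)
    y = 0.2 * x + 1.5 * np.maximum(x - 5.0, 0.0) + rng.normal(0, 0.2, x.size)
    sse, c_hat, beta = fit_broken_stick(x, y, np.linspace(2, 8, 61))
    print("estimated breakpoint:", round(c_hat, 2))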
Modifications of the PCPT method for HJB equations
NASA Astrophysics Data System (ADS)
Kossaczký, I.; Ehrhardt, M.; Günther, M.
2016-10-01
In this paper we revisit the modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification is called the piecewise predicted policy timestepping (PPPT) method and, if properly used, it may be significantly faster. We briefly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems in which a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R^n x L_2 and the favorable eigenvalue behavior of the piecewise linear approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which enhances the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, the TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions present the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object at the same time. Comparisons between three different patterns of object illumination arrangement show a very small impact of the projection acquisition geometry on the image quality.
NASA Astrophysics Data System (ADS)
Liang, Feng; Wang, Dechang
In this paper, we suppose that a planar piecewise Hamiltonian system, with a straight line of separation, has a piecewise generalized homoclinic loop passing through a Saddle-Fold point, and assume that there exists a family of piecewise smooth periodic orbits near the loop. By studying the asymptotic expansion of the first order Melnikov function corresponding to the period annulus, we obtain the formulas of the first six coefficients in the expansion, based on which, we provide a lower bound for the maximal number of limit cycles bifurcated from the period annulus. As applications, two concrete systems are considered. Especially, the first one reveals that a quadratic piecewise Hamiltonian system can have five limit cycles near a generalized homoclinic loop under a quadratic piecewise smooth perturbation. Compared with the smooth case [Horozov & Iliev, 1994; Han et al., 1999], three more limit cycles are found.
Fault detection for piecewise affine systems with application to ship propulsion systems.
Yang, Ying; Linlin, Li; Ding, Steven X; Qiu, Jianbin; Peng, Kaixiang
2017-09-09
In this paper, the design approach of non-synchronized diagnostic observer-based fault detection (FD) systems is investigated for piecewise affine processes via continuous piecewise Lyapunov functions. Considering that the dynamics of piecewise affine systems in different regions can be considerably different, the weighting matrices are used to weight the residual of each region, so as to optimize the fault detectability. A numerical example and a case study on a ship propulsion system are presented in the end to demonstrate the effectiveness of the proposed results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Control of mechanical systems by the mixed "time and expenditure" criterion
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
The optimal controlled motion of a mechanical system, determined by a linear system of ODEs with constant coefficients and piecewise constant control components, is considered. The number of control switching points and the heights of the control steps are considered as preset. The optimized functional is a combination of the classical time criterion and an "expenditure criterion" equal to the total area of all steps of all control components. In the absence of control, the solution of the system is equal to the sum of components (frequency components) corresponding to the different eigenvalues of the matrix of the ODE system. Admissible controls are those that turn to zero (at a time moment that is not predetermined) the previously chosen frequency components of the solution. An algorithm for finding the control switching points, based on the necessary minimum conditions for the mixed criterion, is proposed.
NASA Astrophysics Data System (ADS)
Vjačeslavov, N. S.
1980-02-01
In this paper estimates are found for L_p R_n(f), the least deviation in the L_p-metric, 0 < p ≤ ∞, of a piecewise analytic function f from the rational functions of degree at most n. It is shown that these estimates are sharp in a well-defined sense. Bibliography: 12 titles.
Cubic Zig-Zag Enrichment of the Classical Kirchhoff Kinematics for Laminated and Sandwich Plates
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2012-01-01
A detailed analysis and examples are presented that show how to enrich the kinematics of classical Kirchhoff plate theory by appending to them a set of continuous piecewise-cubic functions. This analysis is used to obtain functions that contain the effects of laminate heterogeneity and asymmetry on the variations of the inplane displacements and transverse shearing stresses, for use with a {3, 0} plate theory in which these distributions are specified a priori. The functions used for the enrichment are based on the improved zig-zag plate theory presented recently by Tessler, Di Sciuva, and Gherlone. With the approach presented herein, the inplane displacements are represented by a set of continuous piecewise-cubic functions, and the transverse shearing stresses and strains are represented by a set of piecewise-quadratic functions that are discontinuous at the ply interfaces.
Weinmann, Andreas; Storath, Martin
2015-01-01
Signals with discontinuities appear in many problems in the applied sciences, ranging from mechanics and electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and discretized versions known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
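For orientation, the discrete 1D Potts (piecewise constant Mumford–Shah) functional mentioned above can be minimized exactly by a classical O(n^2) dynamic program when the measurements are direct; the sketch below implements that textbook solver and is not the authors' new iterative strategy for the indirect case.

    import numpy as np

    def potts_1d(y, gamma):
        # Exact minimizer of sum_i (u_i - y_i)^2 + gamma * (number of jumps of u)
        # via a dynamic program over the start index of the last segment.
        n = len(y)
        s1 = np.concatenate(([0.0], np.cumsum(y)))            # prefix sums
        s2 = np.concatenate(([0.0], np.cumsum(np.square(y))))

        def seg_cost(l, r):  # SSE of the best constant fit on y[l:r+1]
            m = r - l + 1
            s, ss = s1[r + 1] - s1[l], s2[r + 1] - s2[l]
            return ss - s * s / m

        B = np.full(n + 1, np.inf)
        B[0] = -gamma                      # so the first segment adds no jump penalty
        last = np.zeros(n, dtype=int)
        for r in range(n):
            for l in range(r + 1):
                c = B[l] + gamma + seg_cost(l, r)
                if c < B[r + 1]:
                    B[r + 1], last[r] = c, l
        # Backtrack the segments and fill each one with its mean value.
        u, r = np.empty(n), n - 1
        while r >= 0:
            l = last[r]
            u[l:r + 1] = y[l:r + 1].mean()
            r = l - 1
        return u

    noisy = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])
    noisy = noisy + np.random.default_rng(2).normal(0, 0.3, 100)
    print(potts_1d(noisy, gamma=1.0)[:5])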
Hamiltonian flows with random-walk behaviour originating from zero-sum games and fictitious play
NASA Astrophysics Data System (ADS)
van Strien, Sebastian
2011-06-01
In this paper we introduce Hamiltonian dynamics inspired by zero-sum games (best response and fictitious play dynamics). The Hamiltonian functions we consider are continuous and piecewise affine (and of a very simple form). It follows that the corresponding Hamiltonian vector fields are discontinuous and multi-valued. Differential equations with discontinuities along a hyperplane are often called 'Filippov systems', and there is a large literature on such systems, see for example (di Bernardo et al 2008 Piecewise-Smooth Dynamical Systems: Theory and Applications (Applied Mathematical Sciences vol 163) (London: Springer); Kunze 2000 Non-Smooth Dynamical Systems (Lecture Notes in Mathematics vol 1744) (Berlin: Springer); Leine and Nijmeijer 2004 Dynamics and Bifurcations of Non-smooth Mechanical Systems (Lecture Notes in Applied and Computational Mechanics vol 18) (Berlin: Springer)). The special feature of the systems we consider here is that they have discontinuities along a large number of intersecting hyperplanes. Nevertheless, somewhat surprisingly, the flow corresponding to such a vector field exists, is unique and continuous. We believe that these vector fields deserve attention, because it turns out that the resulting dynamics are rather different from those found in more classically defined Hamiltonian dynamics. The vector field is extremely simple: outside codimension-one hyperplanes it is piecewise constant, and so the flow φ_t is piecewise a translation (without stationary points). Even so, the dynamics can be rather rich and complicated, as a detailed study of specific examples shows (see for example theorems 7.1 and 7.2 and also (Ostrovski and van Strien 2011 Regul. Chaotic Dyn. 16 129-54)). In the last two sections of the paper we give some applications to game theory, and finish by posing a version of the Palis conjecture in the context of the class of non-smooth systems studied in this paper. To Jacob Palis on his 70th birthday.
Use of autocorrelation scanning in DNA copy number analysis.
Zhang, Liangcai; Zhang, Li
2013-11-01
Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. Autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets. lzhangli@mdanderson.org. Supplementary data are available at Bioinformatics online.
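A minimal sketch of the idea of an autocorrelation scanning profile, i.e. the lag-1 autocorrelation of probe measurements computed in sliding windows along a chromosome; the window size, data, and function name are illustrative assumptions rather than the published tool's implementation.

    import numpy as np

    def autocorrelation_scanning_profile(values, window=100):
        # Lag-1 autocorrelation of probe-level measurements computed in sliding
        # windows along the chromosome; persistently high values indicate
        # correlated measurement errors between neighboring probes.
        profile = np.full(len(values), np.nan)
        half = window // 2
        for i in range(half, len(values) - half):
            w = values[i - half:i + half]
            profile[i] = np.corrcoef(w[:-1], w[1:])[0, 1]
        return profile

    rng = np.random.default_rng(3)
    log_ratios = rng.normal(0, 0.2, 1000)      # idealized uncorrelated probe noise
    print(np.nanmean(autocorrelation_scanning_profile(log_ratios)))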
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1994-01-01
A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural-acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two-dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural-acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography moves closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models, where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
Observational constraint on dynamical evolution of dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Yungui; Cai, Rong-Gen; Chen, Yun
2010-01-01
We use the Constitution supernova, the baryon acoustic oscillation, the cosmic microwave background, and the Hubble parameter data to analyze the evolution property of dark energy. We obtain different results when we fit different baryon acoustic oscillation data combined with the Constitution supernova data to the Chevallier-Polarski-Linder model. We find that the difference stems from the different values of Ω_m0. We also fit the observational data to the model-independent piecewise constant parametrization. Four redshift bins with boundaries at z = 0.22, 0.53, 0.85 and 1.8 were chosen for the piecewise constant parametrization of the equation of state parameter w(z) of dark energy. We find no significant evidence for evolving w(z). With the addition of the Hubble parameter, the constraint on the equation of state parameter at high redshift is improved by 70%. The marginalization of the nuisance parameter connected to the supernova distance modulus is discussed.
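To make the piecewise constant parametrization concrete: for an equation of state w(z) that is constant within each redshift bin, integrating d ln ρ = 3(1 + w) d ln(1+z) gives a dark energy density that is a product of power-law factors, one per crossed bin. The sketch below evaluates that product using the bin boundaries quoted in the abstract but with made-up w values; redshifts beyond the last boundary are simply capped at that boundary in this illustration.

    import numpy as np

    def rho_de_ratio(z, edges, w_bins):
        # rho_DE(z) / rho_DE(0) for w(z) piecewise constant on [edges[i], edges[i+1]).
        ratio = 1.0
        for z_lo, z_hi, w in zip(edges[:-1], edges[1:], w_bins):
            if z <= z_lo:
                break
            z_top = min(z, z_hi)
            ratio *= ((1.0 + z_top) / (1.0 + z_lo)) ** (3.0 * (1.0 + w))
        return ratio

    # Bin boundaries as in the abstract; the w values here are purely illustrative.
    edges = [0.0, 0.22, 0.53, 0.85, 1.8]
    w_bins = [-1.0, -0.9, -1.1, -1.0]
    print(rho_de_ratio(1.0, edges, w_bins))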
Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir
2017-01-01
In order to bridge microscopic molecular motion with macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization is reduced to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566
NASA Astrophysics Data System (ADS)
Nakae, T.; Ryu, T.; Matsuzaki, K.; Rosbi, S.; Sueoka, A.; Takikawa, Y.; Ooi, Y.
2016-09-01
In the torque converter, the damper of the lock-up clutch is used to effectively absorb the torsional vibration. The damper is designed using a piecewise-linear spring with three stiffness stages. However, a nonlinear vibration, referred to as a subharmonic vibration of order 1/2, occurred around the switching point in the piecewise-linear restoring torque characteristics because of the nonlinearity. In the present study, we analyze vibration reduction for subharmonic vibration. The model used herein includes the torque converter, the gear train, and the differential gear. The damper is modeled by a nonlinear rotational spring of the piecewise-linear spring. We focus on the optimum design of the spring characteristics of the damper in order to suppress the subharmonic vibration. A piecewise-linear spring with five stiffness stages is proposed, and the effect of the distance between switching points on the subharmonic vibration is investigated. The results of our analysis indicate that the subharmonic vibration can be suppressed by designing a damper with five stiffness stages to have a small spring constant ratio between the neighboring springs. The distances between switching points must be designed to be large enough that the amplitude of the main frequency component of the systems does not reach the neighboring switching point.
NASA Astrophysics Data System (ADS)
Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.
2017-11-01
We consider the methodology of numerical schemes development for two-dimensional vortex method. We describe two different approaches to deriving integral equation for unknown vortex sheet intensity. We simulate the velocity of the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume intensity distributions of free and attached vortex sheets and attached source sheet to be approximated with piecewise constant or piecewise linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have a different computational cost. The study shows that a Galerkin-type approach to solving boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to raise significantly the accuracy of vortex sheet intensity computation and improve the quality of velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written down in the invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Exponentially accurate approximations to piece-wise smooth periodic functions
NASA Technical Reports Server (NTRS)
Greer, James; Banerjee, Saheb
1995-01-01
A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.
Limit Cycle Bifurcations by Perturbing a Piecewise Hamiltonian System with a Double Homoclinic Loop
NASA Astrophysics Data System (ADS)
Xiong, Yanqin
2016-06-01
This paper is concerned with the bifurcation problem of limit cycles by perturbing a piecewise Hamiltonian system with a double homoclinic loop. First, the derivative of the first Melnikov function is provided. Then, we use it, together with the analytic method, to derive the asymptotic expansion of the first Melnikov function near the loop. Meanwhile, we present the first coefficients in the expansion, which can be applied to study the limit cycle bifurcation near the loop. We give sufficient conditions for this system to have 14 limit cycles in the neighborhood of the loop. As an application, a piecewise polynomial Liénard system is investigated, finding six limit cycles with the help of the obtained method.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
ERIC Educational Resources Information Center
Sinclair, Nathalie; Armstrong, Alayne
2011-01-01
Piecewise linear functions and story graphs are concepts usually associated with algebra, but in the authors' classroom, they found success teaching this topic in a distinctly geometrical manner. The focus of the approach was less on learning geometric concepts and more on using spatial and kinetic reasoning. It not only supports the learning of…
Application of Markov Models for Analysis of Development of Psychological Characteristics
ERIC Educational Resources Information Center
Kuravsky, Lev S.; Malykh, Sergey B.
2004-01-01
A technique to study combined influence of environmental and genetic factors on the base of changes in phenotype distributions is presented. Histograms are exploited as base analyzed characteristics. A continuous time, discrete state Markov process with piece-wise constant interstate transition rates is associated with evolution of each histogram.…
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images, even those for which TV was designed, particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. Finally, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules
He, Peng; Wei, Biao; Wang, Steve; ...
2013-01-01
Dentin is a hierarchically structured biomineralized composite material, and dentin's tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object from truncated projections alone, let alone over a limited angular range. Recently, interior tomography was developed to reconstruct a region-of-interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization, under the condition that the ROI is piecewise constant. In this paper, we employ a TV minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to the filtered backprojection (FBP) reconstruction, our reconstruction method reduces noise and suppresses artifacts. Volume rendering confirms the merits of our method in terms of preserving the interior microstructure of the dentin specimen.
Puso, M. A.; Kokko, E.; Settgast, R.; ...
2014-10-22
An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine the effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
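The essence of such a piecewise method can be illustrated with a much simpler case: for a periodic waveform that is piecewise constant in time, each Fourier harmonic is a sum of closed-form integrals over the constant segments, so no numerical quadrature is needed. The waveform below is an idealized quasi-square line current, not the actual ITER PF converter current, and the function name is an assumption for illustration.

    import numpy as np

    def harmonics_piecewise_constant(breaks, values, T, n_max):
        # Fourier coefficients of a T-periodic waveform equal to values[i] on
        # [breaks[i], breaks[i+1]); each harmonic is a sum of per-segment
        # closed-form integrals (the "piecewise method").
        w = 2.0 * np.pi / T
        coeffs = []
        for n in range(1, n_max + 1):
            a = sum(c * (np.sin(n * w * t2) - np.sin(n * w * t1)) / (n * w)
                    for t1, t2, c in zip(breaks[:-1], breaks[1:], values)) * 2.0 / T
            b = sum(c * (np.cos(n * w * t1) - np.cos(n * w * t2)) / (n * w)
                    for t1, t2, c in zip(breaks[:-1], breaks[1:], values)) * 2.0 / T
            coeffs.append((n, a, b, np.hypot(a, b)))
        return coeffs

    # Idealized quasi-square-wave line current (illustrative values only).
    breaks = [0.0, 1/3, 1/2, 5/6, 1.0]
    values = [1.0, 0.0, -1.0, 0.0]
    for n, a, b, mag in harmonics_piecewise_constant(breaks, values, 1.0, 7):
        print(n, round(mag, 4))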
Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh
2011-03-15
Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP)-projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography", Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny piece of knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in the pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography", Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography", Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of approximately 500 mm. The projection data were truncated either moderately, to limit the detector coverage to a 350 mm diameter of the object, or severely, to cover a 199 mm diameter. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel values, was less than 2.0% or 4.5% for the moderate or severe truncation cases, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy given the tiny piece of knowledge that there exists a nearly piecewise constant subregion.
Staley, James R; Burgess, Stephen
2017-05-01
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs meta-regression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
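A small sketch of how the piecewise linear method's output can be assembled: given stratum boundaries and one LACE estimate (gradient) per stratum, the exposure-outcome curve is the continuous piecewise linear function obtained by accumulating gradient times overlap from a reference exposure. The BMI strata and LACE values below are made up for illustration; this is not the authors' implementation.

    import numpy as np

    def piecewise_linear_curve(edges, lace, x_grid):
        # Continuous piecewise linear exposure-outcome curve whose slope inside
        # stratum j (exposure in [edges[j], edges[j+1])) equals the LACE estimate
        # lace[j]; the curve is anchored at zero at the lowest stratum boundary.
        y = []
        for x in x_grid:
            total = 0.0
            for lo, hi, g in zip(edges[:-1], edges[1:], lace):
                total += g * max(0.0, min(x, hi) - lo)
            y.append(total)
        return np.array(y)

    # Illustrative BMI strata (kg/m^2) and made-up LACE estimates (mmHg per unit BMI).
    edges = [20.0, 25.0, 30.0, 35.0]
    lace = [0.2, 0.5, 0.9]
    print(piecewise_linear_curve(edges, lace, np.linspace(20.0, 35.0, 7)))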
Bardhan, Jaydeep P; Jungwirth, Pavel; Makowski, Lee
2012-09-28
Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular "linear response" model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution).
Wang, Chunhua; Liu, Xiaoming; Xia, Hu
2017-03-01
In this paper, two kinds of novel ideal active flux-controlled smooth multi-piecewise quadratic nonlinearity memristors with multi-piecewise continuous memductance functions are presented. The pinched hysteresis loop characteristics of the two memristor models are verified by building a memristor emulator circuit. The two memristor models are used to establish a new memristive multi-scroll Chua's circuit, which can generate 2N-scroll and 2N+1-scroll chaotic attractors without any other ordinary nonlinear function. Furthermore, coexisting multi-scroll chaotic attractors are found in the proposed memristive multi-scroll Chua's circuit. Phase portraits, Lyapunov exponents, bifurcation diagrams, and equilibrium point analysis have been used to investigate the basic dynamics of the memristive multi-scroll Chua's circuit. The consistency of the circuit implementation and the numerical simulation verifies the effectiveness of the system design.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Quadratic spline subroutine package
Rasmussen, Lowell A.
1982-01-01
A continuous piecewise quadratic function with a continuous first derivative is devised for approximating a single-valued, but unknown, function represented by a set of discrete points. The quadratic is proposed as a treatment intermediate between using the angular (but reliable, easily constructed and manipulated) piecewise linear function and using the smoother (but occasionally erratic) cubic spline. Neither iteration nor the solution of a system of simultaneous equations is necessary to determine the coefficients. Several properties of the quadratic function are given. A set of five short FORTRAN subroutines is provided for generating the coefficients (QSC), finding function values and derivatives (QSY), integrating (QSI), finding extrema (QSE), and computing arc length and the curvature-squared integral (QSK). (USGS)
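For readers who want to experiment with a C1 piecewise quadratic in Python rather than FORTRAN, SciPy's quadratic B-spline interpolant produces a curve of the same class; note that, unlike the QSC construction described above, SciPy does solve a small banded system internally, so this only illustrates the kind of curve the package produces rather than porting its algorithm.

    import numpy as np
    from scipy.interpolate import make_interp_spline

    # Discrete, single-valued data to be approximated (illustrative values).
    x = np.array([0.0, 1.0, 2.5, 3.0, 4.5, 6.0])
    y = np.array([0.0, 0.8, 0.9, 0.1, -0.6, 0.2])

    spl = make_interp_spline(x, y, k=2)      # C1 piecewise quadratic interpolant
    xs = np.linspace(x[0], x[-1], 7)
    print(spl(xs))                           # function values (cf. QSY)
    print(spl.derivative()(xs))              # continuous first derivative
    print(spl.integrate(x[0], x[-1]))        # integral over the data range (cf. QSI)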
Two-body loss rates for reactive collisions of cold atoms
NASA Astrophysics Data System (ADS)
Cop, C.; Walser, R.
2018-01-01
We present an effective two-channel model for reactive collisions of cold atoms. It augments elastic molecular channels with an irreversible, inelastic loss channel. Scattering is studied with the distorted-wave Born approximation and yields general expressions for angular momentum resolved cross sections as well as two-body loss rates. Explicit expressions are obtained for piecewise constant potentials. A pole expansion reveals simple universal shape functions for cross sections and two-body loss rates in agreement with the Wigner threshold laws. This is applied to collisions of metastable 20Ne and 21Ne atoms, which decay primarily through exothermic Penning or associative ionization processes. From a numerical solution of the multichannel Schrödinger equation using the best currently available molecular potentials, we have obtained synthetic scattering data. Using the two-body loss shape functions derived in this paper, we can match these scattering data very well.
Optimal Portfolio Selection Under Concave Price Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental differences from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and, more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
Existence of almost periodic solutions for forced perturbed systems with piecewise constant argument
NASA Astrophysics Data System (ADS)
Xia, Yonghui; Huang, Zhenkun; Han, Maoan
2007-09-01
Certain almost periodic forced perturbed systems with piecewise constant argument are considered in this paper. By using the contraction mapping principle and some new analysis techniques, sufficient conditions are obtained for the existence and uniqueness of almost periodic solutions of these systems. Furthermore, we study the harmonic and subharmonic solutions of these systems. The obtained results generalize previously known results such as [A.M. Fink, Almost Periodic Differential Equations, Lecture Notes in Math., vol. 377, Springer-Verlag, Berlin, 1974; C.Y. He, Almost Periodic Differential Equations, Higher Education Press, Beijing, 1992 (in Chinese); Z.S. Lin, The existence of almost periodic solution of linear system, Acta Math. Sinica 22 (5) (1979) 515-528 (in Chinese); C.Y. He, Existence of almost periodic solutions of perturbation systems, Ann. Differential Equations 9 (2) (1992) 173-181; Y.H. Xia, M. Lin, J. Cao, The existence of almost periodic solutions of certain perturbation system, J. Math. Anal. Appl. 310 (1) (2005) 81-96]. Finally, a tangible example and its numerical simulations show the feasibility of our results, the comparison between the non-perturbed and perturbed systems, and the relation between systems with and without piecewise constant argument.
Comparison between PVI2D and Abreu–Johnson’s Model for Petroleum Vapor Intrusion Assessment
Yao, Yijun; Wang, Yue; Verginelli, Iason; Suuberg, Eric M.; Ye, Jianfeng
2018-01-01
Recently, we developed a two-dimensional analytical petroleum vapor intrusion model, PVI2D (petroleum vapor intrusion, two-dimensional), which can help users easily visualize soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, reaction rate constant, soil characteristics, and building features. In this study, we made a full comparison of the results returned by PVI2D and those obtained using Abreu and Johnson's three-dimensional numerical model (AJM). These comparisons, examined as a function of the source strength, source depth, and reaction rate constant, show that PVI2D provides soil gas concentration profiles and source-to-indoor air attenuation factors similar (within one order of magnitude) to those given by the AJM. The differences between the two models can be ascribed to some simplifying assumptions used in PVI2D and to some numerical limitations of the AJM in simulating strictly piecewise aerobic biodegradation and no-flux boundary conditions. Overall, the obtained results show that for cases involving a homogeneous source and soil, PVI2D can represent a valid alternative to more rigorous three-dimensional numerical models. PMID:29398981
A piecewise mass-spring-damper model of the human breast.
Cai, Yiqing; Chen, Lihua; Yu, Winnie; Zhou, Jie; Wan, Frances; Suh, Minyoung; Chow, Daniel Hung-Kay
2018-01-23
Previous models to predict breast movement whilst performing physical activities have, erroneously, assumed uniform elasticity within the breast. Consequently, the predicted displacements have not yet been satisfactorily validated. In this study, real-time motion capture of the natural vibrations that followed after a breast was raised and allowed to fall freely revealed an obvious difference in the vibration characteristics above and below the static equilibrium position. This implied that the elastic and viscous damping properties of a breast could vary under extension or compression. Therefore, a new piecewise mass-spring-damper model of a breast was developed with theoretical equations to derive values for its spring constants and damping coefficients from free-falling breast experiments. The effective breast mass was estimated from the breast volume extracted from a 3D body scanned image. The derived spring constant (k_a = 73.5 N m^-1) above the static equilibrium position was significantly smaller than that below it (k_b = 658 N m^-1), whereas the respective damping coefficients were similar (c_a = 1.83 N s m^-1, c_b = 2.07 N s m^-1). These values were used to predict the nipple displacement during bare-breasted running for validation. The root-mean-square error between the theoretical and experimental amplitudes was 2.6% or less, so the piecewise mass-spring-damper model and equations were considered to have been successfully validated. This provides a theoretical basis for further research into the dynamic, nonlinear viscoelastic properties of different breasts and the prediction of external forces for the necessary breast support during different sports activities. Copyright © 2017 Elsevier Ltd. All rights reserved.
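A minimal simulation sketch of this piecewise mass-spring-damper idea, using the spring and damping constants quoted above; the effective mass and the release height are placeholder assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Derived constants quoted in the abstract (SI units); the effective mass is
# an assumed placeholder value, not taken from the paper.
k_a, k_b = 73.5, 658.0      # N/m, above / below static equilibrium
c_a, c_b = 1.83, 2.07       # N*s/m
m = 0.4                     # kg (assumed effective breast mass)

def rhs(t, y):
    """Piecewise dynamics: spring/damper parameters switch at equilibrium x = 0."""
    x, v = y
    k, c = (k_a, c_a) if x > 0.0 else (k_b, c_b)
    return [v, -(k * x + c * v) / m]

# Free vibration after release 2 cm above the static equilibrium position
sol = solve_ivp(rhs, (0.0, 2.0), [0.02, 0.0], max_step=1e-3)
```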
An Ensemble of Neural Networks for Stock Trading Decision Making
NASA Astrophysics Data System (ADS)
Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming
Detection of stock turning signals is an interesting subject arising in numerous financial and economic planning problems. In this paper, an Ensemble Neural Network system with an Intelligent Piecewise Linear Representation for stock turning point detection is presented. The Intelligent Piecewise Linear Representation method generates numerous stock turning signals from the historical database; the Ensemble Neural Network system is then applied to train on these patterns and retrieve similar stock price patterns from historical data. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and are applied to forecast the future turning points from the set of test data. Experimental results demonstrate that the hybrid system can make a significant and constant amount of profit when compared with other approaches using stock data available in the market.
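As a rough illustration of a piecewise linear representation, the sketch below uses a generic top-down segmentation of a price series into linear pieces; the paper's "intelligent" PLR variant and its ensemble neural network are not reproduced, and the tolerance is an assumption.

```python
import numpy as np

def plr_breakpoints(prices, tol):
    """Top-down piecewise linear representation: recursively split a price
    series at the sample farthest from the chord until the maximum deviation
    is below tol.  The returned breakpoints are candidate turning points."""
    def split(lo, hi, out):
        x = np.arange(lo, hi + 1)
        chord = np.interp(x, [lo, hi], [prices[lo], prices[hi]])
        dev = np.abs(prices[lo:hi + 1] - chord)
        k = int(np.argmax(dev))
        if dev[k] > tol and hi - lo > 1:
            split(lo, lo + k, out)
            out.append(lo + k)
            split(lo + k, hi, out)
    out = [0]
    split(0, len(prices) - 1, out)
    out.append(len(prices) - 1)
    return out

# Synthetic random-walk prices, tolerance chosen arbitrarily for illustration
prices = np.cumsum(np.random.default_rng(0).normal(size=200)) + 100
print(plr_breakpoints(prices, tol=2.0))
```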
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L(sub 2) function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2013-01-01
Large deformation displacement transfer functions were formulated for deformed shape predictions of highly flexible slender structures like aircraft wings. In the formulation, the embedded beam (depth-wise cross section of the structure along the surface strain-sensing line) was first evenly discretized into multiple small domains, with surface strain-sensing stations located at the domain junctures. Thus, the surface strain (bending strain) variation within each domain could be expressed with a linear or nonlinear function. Such a piecewise approach enabled piecewise integrations of the embedded beam curvature equations [classical (Eulerian), physical (Lagrangian), and shifted curvature equations] to yield closed-form slope and deflection equations in recursive form.
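A minimal sketch of the piecewise-integration idea behind such displacement transfer functions: surface strains are converted to curvatures and integrated twice along the strain-sensing line. The strain profile, half-depth, and cantilever boundary conditions below are illustrative assumptions, not values from the report.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# With surface strain eps(x) measured at the domain junctures and a known
# half-depth c(x) of the embedded beam, the curvature is kappa = eps / c; two
# successive integrations give slope and deflection (cantilever: zero slope
# and deflection at the fixed end x = 0).
x = np.linspace(0.0, 2.0, 21)            # strain-sensing stations [m]
eps = 1e-3 * (1.0 - x / 2.0)             # assumed linear surface strain
c = np.full_like(x, 0.05)                # assumed half-depth [m]

kappa = eps / c
slope = cumulative_trapezoid(kappa, x, initial=0.0)
deflection = cumulative_trapezoid(slope, x, initial=0.0)
```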
Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis
Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel
2013-01-01
This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow estimating its output. A methodology to test the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran; Lung, Shun-Fat
2017-01-01
For shape predictions of structures under large geometrically nonlinear deformations, Curved Displacement Transfer Functions were formulated based on a curved displacement, traced by a material point from the undeformed position to the deformed position. The embedded beam (depth-wise cross section of a structure along a surface strain-sensing line) was discretized into multiple small domains, with domain junctures matching the strain-sensing stations. Thus, the surface strain distribution could be described with a piecewise linear or a piecewise nonlinear function. The discretization approach enabled piecewise integrations of the embedded-beam curvature equations to yield the Curved Displacement Transfer Functions, expressed in terms of embedded beam geometrical parameters and surface strains. By entering the surface strain data into the Displacement Transfer Functions, deflections along each embedded beam can be calculated at multiple points for mapping the overall structural deformed shapes. Finite-element linear and nonlinear analyses of a tapered cantilever tubular beam were performed to generate linear and nonlinear surface strains and the associated deflections to be used for validation. The shape prediction accuracies were then determined by comparing the theoretical deflections with the finite-element-generated deflections. The results show that the newly developed Curved Displacement Transfer Functions are very accurate for shape predictions of structures under large geometrically nonlinear deformations.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
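The core reduction, replacing the nonlinear open-circuit-potential curve by a continuous piecewise-linear function between knots, can be sketched as follows. The OCV expression and the uniform knot placement here are assumptions for illustration; the paper selects the knots with an optimal knot placement technique rather than uniform spacing.

```python
import numpy as np

def ocv_true(soc):
    # Made-up smooth open circuit potential curve, for illustration only
    return 3.0 + 1.2 * soc - 0.3 * np.exp(-20.0 * soc)

knots = np.linspace(0.0, 1.0, 8)          # knot placement (assumed uniform here)
values = ocv_true(knots)

def ocv_pwl(soc):
    """Continuous piecewise-linear OCV evaluated by linear interpolation."""
    return np.interp(soc, knots, values)

soc = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(ocv_pwl(soc) - ocv_true(soc)))
print(max_err)
```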
Perturbations of Jacobi polynomials and piecewise hypergeometric orthogonal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neretin, Yu A
2006-12-31
A family of non-complete orthogonal systems of functions on the ray [0, ∞) depending on three real parameters α, β, θ is constructed. The elements of this system are piecewise hypergeometric functions with a singularity at x = 1. For θ = 0 these functions vanish on [1, ∞) and the system reduces to the Jacobi polynomials P_n^{α,β} on the interval [0, 1]. In the general case the functions constructed can be regarded as an interpretation of the expressions P_{n+θ}^{α,β}. They are eigenfunctions of an exotic Sturm-Liouville boundary-value problem for the hypergeometric differential operator. The spectral measure for this problem is found.
MAP Estimators for Piecewise Continuous Inversion
2016-08-08
M. M. Dunlop and A. M. Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua ... it is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP ...
Modeling of electrical capacitance tomography with the use of complete electrode model
NASA Astrophysics Data System (ADS)
Fang, Weifu
2016-10-01
We introduce the complete electrode model in the modeling of electrical capacitance tomography (ECT), extending the commonly used electrode model. We show that the solution of the complete electrode model approaches the solution of the corresponding common electrode model as the impedance effect on the electrodes vanishes. We also derive the nonlinear relation between capacitance and permittivity and the sensitivity maps with respect to both the permittivity and the impedance constants, and present a finite difference scheme in polar coordinates for the case of circular ECT sensors that retains the continuity of displacement current with piecewise-constant permittivities.
Separated Component-Based Restoration of Speckled SAR Images
2014-01-01
One of the simplest approaches for speckle noise reduction is known as multi-look processing. It involves non-coherently summing the independent ... image is assumed to be piecewise smooth [21], [22], [23]. It has been shown that TV regularization often yields images with the stair-casing effect ... as a function f, is to be decomposed into a sum of two components f = u + v, where u represents the cartoon or geometric (i.e., piecewise smooth) component ...
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
On the Convergence Analysis of the Optimized Gradient Method
Kim, Donghwan; Fessler, Jeffrey A.
2016-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707
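For reference, below is a sketch of the optimized gradient method iteration as it is usually stated for L-smooth convex objectives; the coefficient choices should be checked against the paper before relying on them, and the least-squares example is illustrative only.

```python
import numpy as np

def ogm(grad, x0, L, n_iter):
    """Sketch of an OGM-style iteration for minimizing an L-smooth convex f.
    Coefficients follow the form commonly quoted for the optimized gradient
    method; treat them as an assumption to verify against the paper."""
    x, y = x0.copy(), x0.copy()
    theta = 1.0
    for i in range(n_iter):
        y_new = x - grad(x) / L
        if i < n_iter - 1:
            theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        else:  # final iteration uses a larger momentum parameter
            theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 8.0 * theta ** 2))
        x = y_new + ((theta - 1.0) / theta_new) * (y_new - y) \
                  + (theta / theta_new) * (y_new - x)
        y, theta = y_new, theta_new
    return x

# Example: least squares f(x) = 0.5*||A x - b||^2, L = largest eigenvalue of A^T A
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
L = np.linalg.eigvalsh(A.T @ A).max()
x_hat = ogm(lambda x: A.T @ (A @ x - b), np.zeros(5), L, 50)
```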
Theodorakis, Stavros
2003-06-01
We emulate the cubic term Ψ³ in the nonlinear Schrödinger equation by a piecewise linear term, thus reducing the problem to a set of uncoupled linear inhomogeneous differential equations. The resulting analytic expressions constitute an excellent approximation to the exact solutions, as is explicitly shown in the case of the kink, the vortex, and a delta function trap. Such a piecewise linear emulation can be used for any differential equation where the only nonlinearity is a Ψ³ one. In particular, it can be used for the nonlinear Schrödinger equation in the presence of harmonic traps, giving analytic Bose-Einstein condensate solutions that reproduce very accurately the numerically calculated ones in one, two, and three dimensions.
NASA Astrophysics Data System (ADS)
Goryk, A. V.; Koval'chuk, S. B.
2018-05-01
An exact elasticity theory solution is presented for the problem of plane bending of a narrow layered composite cantilever beam by tangential and normal loads distributed on its free end. Components of the stress-strain state are found for the whole package of layers by directly integrating the differential equations of the plane elasticity problem, using an analytic representation of the piecewise constant functions describing the mechanical characteristics of the layer materials. The continuous solution obtained is realized for a four-layer beam, taking into account kinematic boundary conditions simulating the rigid fixation of one end. The solution obtained allows one to predict the strength and stiffness of composite cantilever beams and to construct applied analytical solutions for various problems on the elastic bending of layered beams.
Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)
NASA Astrophysics Data System (ADS)
Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya
2017-08-01
Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
Concentric layered Hermite scatterers
NASA Astrophysics Data System (ADS)
Astheimer, Jeffrey P.; Parker, Kevin J.
2018-05-01
The long wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both the acoustic and electromagnetic regimes. Furthermore, the relationship between backscattered amplitude and wavenumber k was shown to follow power laws higher than the Rayleigh scattering k^2 power law when the inhomogeneity had a material composition that conformed to a Gaussian weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers. In this case the inhomogeneous material property conforms to a piecewise constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneous function and its spatial transform. This development opens an additional path for the construction of, and use of, scatterers with unique power law behavior.
Building an Understanding of Functions: A Series of Activities for Pre-Calculus
ERIC Educational Resources Information Center
Carducci, Olivia M.
2008-01-01
Building block toys can be used to illustrate various concepts connected with functions including graphs and rates of change of linear and exponential functions, piecewise functions, and composition of functions. Five brief activities suitable for a pre-calculus course are described.
Balance Contrast Enhancement using piecewise linear stretching
NASA Astrophysics Data System (ADS)
Rahavan, R. V.; Govil, R. C.
1993-04-01
Balance Contrast Enhancement is one of the techniques employed to produce color composites with increased color contrast. It equalizes the three images used for color composition in range and mean. This results in a color composite with large variation in hue. Here, it is shown that piecewise linear stretching can be used for performing the Balance Contrast Enhancement. In comparison with the Balance Contrast Enhancement Technique using parabolic segment as transfer function (BCETP), the method presented here is algorithmically simple, constraint-free and produces comparable results.
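A minimal sketch of a piecewise linear stretch that equalizes the range and mean of each band before color composition; the target minimum, mean, and maximum are assumptions, and this is not the exact published transfer function.

```python
import numpy as np

def balance_stretch(band, out_min, out_mean, out_max):
    """Piecewise linear stretch mapping the band's minimum, mean, and maximum
    onto a common output minimum, mean, and maximum, so that the three bands
    of a colour composite are equalized in range and mean."""
    b = band.astype(float)
    xp = [b.min(), b.mean(), b.max()]
    fp = [out_min, out_mean, out_max]
    return np.interp(b, xp, fp)

# Equalize three bands to the range [0, 255] with mean 128 before compositing
rng = np.random.default_rng(1)
bands = [rng.gamma(2.0, 30.0, size=(64, 64)) for _ in range(3)]
composite = np.dstack([balance_stretch(b, 0, 128, 255) for b in bands])
```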
1984-07-01
piecewise constant energy dependence. This is a seven-dimensional problem with time dependence, three spatial and two angular or directional variables, and ... in extending the computer implementation of the method to time- and energy-dependent problems, and to solving and validating this technique on a ... problems they have severe limitations. The Monte Carlo method usually requires the use of many hours of expensive computer time, and for deep ...
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
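A small simulation sketch of count data with a piecewise constant event rate, using the Poisson distribution; interval lengths, rates, and sample size are illustrative assumptions.

```python
import numpy as np

# Simulating counts whose event rate is constant within each of several
# consecutive time intervals (illustrative values).
rng = np.random.default_rng(42)
interval_lengths = np.array([7.0, 7.0, 14.0])     # days per interval
rates = np.array([0.8, 0.5, 0.2])                 # events per day in each interval

n_subjects = 200
counts = rng.poisson(lam=rates * interval_lengths, size=(n_subjects, len(rates)))

# Empirical event rate per interval recovers the piecewise-constant rates
print(counts.mean(axis=0) / interval_lengths)
```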
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
Curvature and frontier orbital energies in density functional theory
NASA Astrophysics Data System (ADS)
Kronik, Leeor; Stein, Tamar; Autschbach, Jochen; Govind, Niranjan; Baer, Roi
2013-03-01
Perdew et al. [Phys. Rev. Lett 49, 1691 (1982)] discovered and proved two different properties of exact Kohn-Sham density functional theory (DFT): (i) The exact total energy versus particle number is a series of linear segments between integer electron points; (ii) Across an integer number of electrons, the exchange-correlation potential may ``jump'' by a constant, known as the derivative discontinuity (DD). Here, we show analytically that in both the original and the generalized Kohn-Sham formulation of DFT, the two are in fact two sides of the same coin. Absence of a derivative discontinuity necessitates deviation from piecewise linearity, and the latter can be used to correct for the former, thereby restoring the physical meaning of the orbital energies. Using selected small molecules, we show that this results in a simple correction scheme for any underlying functional, including semi-local and hybrid functionals as well as Hartree-Fock theory, suggesting a practical correction for the infamous gap problem of DFT. Moreover, we show that optimally-tuned range-separated hybrid functionals can inherently minimize both DD and curvature, thus requiring no correction, and show that this can be used as a sound theoretical basis for novel tuning strategies.
Topics in electromagnetic, acoustic, and potential scattering theory
NASA Astrophysics Data System (ADS)
Nuntaplook, Umaporn
With recent renewed interest in the classical topics of both acoustic and electromagnetic aspects for nano-technology, transformation optics, fiber optics, metamaterials with negative refractive indices, cloaking and invisibility, the topic of time-independent scattering theory in quantum mechanics is becoming a useful field to re-examine in the above contexts. One of the key areas of electromagnetic theory, the scattering of plane electromagnetic waves, is based on the properties of the refractive indices in the various media. It transpires that the refractive index of a medium and the potential in quantum scattering theory are intimately related. In many cases, understanding such scattering in radially symmetric media is sufficient to gain insight into scattering in more complex media. Meeting the challenge of variable refractive indices and possibly complicated boundary conditions therefore requires accurate and efficient numerical methods, and where possible, analytic solutions to the radial equations from the governing scalar and vector wave equations (in acoustics and electromagnetic theory, respectively). Until relatively recently, researchers assumed a constant refractive index throughout the medium of interest. However, the most interesting and increasingly useful cases are those with non-constant refractive index profiles. In the majority of this dissertation the focus is on media with piecewise constant refractive indices in radially symmetric media. The method discussed is based on the solution of Maxwell's equations for scattering of plane electromagnetic waves from a dielectric (or "transparent") sphere in terms of the related Helmholtz equation. The main body of the dissertation (Chapters 2 and 3) is concerned with scattering from (i) a uniform spherical inhomogeneity embedded in an external medium with different properties, and (ii) a piecewise-uniform central inhomogeneity in the external medium. The latter results contain a natural generalization of the former (previously known) results. The link with time-independent quantum mechanical scattering, via morphology-dependent resonances (MDRs), is discussed in Chapter 2. This requires a generalization of the classical problem for scattering of a plane wave from a uniform spherically-symmetric inhomogeneity (in which the velocity of propagation is a function only of the radial coordinate r, i.e., c = c(r)) to a piecewise-uniform inhomogeneity. In Chapter 3 the Jost-function formulation of potential scattering theory is used to solve the radial differential equation for scattering, which can be converted into an integral equation via the Jost boundary conditions. The first two iterations for the zero angular momentum case l = 0 are provided for both two-layer and three-layer models. It is found that the iterative technique is most useful for long wavelengths and sufficiently small ratios of interior and exterior wavenumbers. Exact solutions are also provided for these cases. In Chapter 4 the time-independent quantum mechanical 'connection' is exploited further by generalizing previous work on a spherical well potential to the case where a delta 'function' potential is appended to the exterior of the well (for l ≠ 0). This corresponds to an idealization of the former approach to the case of a 'coated sphere'. The poles of the associated 'S-matrix' are important in this regard, since they correspond directly with the morphology-dependent resonances discussed in Chapter 2.
These poles (for the l = 0 case, to compare with Nussenzveig's analysis) are tracked in the complex wavenumber plane as the strength of the delta function potential changes. Finally, a set of 4 Appendices is provided to clarify some of the connections between (i) the scattering of acoustic/electromagnetic waves from a penetrable/dielectric sphere and (ii) time-independent potential scattering theory in quantum mechanics. This, it is hoped, will be the subject of future work.
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-01-01
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-12-07
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
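A sketch of how the adaptive weights described above might be written, using an exponential function of the local intensity gradient so that edges are penalized less; the scale parameter delta and the discretization details are assumptions rather than the paper's exact definition.

```python
import numpy as np

def awtv(img, delta=0.005):
    """Adaptive-weighted total variation of a 2D image: anisotropic finite
    differences are weighted by exp(-(grad/delta)^2), so large local gradients
    (edges) contribute less to the penalty."""
    dx = np.diff(img, axis=0)[:, :-1]      # vertical differences
    dy = np.diff(img, axis=1)[:-1, :]      # horizontal differences
    wx = np.exp(-(dx / delta) ** 2)
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(np.sqrt(wx * dx ** 2 + wy * dy ** 2))

# With delta very large the weights tend to 1 and awtv reduces to the
# conventional anisotropic TV seminorm.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
print(awtv(img), awtv(img, delta=1e6))
```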
On piecewise interpolation techniques for estimating solar radiation missing values in Kedah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
2014-12-04
This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing values of solar radiation in Kedah. An hourly solar radiation dataset is collected at Alor Setar Meteorology Station, obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first order derivatives at the starts and ends of the intervals. We compare the performance of our proposed method with existing methods using the Root Mean Squared Error (RMSE) and the Coefficient of Determination (CoD), based on simulated missing-value datasets. The results show that our method outperforms the other previous methods.
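A sketch of piecewise cubic interpolation with prescribed first derivatives at the interval ends, in the spirit of the cubic Ball/Bézier construction described above; the Hermite form is used here for convenience, and the data values and slope estimates are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hourly measurements with a gap at 11:00 (all values illustrative).
hours = np.array([8.0, 9.0, 10.0, 12.0, 13.0])              # hours with data
radiation = np.array([120.0, 310.0, 520.0, 760.0, 700.0])   # W/m^2
slopes = np.gradient(radiation, hours)                       # simple slope estimate

# Piecewise cubic with prescribed first derivatives at interval ends
spline = CubicHermiteSpline(hours, radiation, slopes)
estimate = spline(11.0)       # estimated missing value at 11:00
print(estimate)
```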
Modeling the human as a controller in a multitask environment
NASA Technical Reports Server (NTRS)
Govindaraj, T.; Rouse, W. B.
1978-01-01
Modeling the human as a controller of slowly responding systems with preview is considered. Along with control tasks, discrete noncontrol tasks occur at irregular intervals. In multitask situations such as these, it has been observed that humans tend to apply piecewise constant controls. It is believed that the magnitude of controls and the durations for which they remain constant are dependent directly on the system bandwidth, preview distance, complexity of the trajectory to be followed, and nature of the noncontrol tasks. A simple heuristic model of human control behavior in this situation is presented. The results of a simulation study, whose purpose was determination of the sensitivity of the model to its parameters, are discussed.
Parameterizations for ensemble Kalman inversion
NASA Astrophysics Data System (ADS)
Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.
2018-05-01
The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
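A minimal sketch of the level set parameterization for piecewise constant fields mentioned above: a continuous field is thresholded at its zero level set, so the interface topology is not fixed in advance. The level-set field and conductivity values below are illustrative assumptions.

```python
import numpy as np

def level_set_to_conductivity(u, kappa_minus, kappa_plus):
    """Map a continuous level-set field u to a piecewise-constant field:
    kappa_minus where u < 0 and kappa_plus where u >= 0.  The interface is
    the zero level set of u."""
    return np.where(u < 0.0, kappa_minus, kappa_plus)

# Example: two inclusions defined by the zero level set of a smooth function
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
u = 0.15 - np.minimum((x - 0.4) ** 2 + y ** 2, (x + 0.4) ** 2 + y ** 2)
kappa = level_set_to_conductivity(u, kappa_minus=1.0, kappa_plus=10.0)
```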
Primal-mixed formulations for reaction-diffusion systems on deforming domains
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo
2015-10-01
We propose a finite element formulation for a coupled elasticity-reaction-diffusion system written in a fully Lagrangian form and governing the spatio-temporal interaction of species inside an elastic, or hyper-elastic body. A primal weak formulation is the baseline model for the reaction-diffusion system written in the deformed domain, and a finite element method with piecewise linear approximations is employed for its spatial discretization. On the other hand, the strain is introduced as mixed variable in the equations of elastodynamics, which in turn acts as coupling field needed to update the diffusion tensor of the modified reaction-diffusion system written in a deformed domain. The discrete mechanical problem yields a mixed finite element scheme based on row-wise Raviart-Thomas elements for stresses, Brezzi-Douglas-Marini elements for displacements, and piecewise constant pressure approximations. The application of the present framework in the study of several coupled biological systems on deforming geometries in two and three spatial dimensions is discussed, and some illustrative examples are provided and extensively analyzed.
Computer Models of Underwater Acoustic Propagation.
1980-01-02
deterministic propagation loss result. Development of a model for the more general problem is required, as evidenced by the trends in future sonar designs ...air. The water column itself is treated as an ideal fluid incapable of supporting showr stresses and having a uniform or, at most, piecewise constant...evaluated at any depth (zs 4 z -zN). The layer in which the source is located will be designated by LS and the receiver layer by LR. The depth dependent
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates that influence the survival data. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, not all relevant variables are known or measurable, and the unmeasured ones become interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov Chain Monte Carlo method in computing the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of the two different priors.
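A minimal sketch of the piecewise exponential building block, a survival function whose hazard is constant on each interval; the cut points and hazard values are illustrative, and the frailty term and covariates of the full model are omitted.

```python
import numpy as np

def pe_survival(t, cuts, hazards):
    """Survival function of a piecewise exponential model: the baseline hazard
    is constant within each interval defined by `cuts`."""
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    cum = 0.0
    for lam, lo, hi in zip(hazards, edges[:-1], edges[1:]):
        cum += lam * np.clip(t - lo, 0.0, hi - lo)   # exposure time in interval
    return np.exp(-cum)

# Hazard 0.10 on [0, 2), 0.05 on [2, 5), 0.02 afterwards (illustrative values)
print(pe_survival(4.0, cuts=np.array([2.0, 5.0]), hazards=[0.10, 0.05, 0.02]))
```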
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
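A one-variable illustration of the piecewise definition of divided differences near a removable singularity: the first divided difference of the exponential, (exp(x) - 1)/x, evaluated by a truncated series near x = 0 and by the direct formula elsewhere. The switching tolerance is an assumption, and the X-IVAS formulas themselves involve matrix arguments rather than this scalar case.

```python
import numpy as np

def phi1(x, tol=1e-3):
    """First divided difference of exp at {x, 0}, i.e. (exp(x) - 1)/x, evaluated
    piecewise: a truncated Taylor series is used in a neighbourhood of the
    removable singularity at x = 0, the direct formula elsewhere."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < tol
    series = 1.0 + x / 2.0 + x ** 2 / 6.0 + x ** 3 / 24.0
    with np.errstate(divide="ignore", invalid="ignore"):
        direct = (np.exp(x) - 1.0) / x
    return np.where(small, series, direct)

# The series branch avoids the cancellation that ruins the direct formula
# at tiny arguments.
print(phi1(np.array([1e-20, 1e-8, 0.5])))
```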
The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.
NASA Astrophysics Data System (ADS)
Loridan, V.; Ripoll, J. F.; De Vuyst, F.
2017-12-01
Much work has been done during the past 40 years to obtain analytical solutions of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL proportional to L^6. We also give two expressions for the required number of eigenmodes N to get an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also Jupiter and Saturn are discussed.
NASA Astrophysics Data System (ADS)
Hilbert, Stefan; Dunkel, Jörn
2006-07-01
We calculate exactly both the microcanonical and canonical thermodynamic functions (TDFs) for a one-dimensional model system with piecewise constant Lennard-Jones type pair interactions. In the case of an isolated N -particle system, the microcanonical TDFs exhibit (N-1) singular (nonanalytic) microscopic phase transitions of the formal order N/2 , separating N energetically different evaporation (dissociation) states. In a suitably designed evaporation experiment, these types of phase transitions should manifest themselves in the form of pressure and temperature oscillations, indicating cooling by evaporation. In the presence of a heat bath (thermostat), such oscillations are absent, but the canonical heat capacity shows a characteristic peak, indicating the temperature-induced dissociation of the one-dimensional chain. The distribution of complex zeros of the canonical partition may be used to identify different degrees of dissociation in the canonical ensemble.
Electrostatic forces in the Poisson-Boltzmann systems
NASA Astrophysics Data System (ADS)
Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2013-09-01
Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models.
Verginelli, Iason; Yao, Yijun; Suuberg, Eric M.
2017-01-01
In this study we present a petroleum vapor intrusion tool implemented in Microsoft® Excel® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet. PMID:28163564
Verginelli, Iason; Yao, Yijun; Suuberg, Eric M
2016-01-01
In this study we present a petroleum vapor intrusion tool implemented in Microsoft ® Excel ® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet.
NASA Astrophysics Data System (ADS)
Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.
2018-05-01
The paper deals with the issue of electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering performance of the body is calculated using the integral equation method. A Fredholm equation of the second kind was used for calculating the electric current density. While solving the integral equation by the method of moments, the authors properly treated the kernel singularity. Piecewise constant functions were chosen as the basis functions, and the equation was solved by the method of moments. Within the Kirchhoff integral approach it is possible to determine the scattered electromagnetic field from the obtained electric currents. The observation angle sector belongs to the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All the neurons contained a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the optimized dimensions of the diffractive body. The paper also presents the basic steps in the calculation technique for diffractive bodies, based on the combination of the integral equation and neural network methods.
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss the procedures of data aggregation as a preprocessing stage for subsequent regression modeling. An important feature of the study is the demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; to study its properties, the density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
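A generic sketch of the assembly idea, blending first-order local models with normalized Gaussian radial-basis-function weights; the sampling states, Jacobians, and RBF width are assumptions and not taken from the paper.

```python
import numpy as np

def assemble_prediction(x, centers, jacobians, baselines, width=0.5):
    """Blend first-order local models y_i(x) = y0_i + J_i (x - c_i) with
    Gaussian radial-basis-function weights centred at the sampling states."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * width ** 2))
    w /= w.sum()                                   # normalized nonlinear weights
    locals_ = [y0 + J @ (x - c) for y0, J, c in zip(baselines, jacobians, centers)]
    return sum(wi * yi for wi, yi in zip(w, locals_))

# Two local linearizations of a scalar nonlinear map y = sin(x1) + x2**2
centers = np.array([[0.0, 0.0], [1.5, 1.0]])
baselines = [np.sin(c[0]) + c[1] ** 2 for c in centers]
jacobians = [np.array([np.cos(c[0]), 2.0 * c[1]]) for c in centers]
print(assemble_prediction(np.array([0.7, 0.5]), centers, jacobians, baselines))
```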
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto
2016-05-01
This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.
SINGER, A.; GILLESPIE, D.; NORBURY, J.; EISENBERG, R. S.
2009-01-01
Ion channels are proteins with a narrow hole down their middle that control a wide range of biological function by controlling the flow of spherical ions from one macroscopic region to another. Ion channels do not change their conformation on the biological time scale once they are open, so they can be described by a combination of Poisson and drift-diffusion (Nernst–Planck) equations called PNP in biophysics. We use singular perturbation techniques to analyse the steady-state PNP system for a channel with a general geometry and a piecewise constant permanent charge profile. We construct an outer solution for the case of a constant permanent charge density in three dimensions that is also a valid solution of the one-dimensional system. The asymptotical current–voltage (I–V ) characteristic curve of the device (obtained by the singular perturbation analysis) is shown to be a very good approximation of the numerical I–V curve (obtained by solving the system numerically). The physical constraint of non-negative concentrations implies a unique solution, i.e., for each given applied potential there corresponds a unique electric current (relaxing this constraint yields non-physical multiple solutions for sufficiently large voltages). PMID:19809600
High-Speed Numeric Function Generator Using Piecewise Quadratic Approximations
2007-09-01
application; the user specifies the function to approximate, and the program turns the provided function into an inline function ...
Estimation of variance in Cox's regression model with shared gamma frailties.
Andersen, P K; Klein, J P; Knudsen, K M; Tabanera y Palacios, R
1997-12-01
The Cox regression model with a shared frailty factor allows for unobserved heterogeneity or for statistical dependence between the observed survival times. Estimation in this model when the frailties are assumed to follow a gamma distribution is reviewed, and we address the problem of obtaining variance estimates for regression coefficients, frailty parameter, and cumulative baseline hazards using the observed nonparametric information matrix. A number of examples are given comparing this approach with fully parametric inference in models with piecewise constant baseline hazards.
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the most simple model that best describes the data.
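A sketch of the binning criterion in the form usually quoted for optBINS, the relative log posterior of an m-bin piecewise-constant density up to an additive constant; treat the exact expression as an assumption to be checked against the optBINS documentation.

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, m):
    """Relative log posterior probability of an m-bin piecewise-constant density
    model with uniform bin widths (additive constant omitted)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2.0) - m * gammaln(0.5) - gammaln(n + m / 2.0)
            + np.sum(gammaln(counts + 0.5)))

# Pick the number of bins maximizing the criterion for a synthetic sample
rng = np.random.default_rng(3)
data = rng.normal(size=1000)
best_m = max(range(1, 101), key=lambda m: log_posterior_bins(data, m))
print(best_m)
```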
Deconvolution of noisy transient signals: a Kalman filtering application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Zicker, J.E.
The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
2013-04-22
Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots, Journal of Guidance, Control, and Dynamics, (3 2010) ... Naira Hovakimyan, L1 Adaptive Controller for MIMO System with Unmatched Uncertainties using Modified Piecewise Constant Adaptation Law, IEEE 51st ... (block diagram labels: adaptive input, nominal input, L1-based control generator) This L1 adaptive control architecture uses data from the reference model ...
Consensus-Based Formation Control of a Class of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Joshi, Suresh; Gonzalez, Oscar R.
2014-01-01
This paper presents a consensus-based formation control scheme for autonomous multi-agent systems represented by double integrator dynamics. Assuming that the information graph topology consists of an undirected connected graph, a leader-based consensus-type control law is presented and shown to provide asymptotic formation stability when subjected to piecewise constant formation velocity commands. It is also shown that global asymptotic stability is preserved in the presence of (0, ∞)-sector monotonic non-decreasing actuator nonlinearities.
Locally Contractive Dynamics in Generalized Integrate-and-Fire Neurons*
Jimenez, Nicolas D.; Mihalas, Stefan; Brown, Richard; Niebur, Ernst; Rubin, Jonathan
2013-01-01
Integrate-and-fire models of biological neurons combine differential equations with discrete spike events. In the simplest case, the reset of the neuronal voltage to its resting value is the only spike event. The response of such a model to constant input injection is limited to tonic spiking. We here study a generalized model in which two simple spike-induced currents are added. We show that this neuron exhibits not only tonic spiking at various frequencies but also the commonly observed neuronal bursting. Using analytical and numerical approaches, we show that this model can be reduced to a one-dimensional map of the adaptation variable and that this map is locally contractive over a broad set of parameter values. We derive a sufficient analytical condition on the parameters for the map to be globally contractive, in which case all orbits tend to a tonic spiking state determined by the fixed point of the return map. We then show that bursting is caused by a discontinuity in the return map, in which case the map is piecewise contractive. We perform a detailed analysis of a class of piecewise contractive maps that we call bursting maps and show that they robustly generate stable bursting behavior. To the best of our knowledge, this work is the first to point out the intimate connection between bursting dynamics and piecewise contractive maps. Finally, we discuss bifurcations in this return map, which cause transitions between spiking patterns. PMID:24489486
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2015-11-01
The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than the ones with Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, those obtained 3^n locally stable equilibrium points are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations.
Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2004-01-01
We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds but at increased computational cost.
NASA Astrophysics Data System (ADS)
Westphal, T.; Nijssen, R. P. L.
2014-12-01
The effect of Constant Life Diagram (CLD) formulation on fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative of wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results, the prediction accuracy of four CLD formulations is investigated. In the study, a piecewise linear CLD based on the S-N curves for nine load ratios compares favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study, Boerstra's Multislope model provides a good alternative at reduced test effort.
Solutions of some problems in applied mathematics using MACSYMA
NASA Technical Reports Server (NTRS)
Punjabi, Alkesh; Lam, Maria
1987-01-01
Various Symbolic Manipulation Programs (SMP) were tested to check the functioning of their commands and their suitability under various operating systems. Support systems for the SMPs were found to be relatively better than the one for MACSYMA. The graphics facilities for MACSYMA do not work as expected under the UNIX operating system, and not all MACSYMA commands function as described in the manuals. Shape representation is a central issue in computer graphics and computer-aided design. Aside from appearance, there are other application-dependent, desirable properties like continuity to a certain order, symmetry, axis-independence, and variation-diminishing properties. Several shape representations are studied, including the Osculatory Method, a Piecewise Cubic Polynomial Method using two different slope estimates, the Piecewise Cubic Hermite Form, a method by Harry McLaughlin, and a Piecewise Bezier Method. They are applied to collected physical and chemical data, and their relative merits and demerits are examined. Kinematics of a single-link, non-dissipative robot arm is studied using MACSYMA. The Lagrangian is set up and Lagrange's equations are derived, from which the Hamiltonian equations of motion are obtained. The equations suggest that bifurcation of solutions can occur, depending upon the value of a single parameter. Using the characteristic function W, the Hamilton-Jacobi equation is derived. It is shown that the H-J equation can be solved in closed form, and analytical solutions to the H-J equation are obtained.
Unsteady flows in rotor-stator cascades
NASA Astrophysics Data System (ADS)
Lee, Yu-Tai; Bein, Thomas W.; Feng, Jin Z.; Merkle, Charles L.
1991-03-01
A time-accurate potential-flow calculation method has been developed for unsteady incompressible flows through two-dimensional multi-blade-row linear cascades. The method represents the boundary surfaces by distributing piecewise linear-vortex and constant-source singularities on discrete panels. A local coordinate is assigned to each independently moving object. Blade-shed vorticity is traced at each time step. The unsteady Kutta condition applied is nonlinear and requires zero blade trailing-edge loading at each time; its influence on the solutions depends on the blade trailing-edge shapes. Steady biplane and cascade solutions are presented and compared to exact solutions and experimental data. Unsteady solutions are validated with the Wagner function for an airfoil moving impulsively from rest and the Theodorsen function for an oscillating airfoil. The shed vortex motion and its interaction with blades are calculated and compared to an analytic solution. For the multi-blade-row cascade, the potential effect between blade rows is predicted using steady and quasi-unsteady calculations. The accuracy of the predictions is demonstrated using experimental results for a one-stage turbine stator-rotor.
First and second order derivatives for optimizing parallel RF excitation waveforms.
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations.
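Because the abstract's starting point is that, for a piecewise constant field, the relaxation-free Bloch equation integrates to a concatenation of rotations, a minimal numerical sketch of that fact is given below. The effective field values and interval durations are assumptions for illustration only, and the quaternion formulation, derivatives, and solver interface of the paper are not reproduced.

```python
import numpy as np

def rotate(m, omega, dt):
    """Advance dm/dt = omega x m over one interval of constant effective
    field omega (rad/s, 3-vector) using the Rodrigues rotation formula."""
    theta = np.linalg.norm(omega) * dt
    if theta == 0.0:
        return m.copy()
    k = omega / np.linalg.norm(omega)          # rotation axis
    return (m * np.cos(theta)
            + np.cross(k, m) * np.sin(theta)
            + k * np.dot(k, m) * (1.0 - np.cos(theta)))

# Piecewise constant effective fields and durations (illustrative assumptions,
# not a real pulse design): free precession about z, then a 90-degree tip about x.
segments = [(np.array([0.0, 0.0, 2 * np.pi * 100.0]), 1e-3),
            (np.array([2 * np.pi * 250.0, 0.0, 0.0]), 1e-3)]

m = np.array([0.0, 0.0, 1.0])                  # start from equilibrium magnetization
for omega, dt in segments:
    m = rotate(m, omega, dt)
print("final magnetization:", np.round(m, 4))
```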
NASA Astrophysics Data System (ADS)
Aban, C. J. G.; Bacolod, R. O.; Confesor, M. N. P.
2015-06-01
The White Noise Path Integral Approach is used to evaluate the B-cell density, the number of B cells per unit volume, for a basic type of immune system response based on the modeling done by Perelson and Wiegel. From the scaling principles of Perelson [1], the B-cell density is obtained where antigens and antibodies mutate and an activation function f(|S-SA|) is defined describing the interaction between a specific antigen and a B cell. If the activation function f(|S-SA|) is held constant, the major form of the B-cell density evaluated using white noise analysis is similar to the form of the B-cell density obtained by Perelson and Wiegel using a differential approach. A piecewise linear function is also used to describe the activation f(|S-SA|). If f(|S-SA|) is zero, the density decreases exponentially. If f(|S-SA|) = S-SA-SB, the B-cell density increases exponentially until it reaches a certain maximum value. For f(|S-SA|) = 2SA-SB-S, the behavior of the B-cell density is oscillating and remains small.
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods
Microwave moisture sensing through use of a piecewise density-independent function
USDA-ARS?s Scientific Manuscript database
Microwave moisture sensing provides a means to determine nondestructively the amount of water in materials. This is accomplished through the correlation of dielectric properties with moisture in the material. In this study, linear relationships between a density-independent function of the dielectri...
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate to viscosity solution.
Limit cycles in planar piecewise linear differential systems with nonregular separation line
NASA Astrophysics Data System (ADS)
Cardin, Pedro Toniol; Torregrosa, Joan
2016-12-01
In this paper we deal with planar piecewise linear differential systems defined in two zones. We consider the case when the two linear zones are angular sectors of angles α and 2π - α, respectively, for α ∈ (0, π). We study the problem of determining lower bounds for the number of isolated periodic orbits in such systems using Melnikov functions. These limit cycles appear studying higher order piecewise linear perturbations of a linear center. It is proved that the maximum number of limit cycles that can appear up to a sixth order perturbation is five. Moreover, for these values of α, we prove the existence of systems with four limit cycles up to fifth order and, for α = π/2, we provide an explicit example with five up to sixth order. In general, the nonregular separation line increases the number of periodic orbits in comparison with the case where the two zones are separated by a straight line.
NASA Astrophysics Data System (ADS)
Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu
2015-12-01
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the dispersed radiation values, and these data are very useful for experimental work and for the development of solar devices. In addition, complete observational data are needed for modeling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record occur frequently due to several technical problems, mainly related to the monitoring device. To address this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, such as linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes, as an extension, an investigation of the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared to the other piecewise imputation techniques.
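As a rough illustration of the kind of comparison described above, the sketch below removes a few hourly values from a synthetic radiation profile and imputes them with scipy's piecewise interpolants (linear, cubic spline, nearest neighbour), scoring each by RMSE. The synthetic series and the masked indices are assumptions, and the cubic Bezier and Said-Ball estimators proposed in the paper are not implemented here.

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)
hours = np.arange(24)
# Synthetic hourly "solar radiation" profile in W/m^2 (illustrative only).
truth = np.clip(900 * np.sin(np.pi * (hours - 6) / 12), 0, None) + rng.normal(0, 20, 24)

missing = np.array([5, 9, 13, 18])             # indices treated as missing
observed = np.setdiff1d(hours, missing)

for kind in ("linear", "cubic", "nearest"):
    f = interp1d(observed, truth[observed], kind=kind)
    rmse = np.sqrt(np.mean((f(missing) - truth[missing]) ** 2))
    print(f"{kind:8s} imputation RMSE = {rmse:6.1f} W/m^2")
```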
Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared Region.
1979-12-31
exponent of the double exponential function were 'bumpy' for some cases. Since the nature of the transmittance does not predict this behavior, we... [The remainder of this record is program output residue indicating that tau is recomputed for the original data using the piecewise-analytical transmission function and that standard deviations between the actual and recomputed tau values are computed.]
Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared.
1980-04-01
predict this behavior, we conclude that the first method using a linear function of x is accurate enough to be used in the actual application. The... [The remainder of this record is program output residue indicating that standard deviations between the actual and recomputed tau values are computed using the piecewise-analytical transmission function.]
A Unified Theory for the Great Plains Nocturnal Low-Level Jet
NASA Astrophysics Data System (ADS)
Shapiro, A.; Fedorovich, E.; Rahimi, S.
2014-12-01
The nocturnal low-level jet (LLJ) is a warm-season atmospheric boundary layer phenomenon common to the Great Plains of the United States and other places worldwide, typically in regions east of mountain ranges. Low-level jets develop around sunset in fair weather conditions conducive to strong radiational cooling, reach peak intensity in the pre-dawn hours, and then dissipate with the onset of daytime convective mixing. In this study we consider the LLJ as a diurnal oscillation of a stably stratified atmosphere overlying a planar slope on the rotating Earth. The oscillations arise from diurnal cycles in both the heating of the slope (mechanism proposed by Holton in 1967) and the turbulent mixing (mechanism proposed by Blackadar in 1957). The governing equations are the equations of motion, incompressibility condition, and thermal energy in the Boussinesq approximation, with turbulent heat and momentum exchange parameterized through spatially constant but diurnally varying turbulent diffusion coefficients (diffusivities). Analytical solutions are obtained for diffusivities with piecewise constant waveforms (step-changes at sunrise and sunset) and slope temperatures/buoyancies with piecewise linear waveforms (saw-tooth function with minimum at sunrise and maximum before sunset). The jet characteristics are governed by eleven parameters: slope angle, Coriolis parameter, environmental buoyancy frequency, geostrophic wind strength, daytime and nighttime diffusivities, maximum (daytime) and minimum (nighttime) slope buoyancies, duration of daylight, lag time between peak slope buoyancy and sunset, and a Newtonian cooling time scale. An exploration of the parameter space yields results that are broadly consistent with findings particular to the Holton and Blackadar theories, and agree with climatological observations, for example, that stronger jets tend to occur over slopes of 0.15-0.25 degrees characteristic of the Great Plains. The solutions also yield intriguing predictions that peak jet strength increases with attenuation of the minimum surface buoyancy, and that the single most important parameter determining jet height is the nighttime diffusivity, with weaker nighttime diffusion associated with smaller jet heights. These and other highlights will be discussed in the presentation.
A variational method for analyzing limit cycle oscillations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; MacLaurin, James
2018-06-01
Many systems in biology can be modeled through ordinary differential equations, which are piecewise continuous, and switch between different states according to a Markov jump process; such a system is known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ε^(-1). That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ε).
A piecewise smooth model of evolutionary game for residential mobility and segregation
NASA Astrophysics Data System (ADS)
Radi, D.; Gardini, L.
2018-05-01
The paper proposes an evolutionary version of a Schelling-type dynamic system to model the patterns of residential segregation when two groups of people are involved. The payoff functions of agents are the individual preferences for integration which are empirically grounded. Differently from Schelling's model, where the limited levels of tolerance are the driving force of segregation, in the current setup agents benefit from integration. Despite the differences, the evolutionary model shows a dynamics of segregation that is qualitatively similar to the one of the classical Schelling's model: segregation is always a stable equilibrium, while equilibria of integration exist only for peculiar configurations of the payoff functions and their asymptotic stability is highly sensitive to parameter variations. Moreover, a rich variety of integrated dynamic behaviors can be observed. In particular, the dynamics of the evolutionary game is regulated by a one-dimensional piecewise smooth map with two kink points that is rigorously analyzed using techniques recently developed for piecewise smooth dynamical systems. The investigation reveals that when a stable internal equilibrium exists, the bimodal shape of the map leads to several different kinds of bifurcations, smooth, and border collision, in a complicated interplay. Our global analysis can give intuitions to be used by a social planner to maximize integration through social policies that manipulate people's preferences for integration.
Hanni, Matti; Lantto, Perttu; Ilias, Miroslav; Jensen, Hans Jorgen Aagaard; Vaara, Juha
2007-10-28
Relativistic effects on the ¹²⁹Xe nuclear magnetic resonance shielding and ¹³¹Xe nuclear quadrupole coupling (NQC) tensors are examined in the weakly bound Xe₂ system at different levels of theory including the relativistic four-component Dirac-Hartree-Fock (DHF) method. The intermolecular interaction-induced binary chemical shift δ, the anisotropy of the shielding tensor Δσ, and the NQC constant along the internuclear axis χ∥ are calculated as a function of the internuclear distance. DHF shielding calculations are carried out using gauge-including atomic orbitals. For comparison, the full leading-order one-electron Breit-Pauli perturbation theory (BPPT) is applied using a common gauge origin. Electron correlation effects are studied at the nonrelativistic (NR) coupled-cluster singles and doubles with perturbational triples [CCSD(T)] level of theory. The fully relativistic second-order Møller-Plesset many-body perturbation (DMP2) theory is used to examine the cross coupling between correlation and relativity on NQC. The same is investigated for δ and Δσ by BPPT with a density functional theory model. A semiquantitative agreement between the BPPT and DHF binary property curves is obtained for δ and Δσ in Xe₂. For these properties, the currently most complete theoretical description is obtained by a piecewise approximation where the uncorrelated relativistic DHF results obtained close to the basis-set limit are corrected, on the one hand, for NR correlation effects and, on the other hand, for the BPPT-based cross coupling of relativity and correlation. For χ∥, the fully relativistic DMP2 results obtain a correction for NR correlation effects beyond MP2. The computed temperature dependence of the second virial coefficient of the ¹²⁹Xe nuclear shielding is compared to experiment in Xe gas. Our best results, obtained with the piecewise approximation for the binary chemical shift combined with the previously published state-of-the-art theoretical potential energy curve for Xe₂, are in excellent agreement with the experiment for the first time.
Radial Basis Function Based Quadrature over Smooth Surfaces
2016-03-24
[Table 1 residue: radial basis functions φ(r); piecewise smooth (conditionally positive definite): MN monomial |r|^(2m+1), TPS thin plate spline |r|^(2m) ln|r|; infinitely smooth: ...] ...smooth surfaces using polynomial interpolants, while [27] couples Thin-Plate Spline interpolation (see table 1) with Green's integral formula [29
A method of power analysis based on piecewise discrete Fourier transform
NASA Astrophysics Data System (ADS)
Xin, Miaomiao; Zhang, Yanchi; Xie, Da
2018-04-01
The paper analyzes existing feature extraction methods and the characteristics of the discrete Fourier transform and piecewise aggregation approximation. Combining the advantages of the two methods, a new piecewise discrete Fourier transform is proposed and used to analyze the lighting power of a large customer. Time series feature maps of four different cases are compared for the original data, the discrete Fourier transform, piecewise aggregation approximation, and the piecewise discrete Fourier transform. The new method can reflect both the overall trend of electricity change and its internal changes in electrical analysis.
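The abstract does not spell out the algorithm, but a plausible reading of a piecewise discrete Fourier transform is: split the load series into segments, transform each segment separately, and keep only a few leading coefficients per segment, so that both the overall trend and the within-segment variation are retained. The sketch below follows that reading; the segment length, the number of retained coefficients, and the synthetic load series are all assumptions.

```python
import numpy as np

def piecewise_dft(series, segment_len, keep):
    """Split `series` into consecutive segments, DFT each segment, keep only
    the `keep` lowest-frequency coefficients, and inverse-transform to get a
    piecewise spectral approximation of the signal."""
    out = np.empty_like(series, dtype=float)
    for start in range(0, len(series), segment_len):
        seg = series[start:start + segment_len]
        coeffs = np.fft.rfft(seg)
        coeffs[keep:] = 0.0                    # discard high-frequency content
        out[start:start + segment_len] = np.fft.irfft(coeffs, n=len(seg))
    return out

rng = np.random.default_rng(1)
t = np.arange(24 * 7)                          # one week of hourly load (synthetic)
load = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)
approx = piecewise_dft(load, segment_len=24, keep=3)
print("max reconstruction error:", np.round(np.max(np.abs(load - approx)), 2))
```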
Neighboring Optimal Aircraft Guidance in a General Wind Environment
NASA Technical Reports Server (NTRS)
Jardin, Matthew R. (Inventor)
2003-01-01
Method and system for determining an optimal route for an aircraft moving between first and second waypoints in a general wind environment. A selected first wind environment is analyzed for which a nominal solution can be determined. A second wind environment is then incorporated; and a neighboring optimal control (NOC) analysis is performed to estimate an optimal route for the second wind environment. In particular examples with flight distances of 2500 and 6000 nautical miles in the presence of constant or piecewise linearly varying winds, the difference in flight time between a nominal solution and an optimal solution is 3.4 to 5 percent. Constant or variable winds and aircraft speeds can be used. Updated second wind environment information can be provided and used to obtain an updated optimal route.
Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas
2017-03-01
Recruiting patients is a crucial step of a clinical trial, and estimation of the trial duration is a question of paramount interest. Most techniques are based on deterministic models and various ad hoc methods that neglect the variability in the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced, in which each centre's recruitment is modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, rates are rarely constant in time and there are breaks in recruitment (for instance weekends or holidays). Such information can be collected and included in a model with piecewise constant rate functions, yielding an inhomogeneous Cox model, for which the estimation of the trial duration is much more difficult. Three strategies for computing the expected trial duration are proposed: considering all the breaks, considering only large breaks, and ignoring breaks. The bias of these estimation procedures is assessed by means of simulation studies considering three break-simulation scenarios. These strategies yield estimates with very small bias. Moreover, the strategy with the best predictive performance and the smallest bias is the one that does not take breaks into account. This result is important because, in practice, collecting break data is hard to manage.
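A small simulation in the spirit of the piecewise-constant-rate setting described above is sketched here: each centre recruits with a gamma-distributed daily rate, recruitment stops on break days (weekends in this toy), and the trial duration is the day on which the target sample size is reached. The number of centres, the gamma parameters, and the target size are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_centres, target = 20, 300
# Gamma-distributed daily recruitment rates (Poisson-gamma model), one per centre.
rates = rng.gamma(shape=2.0, scale=0.25, size=n_centres)

recruited, day = 0, 0
while recruited < target:
    day += 1
    is_break = (day % 7) in (6, 0)             # model weekends as recruitment breaks
    daily_rate = 0.0 if is_break else rates.sum()
    recruited += rng.poisson(daily_rate)
print(f"target of {target} patients reached after {day} days")
```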
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
NASA Astrophysics Data System (ADS)
Barucq, H.; Bendali, A.; Fares, M.; Mattesi, V.; Tordeux, S.
2017-02-01
A general symmetric Trefftz Discontinuous Galerkin method is built for solving the Helmholtz equation with piecewise constant coefficients. The construction of the corresponding local solutions to the Helmholtz equation is based on a boundary element method. A series of numerical experiments displays an excellent stability of the method relatively to the penalty parameters, and more importantly its outstanding ability to reduce the instabilities known as the "pollution effect" in the literature on numerical simulations of long-range wave propagation.
The Stiffness Variation of a Micro-Ring Driven by a Traveling Piecewise-Electrode
Li, Yingjie; Yu, Tao; Hu, Yuh-Chung
2014-01-01
In the practice of electrostatically actuated micro devices, the electrostatic force is implemented by sequentially actuated piecewise-electrodes which result in a traveling distributed electrostatic force. However, such a force has been modeled as a traveling concentrated electrostatic force in the literature. This article, for the first time, presents an analytical study on the stiffness variation of microstructures driven by a traveling piecewise electrode. The analytical model is based on the theory of shallow shells and a uniform electrical field. The traveling electrode not only applies electrostatic force on the circular ring but also alters its dynamical characteristics via the negative electrostatic stiffness. It is known that, when a structure is subjected to a traveling constant force, its natural mode will be resonated as the traveling speed approaches certain critical speeds, and each natural mode refers to exactly one critical speed. However, for the case of a traveling electrostatic force, the number of critical speeds is more than that of the natural modes. This is due to the fact that the traveling electrostatic force makes the resonant frequencies of the forward and backward traveling waves of the circular ring different. Furthermore, the resonance and stability can be independently controlled by the length of the traveling electrode, though the driving voltage and traveling speed of the electrostatic force alter the dynamics and stabilities of microstructures. This paper extends the fundamental insights into the electromechanical behavior of microstructures driven by electrostatic forces as well as the future development of MEMS/NEMS devices with electrostatic actuation and sensing. PMID:25230308
Identification of Piecewise Linear Uniform Motion Blur
NASA Astrophysics Data System (ADS)
Patanukhom, Karn; Nishihara, Akinori
A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models which consist of more than one linear motion component. The proposed scheme includes three modules: a motion direction estimator, a motion length estimator, and a motion combination selector. To identify the motion directions, the proposed scheme relies on a trial restoration using directional forward ramp motion blurs along different directions and on an analysis of directional information in the frequency domain using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed for estimation of the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat component of the trial restored results. Experimental examples of simulated and real-world blurred images are given to demonstrate the promising performance of the proposed scheme.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
2017-01-01
Schemes III (piecewise linear) and V (piecewise parabolic) of Van Leer are shown to yield identical solutions provided the initial conditions are chosen in an appropriate manner. This result is counterintuitive since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The result also shows a key connection between the approaches of discontinuous and continuous representations.
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of ℝ^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
A variable capacitance based modeling and power capability predicting method for ultracapacitor
NASA Astrophysics Data System (ADS)
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Accurate modeling and power capability prediction methods for ultracapacitors are of great significance for the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
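A minimal sketch of the modelling idea, a main capacitance that varies with terminal voltage as a piecewise linear function and a state of charge obtained by integrating C(v) dv, is given below. The breakpoints, capacitance values, rated voltage, and the definition of state of charge as stored charge over charge at rated voltage are assumptions for illustration, not the authors' parameterization.

```python
import numpy as np

# Piecewise linear capacitance C(v): breakpoints (V) and values (F) are assumed.
v_knots = np.array([0.0, 1.0, 2.0, 2.7])
c_knots = np.array([80.0, 95.0, 120.0, 140.0])

def capacitance(v):
    return np.interp(v, v_knots, c_knots)

def stored_charge(v, n=1001):
    """Charge Q(v) as the integral of C(u) du from 0 to v (trapezoidal rule)."""
    u = np.linspace(0.0, v, n)
    c = capacitance(u)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(u)))

v_rated = 2.7
for v in (1.0, 2.0, 2.5):
    soc = stored_charge(v) / stored_charge(v_rated)
    print(f"v = {v:.1f} V -> C = {capacitance(v):6.1f} F, SOC = {soc:.2f}")
```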
Functional Data Approximation on Bounded Domains using Polygonal Finite Elements.
Cao, Juan; Xiao, Yanyang; Chen, Zhonggui; Wang, Wenping; Bajaj, Chandrajit
2018-07-01
We construct and analyze piecewise approximations of functional data on arbitrary 2D bounded domains using generalized barycentric finite elements, and particularly quadratic serendipity elements for planar polygons. We compare approximation qualities (precision/convergence) of these partition-of-unity finite elements through numerical experiments, using Wachspress coordinates, natural neighbor coordinates, Poisson coordinates, mean value coordinates, and quadratic serendipity bases over polygonal meshes on the domain. For a convex n-sided polygon, the quadratic serendipity elements have 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, rather than the usual n(n+1)/2 basis functions to achieve quadratic convergence. Two greedy algorithms are proposed to generate Voronoi meshes for adaptive functional/scattered data approximations. Experimental results show space/accuracy advantages for these quadratic serendipity finite elements on polygonal domains versus traditional finite elements over simplicial meshes. Polygonal meshes and parameter coefficients of the quadratic serendipity finite elements obtained by our greedy algorithms can be further refined using an L2-optimization to improve the piecewise functional approximation. We conduct several experiments to demonstrate the efficacy of our algorithm for modeling features/discontinuities in functional data/image approximation.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
Slope Estimation in Noisy Piecewise Linear Functions.
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2015-03-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
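To fix ideas about the data model the estimator targets, piecewise linear segments with unknown breakpoints corrupted by additive Gaussian noise, the snippet below generates such a signal and reports a naive per-segment least-squares slope fit using the true breakpoints. It is only a sketch of the setting, not the MAP dynamic-programming estimator of the paper, and every numeric value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
breakpoints = [0, 40, 70, 120]                 # assumed segment boundaries
slopes = [0.5, -1.2, 0.3]                      # assumed true slopes
x = np.arange(breakpoints[-1], dtype=float)

signal, level = np.empty_like(x), 0.0
for (lo, hi), s in zip(zip(breakpoints[:-1], breakpoints[1:]), slopes):
    signal[lo:hi] = level + s * (x[lo:hi] - lo)
    level = signal[hi - 1] + s                 # keep the signal continuous
noisy = signal + rng.normal(0.0, 1.0, x.size)

# Naive check: ordinary least-squares slope on each (known) segment.
for (lo, hi), s in zip(zip(breakpoints[:-1], breakpoints[1:]), slopes):
    est = np.polyfit(x[lo:hi], noisy[lo:hi], 1)[0]
    print(f"segment [{lo},{hi}): true slope {s:+.2f}, estimated {est:+.2f}")
```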
Transformations based on continuous piecewise-affine velocity fields
Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...
2017-01-11
Here, we propose novel finite-dimensional spaces of well-behaved Rn → Rn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.
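A one-dimensional illustration of the construction, a transformation obtained as the time-1 flow of a continuous piecewise-affine velocity field, is sketched below. The cell boundaries and knot velocities are assumed toy values, and the integration uses a generic ODE solver rather than the cell-wise closed-form integration the paper exploits for speed; none of the published constraints or GPU code is reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Continuous piecewise-affine velocity field on [0, 1]: knot locations and the
# velocity values at the knots are assumed toy numbers; linear interpolation
# between knots makes the field affine on each cell and continuous overall.
knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
vels = np.array([0.0, 0.6, -0.2, 0.4, 0.0])    # zero at the ends keeps [0, 1] invariant

def velocity(t, x):
    return np.interp(x, knots, vels)

# The transformation T(x) is the time-1 flow of the velocity field.
x0 = np.linspace(0.05, 0.95, 7)
sol = solve_ivp(velocity, (0.0, 1.0), x0, rtol=1e-8)
print("x0    :", np.round(x0, 3))
print("T(x0) :", np.round(sol.y[:, -1], 3))
```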
Nie, Xiaobing; Zheng, Wei Xing
2015-05-01
This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that it reveals that the discontinuous neural networks can have greater storage capacity than the continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located in not only saturated regions, but also unsaturated regions, due to the non-monotonic structure of discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results.
Locomotion of C. elegans: A Piecewise-Harmonic Curvature Representation of Nematode Behavior
Padmanabhan, Venkat; Khan, Zeina S.; Solomon, Deepak E.; Armstrong, Andrew; Rumbaugh, Kendra P.; Vanapalli, Siva A.; Blawzdziewicz, Jerzy
2012-01-01
Caenorhabditis elegans, a free-living soil nematode, displays a rich variety of body shapes and trajectories during its undulatory locomotion in complex environments. Here we show that the individual body postures and entire trails of C. elegans have a simple analytical description in curvature representation. Our model is based on the assumption that the curvature wave is generated in the head segment of the worm body and propagates backwards. We have found that a simple harmonic function for the curvature can capture multiple worm shapes during the undulatory movement. The worm body trajectories can be well represented in terms of piecewise sinusoidal curvature with abrupt changes in amplitude, wavevector, and phase. PMID:22792224
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-18
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
A Variational Nodal Approach to 2D/1D Pin Resolved Neutron Transport for Pressurized Water Reactors
Zhang, Tengfei; Lewis, E. E.; Smith, M. A.; ...
2017-04-18
A two-dimensional/one-dimensional (2D/1D) variational nodal approach is presented for pressurized water reactor core calculations without fuel-moderator homogenization. A 2D/1D approximation to the within-group neutron transport equation is derived and converted to an even-parity form. The corresponding nodal functional is presented and discretized to obtain response matrix equations. Within the nodes, finite elements in the x-y plane and orthogonal functions in z are used to approximate the spatial flux distribution. On the radial interfaces, orthogonal polynomials are employed; on the axial interfaces, piecewise constants corresponding to the finite elements eliminate the interface homogenization that has been a challenge for method of characteristics (MOC)-based 2D/1D approximations. The angular discretization utilizes an even-parity integral method within the nodes, and low-order spherical harmonics (P N) on the axial interfaces. The x-y surfaces are treated with high-order P N combined with quasi-reflected interface conditions. Furthermore, the method is applied to the C5G7 benchmark problems and compared to Monte Carlo reference calculations.
Effect of speed matching on fundamental diagram of pedestrian flow
NASA Astrophysics Data System (ADS)
Fu, Zhijian; Luo, Lin; Yang, Yue; Zhuang, Yifan; Zhang, Peitong; Yang, Lizhong; Yang, Hongtai; Ma, Jian; Zhu, Kongjin; Li, Yanlai
2016-09-01
Properties of pedestrians may change along their moving path, for example as a result of fatigue or injury, which has never been properly investigated in past research. This paper studies the speed matching effect (a pedestrian constantly adjusts his velocity to the average velocity of his neighbors) and its influence on the density-velocity relationship (a pedestrian adjusts his velocity to the surrounding density), known as the fundamental diagram of pedestrian flow. Using a cellular automaton, the simulation results fit well with the empirical data, indicating the great advantage of the discrete model for pedestrian dynamics. The results suggest that the system velocity and flow rate increase obviously under a big noise, i.e., a diverse composition of the pedestrian crowd, especially in the region of middle or high density. Because of its temporary effect, the speed matching has little influence on the fundamental diagram. Over the entire density range, the relationship between the step length and the average pedestrian velocity is a piecewise function combining two linear functions. The number of conflicts reaches its maximum at a pedestrian density of 2.5 m^(-2), while it decreases by 5.1% with speed matching.
Optimal control of parametric oscillations of compressed flexible bars
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
In this paper, the problem of damping the oscillations of linear systems with piecewise constant control is solved. The motion of the bar construction is reduced to a form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated, and the corresponding phase trajectories and transient processes are presented.
High resolution A/D conversion based on piecewise conversion at lower resolution
Terwilliger, Steve [Albuquerque, NM
2012-06-05
Piecewise conversion of an analog input signal is performed utilizing a plurality of relatively lower bit resolution A/D conversions. The results of this piecewise conversion are interpreted to achieve a relatively higher bit resolution A/D conversion without sampling frequency penalty.
Mitigation of epidemics in contact networks through optimal contact adaptation *
Youssef, Mina; Scoglio, Caterina
2013-01-01
This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using the Pontryagin theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results show awareness of the infection level at which the mitigation strategies are effectively applied to the contact weights. PMID:23906209
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
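As a rough sketch of the kind of regression-based overdispersion check described above (a Cameron-Trivedi style auxiliary regression on a Poisson GLM, not the authors' exact relative-survival procedure), one could proceed as below; the simulated data and variable names are illustrative only.

```python
# Fit a Poisson GLM, then test for overdispersion with an auxiliary OLS regression:
# ((y - mu)^2 - y) / mu = alpha * mu + error. A significantly positive alpha
# indicates overdispersion. Toy data, not the cancer registry example.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
mu_true = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu_true))   # overdispersed counts

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = poisson_fit.mu

z = ((y - mu) ** 2 - y) / mu
aux = sm.OLS(z, mu).fit()
print("alpha-hat:", aux.params[0], " t:", aux.tvalues[0], " p:", aux.pvalues[0])
```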
Theoretical analysis of non-uniform skin effects on drawdown variation
NASA Astrophysics Data System (ADS)
Chen, C.-S.; Chang, C. C.; Lee, M. S.
2003-04-01
Under field conditions, the skin zone surrounding the well screen is rarely uniformly distributed in the vertical direction. To understand such non-uniform skin effects on drawdown variation, we assume the skin factor to be an arbitrary, continuous or piecewise continuous function S_k(z), and incorporate it into a well hydraulics model for constant rate pumping in a homogeneous, vertically anisotropic, confined aquifer. Solutions of depth-specific drawdown and vertical average drawdown are determined by using the Gram-Schmidt method. The non-uniform effects of S_k(z) in vertical average drawdown are averaged out, and can be represented by a constant skin factor S_k. As a result, drawdown of fully penetrating observation wells can be analyzed by appropriate well hydraulics theories assuming a constant skin factor. The constant S_k is the vertical average value of S_k(z) weighted by the well bore flux q_w(z). In depth-specific drawdown, however, the non-uniform effects of S_k(z) vary with radial and vertical distances, which are under the influence of the vertical profile of S_k(z) and the vertical anisotropy ratio K_r/K_z. Therefore, drawdown of partially penetrating observation wells may reflect the vertical anisotropy as well as the non-uniformity of the skin zone. The method of determining S_k(z) developed herein involves the use of q_w(z), which can be measured with the borehole flowmeter, and of K_r/K_z and S_k, which can be determined by conventional pumping tests.
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables but that the conditional probability distribution functions in the tail obey the Weibull distribution.
Variable horizon in a peridynamic medium
Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo
2015-12-10
Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.
Nonlinear Dynamics of Turbulent Thermals in Shear Flow
NASA Astrophysics Data System (ADS)
Ingel, L. Kh.
2018-03-01
The nonlinear integral model of a turbulent thermal is extended to the case of the horizontal component of its motion relative to the medium (e.g., thermal floating-up in shear flow). In contrast to traditional models, the possibility of a heat source in the thermal is taken into account. For a piecewise constant vertical profile of the horizontal velocity of the medium and a constant vertical velocity shear, analytical solutions are obtained which describe different modes of dynamics of thermals. The nonlinear interaction between the horizontal and vertical components of thermal motion is studied because each of the components influences the rate of entrainment of the surrounding medium, i.e., the growth rate of the thermal size and, hence, its mobility. It is shown that the enhancement of the entrainment of the medium due to the interaction between the thermal and the cross flow can lead to a significant decrease in the mobility of the thermal.
The estimation of material and patch parameters in a PDE-based circular plate model
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.
1995-01-01
The estimation of material and patch parameters for a system involving a circular plate, to which piezoceramic patches are bonded, is considered. A partial differential equation (PDE) model for the thin circular plate is used with the passive and active contributions from the patches included in the internal and external bending moments. This model contains piecewise constant parameters describing the density, flexural rigidity, Poisson ratio, and Kelvin-Voigt damping for the system as well as patch constants and a coefficient for viscous air damping. Examples demonstrating the estimation of these parameters with experimental acceleration data and a variety of inputs to the experimental plate are presented. By using a physically-derived PDE model to describe the system, parameter sets consistent across experiments are obtained, even when phenomena such as damping due to electric circuits affect the system dynamics.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. Then we implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic solution, both to provide a starting point and to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
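A minimal sketch of the surrogate idea mentioned above, replacing an expensive recourse-like function by a cheap piecewise-linear interpolant before estimating its mean by Monte Carlo; the stand-in function, knot placement and sampling distribution are assumptions for illustration, not the stochastic programs studied in the dissertation.

```python
# Replace an "expensive" function by a piecewise-linear interpolant built from a few
# evaluations, then estimate the mean of the cheap surrogate by Monte Carlo.
import numpy as np

def expensive_recourse(xi):
    # Placeholder for a costly second-stage value function evaluation.
    return np.log1p(np.abs(xi)) + 0.1 * xi ** 2

rng = np.random.default_rng(1)
knots = np.linspace(-4.0, 4.0, 17)          # a handful of expensive evaluations
values = expensive_recourse(knots)

def cheap_surrogate(xi):
    # Piecewise-linear interpolation between the knot evaluations.
    return np.interp(xi, knots, values)

samples = rng.normal(size=100_000)          # scenario samples of the random parameter
print("surrogate mean:", cheap_surrogate(samples).mean())
print("direct mean   :", expensive_recourse(samples).mean())
```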
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code, SLP, is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP, and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP, and as equivalent standard linear programs using a simple upper-bounded linear programming code, SUBLP.
A partially penalty immersed Crouzeix-Raviart finite element method for interface problems.
An, Na; Yu, Xijun; Chen, Huanzhen; Huang, Chaobao; Liu, Zhongyan
2017-01-01
The elliptic equations with discontinuous coefficients are often used to describe the problems of multiple materials or fluids with different densities or conductivities or diffusivities. In this paper we develop a partially penalty immersed finite element (PIFE) method on triangular grids for anisotropic flow models, in which the diffusion coefficient is a piecewise positive-definite matrix. The standard linear Crouzeix-Raviart type finite element space is used on non-interface elements and the piecewise linear Crouzeix-Raviart type immersed finite element (IFE) space is constructed on interface elements. The piecewise linear functions satisfying the interface jump conditions are uniquely determined by the integral averages on the edges as degrees of freedom. The PIFE scheme is given based on the symmetric, nonsymmetric or incomplete interior penalty discontinuous Galerkin formulation. The solvability of the method is proved and the optimal error estimates in the energy norm are obtained. Numerical experiments are presented to confirm our theoretical analysis and show that the newly developed PIFE method has optimal-order convergence in the [Formula: see text] norm as well. In addition, numerical examples also indicate that this method is valid for both the isotropic and the anisotropic elliptic interface problems.
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
Geometric constrained variational calculus I: Piecewise smooth extremals
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2015-05-01
A geometric setup for constrained variational calculus is presented. The analysis deals with the study of the extremals of an action functional defined on piecewise differentiable curves, subject to differentiable, non-holonomic constraints. Special attention is paid to the tensorial aspects of the theory. As far as the kinematical foundations are concerned, a fully covariant scheme is developed through the introduction of the concept of infinitesimal control. The standard classification of the extremals into normal and abnormal ones is discussed, pointing out the existence of an algebraic algorithm assigning to each admissible curve a corresponding abnormality index, related to the co-rank of a suitable linear map. Attention is then shifted to the study of the first variation of the action functional. The analysis includes a revisitation of Pontryagin's equations and of the Lagrange multipliers method, as well as a reformulation of Pontryagin's algorithm in Hamiltonian terms. The analysis is completed by a general result, concerning the existence of finite deformations with fixed endpoints.
A method for analyzing clustered interval-censored data based on Cox's model.
Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau
2013-02-28
Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimations are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
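To make the piecewise-constant baseline hazard idea concrete, a minimal sketch (without covariates, clustering or interval censoring, and with synthetic data) is given below: within each interval the maximum-likelihood hazard is simply the number of events divided by the person-time at risk in that interval.

```python
# Piecewise-constant hazard estimation on synthetic failure times.
import numpy as np

rng = np.random.default_rng(2)
t = rng.exponential(scale=2.0, size=5000)        # failure times, true constant hazard 0.5
cuts = np.array([0.0, 1.0, 2.0, 4.0, np.inf])    # interval boundaries defining the pieces

for lo, hi in zip(cuts[:-1], cuts[1:]):
    exposure = (np.clip(t, lo, hi) - lo).sum()   # person-time spent inside [lo, hi)
    events = np.sum((t >= lo) & (t < hi))        # events occurring inside [lo, hi)
    print(f"[{lo:4.1f}, {hi:4.1f}): estimated hazard ~ {events / exposure:.3f}")
```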
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, the Kalman filter with piecewise constant white acceleration, the Kalman filter with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
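The sketch below illustrates the second of the listed filters, the piecewise constant (discrete) white-noise acceleration model, in its standard textbook form: the Riccati recursion is iterated to steady state and the covariance is then extrapolated over an extra latency. The numbers (update period, acceleration and measurement standard deviations, latency) are illustrative assumptions, not values from the paper.

```python
# Discrete white-noise ("piecewise constant") acceleration tracking model:
# steady-state filter covariance and its growth when the track is extrapolated.
import numpy as np

def f_q(T, sigma_a):
    F = np.array([[1.0, T], [0.0, 1.0]])         # constant-velocity transition
    G = np.array([[0.5 * T ** 2], [T]])          # acceleration input gain
    return F, sigma_a ** 2 * (G @ G.T)           # process noise Q

T, sigma_a, sigma_m = 1.0, 2.0, 10.0             # update period, accel. std, meas. std
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_m ** 2]])

F, Q = f_q(T, sigma_a)
P = np.eye(2) * 1e6                              # diffuse prior
for _ in range(200):                             # iterate predict/update to steady state
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P = (np.eye(2) - K @ H) @ P

tau = 0.5                                        # extra latency before the track is used
F_tau, Q_tau = f_q(tau, sigma_a)
P_ext = F_tau @ P @ F_tau.T + Q_tau
print("post-update position RMS :", np.sqrt(P[0, 0]))
print("extrapolated position RMS:", np.sqrt(P_ext[0, 0]))
```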
ON THE BRIGHTNESS AND WAITING-TIME DISTRIBUTIONS OF A TYPE III RADIO STORM OBSERVED BY STEREO/WAVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastwood, J. P.; Hudson, H. S.; Krucker, S.
2010-01-10
Type III solar radio storms, observed at frequencies below ≈16 MHz by space-borne radio experiments, correspond to the quasi-continuous, bursty emission of electron beams onto open field lines above active regions. The mechanisms by which a storm can persist in some cases for more than a solar rotation whilst exhibiting considerable radio activity are poorly understood. To address this issue, the statistical properties of a type III storm observed by the STEREO/WAVES radio experiment are presented, examining both the brightness distribution and (for the first time) the waiting-time distribution (WTD). Single power-law behavior is observed in the number distribution as a function of brightness; the power-law index is ≈2.1 and is largely independent of frequency. The WTD is found to be consistent with a piecewise-constant Poisson process. This indicates that during the storm individual type III bursts occur independently and suggests that the storm dynamics are consistent with avalanche-type behavior in the underlying active region.
A boundary-value problem for a first-order hyperbolic system in a two-dimensional domain
NASA Astrophysics Data System (ADS)
Zhura, N. A.; Soldatov, A. P.
2017-06-01
We consider a strictly hyperbolic first-order system of three equations with constant coefficients in a bounded piecewise-smooth domain. The boundary of the domain is assumed to consist of six smooth non-characteristic arcs. A boundary-value problem in this domain is posed by alternately prescribing one or two linear combinations of the components of the solution on these arcs. We show that this problem has a unique solution under certain additional conditions on the coefficients of these combinations, the boundary of the domain and the behaviour of the solution near the characteristics passing through the corner points of the domain.
Effective Methods for Solving Band SLEs after Parabolic Nonlinear PDEs
NASA Astrophysics Data System (ADS)
Veneva, Milena; Ayriyan, Alexander
2018-04-01
A class of models of heat transfer processes in a multilayer domain is considered. The governing equation is a nonlinear heat-transfer equation with different temperature-dependent densities and thermal coefficients in each layer. Homogeneous Neumann boundary conditions and ideal contact ones are applied. A finite difference scheme on a special uneven mesh with a second-order approximation in the case of a piecewise constant spatial step is built. This discretization leads to a pentadiagonal system of linear equations (SLEs) with a matrix which is neither diagonally dominant, nor positive definite. Two different methods for solving such an SLE are developed: diagonal dominantization and symbolic algorithms.
SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM
A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications
Austin, Peter C.
2017-01-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954
NASA Astrophysics Data System (ADS)
Jex, Michal; Lotoreichik, Vladimir
2016-02-01
Let Λ ⊂ ℝ² be a non-closed piecewise-C¹ curve, which is either bounded with two free endpoints or unbounded with one free endpoint. Let u±|Λ ∈ L²(Λ) be the traces of a function u in the Sobolev space H¹(ℝ²∖Λ) onto the two faces of Λ. We prove that for a wide class of shapes of Λ the Schrödinger operator H^Λ_ω with δ'-interaction supported on Λ of strength ω ∈ L∞(Λ; ℝ), associated with the quadratic form H¹(ℝ²∖Λ) ∋ u ↦ ∫_{ℝ²} |∇u|² dx − ∫_Λ ω |u₊|Λ − u₋|Λ|² ds, has no negative spectrum provided that ω is pointwise majorized by a strictly positive function explicitly expressed in terms of Λ. If, additionally, the domain ℝ²∖Λ is quasi-conical, we show that σ(H^Λ_ω) = [0, +∞). For a bounded curve Λ in our class and non-varying interaction strength ω ∈ ℝ, we derive the existence of a constant ω∗ > 0 such that σ(H^Λ_ω) = [0, +∞) for all ω ∈ (−∞, ω∗]; informally speaking, bound states are absent in the weak coupling regime.
Least Squares Approximation By G1 Piecewise Parametric Cubes
1993-12-01
Parametric piecewise cubic polynomials are used throughout... piecewise parametric cubic polynomial to a sequence of ordered points in the plane. Cubic Bézier curves are used as a basis. The parameterization, the ...
A RUTCOR Project in Discrete Applied Mathematics
1990-02-20
... representations of smooth piecewise polynomial functions over triangulated regions have led in particular to the conclusion that Groebner basis methods of... "Reversing Number of a Digraph," in preparation. 4. Billera, L.J., and Rose, L.L., "Groebner Basis Methods for Multivariate Splines," RRR 1-89, January
NASA Astrophysics Data System (ADS)
Aioanei, Daniel; Samorì, Bruno; Brucale, Marco
2009-12-01
Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
Variational models for discontinuity detection
NASA Astrophysics Data System (ADS)
Vitti, Alfonso; Battista Benciolini, G.
2010-05-01
The Mumford-Shah variational model produces a smooth approximation of the data and detects data discontinuities by solving a minimum problem involving an energy functional. The Blake-Zisserman model permits also the detection of discontinuities in the first derivative of the approximation. This model can result in a quasi piece-wise linear approximation, whereas the Mumford-Shah can result in a quasi piece-wise constant approximation. The two models are well known in the mathematical literature and are widely adopted in computer vision for image segmentation. In Geodesy the Blake-Zisserman model has been applied successfully to the detection of cycle-slips in linear combinations of GPS measurements. Few attempts to apply the model to time series of coordinates have been made so far. The problem of detecting discontinuities in time series of GNSS coordinates is well known and its relevance increases as the quality of geodetic measurements, analysis techniques, models and products improves. The application of the Blake-Zisserman model appears reasonable and promising due to the model's capability to detect both position and velocity discontinuities in the same time series. The detection of position and velocity changes is of great interest in geophysics where the discontinuity itself can be the very relevant object. In work on the realization of reference frames, detecting position and velocity discontinuities may help to define models that can handle non-linear motions. In this work the Mumford-Shah and the Blake-Zisserman models are briefly presented; the treatment is carried out from a practical viewpoint rather than from a theoretical one. A set of time series of GNSS coordinates has been processed and the results are presented in order to highlight the capabilities and the weaknesses of the variational approach. A first attempt to derive some indication for the automatic set-up of the model parameters has been made. The underlying relation that could link the parameter values to the statistical properties of the data has been investigated.
Three-Dimensional Piecewise-Continuous Class-Shape Transformation of Wings
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
Class-Shape Transformation (CST) is a popular method for creating analytical representations of the surface coordinates of various components of aerospace vehicles. A wide variety of two- and three-dimensional shapes can be represented analytically using only a modest number of parameters, and the surface representation is smooth and continuous to as fine a degree as desired. This paper expands upon the original two-dimensional representation of airfoils to develop a generalized three-dimensional CST parametrization scheme that is suitable for a wider range of aircraft wings than previous formulations, including wings with significant non-planar shapes such as blended winglets and box wings. The method uses individual functions for the spanwise variation of airfoil shape, chord, thickness, twist, and reference axis coordinates to build up the complete wing shape. An alternative formulation parameterizes the slopes of the reference axis coordinates in order to relate the spanwise variation to the tangents of the sweep and dihedral angles. Also discussed are methods for fitting existing wing surface coordinates, including the use of piecewise equations to handle discontinuities, and mathematical formulations of geometric continuity constraints. A subsonic transport wing model is used as an example problem to illustrate the application of the methodology and to quantify the effects of piecewise representation and curvature constraints.
Active distribution network planning considering linearized system loss
NASA Astrophysics Data System (ADS)
Li, Xiao; Wang, Mingqiang; Xu, Hao
2018-02-01
In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of DGs and the topology of the network are fixed. The proposed model optimizes the capacities of DG and the optimal distribution line capacity simultaneously by a cost/benefit analysis, and the benefit is quantified by the reduction of the expected interruption cost. Besides, the network loss is explicitly analyzed in the paper. For simplicity, the network loss is appropriately simplified as a quadratic function of the difference of voltage phase angles. Then it is further piecewise linearized. In this paper, a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborated linearization technique is tested on the IEEE 33-bus distribution network system.
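A small sketch of piecewise linearization with non-uniform segment lengths, as discussed above, is given below for a quadratic loss term; the breakpoints are illustrative (finer near zero where curvature matters most) and do not come from the paper.

```python
# Piecewise linearization of a quadratic loss with non-uniform segment lengths.
import numpy as np

def piecewise_linearize(f, breakpoints):
    """Return slopes/intercepts of the chords of f between consecutive breakpoints."""
    x = np.asarray(breakpoints, dtype=float)
    y = f(x)
    slopes = np.diff(y) / np.diff(x)
    intercepts = y[:-1] - slopes * x[:-1]
    return slopes, intercepts

def evaluate(xq, breakpoints, slopes, intercepts):
    idx = np.clip(np.searchsorted(breakpoints, xq) - 1, 0, len(slopes) - 1)
    return slopes[idx] * xq + intercepts[idx]

loss = lambda d: d ** 2                              # loss vs. phase-angle difference
bps = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])      # shorter segments near zero
slopes, intercepts = piecewise_linearize(loss, bps)

d = np.linspace(0.0, 0.8, 9)
err = np.abs(evaluate(d, bps, slopes, intercepts) - loss(d))
print("max absolute linearization error:", err.max())
```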
Mamey, Mary Rose; Barbosa-Leiker, Celestina; McPherson, Sterling; Burns, G Leonard; Parks, Craig; Roll, John
2015-12-01
Researchers often want to examine 2 comorbid conditions simultaneously. One strategy to do so is through the use of parallel latent growth curve modeling (LGCM). This statistical technique allows for the simultaneous evaluation of 2 disorders to determine the explanations and predictors of change over time. Additionally, a piecewise model can help identify whether there are more than 2 growth processes within each disorder (e.g., during a clinical trial). A parallel piecewise LGCM was applied to self-reported attention-deficit/hyperactivity disorder (ADHD) and self-reported substance use symptoms in 303 adolescents enrolled in cognitive-behavioral therapy treatment for a substance use disorder and receiving either oral methylphenidate or placebo for ADHD across 16 weeks. Assessing these 2 disorders concurrently allowed us to determine whether elevated levels of 1 disorder predicted elevated levels or increased risk of the other disorder. First, a piecewise growth model measured ADHD and substance use separately. Next, a parallel piecewise LGCM was used to estimate the regressions across disorders to determine whether higher scores at baseline of the disorders (i.e., ADHD or substance use disorder) predicted rates of change in the related disorder. Finally, treatment was added to the model to predict change. While the analyses revealed no significant relationships across disorders, this study explains and applies a parallel piecewise growth model to examine the developmental processes of comorbid conditions over the course of a clinical trial. Strengths of piecewise and parallel LGCMs for other addictions researchers interested in examining dual processes over time are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Bo, Zhang; Li, Jin-Ling; Wang, Guan-Gli
2002-01-01
We checked the dependence of the estimation of parameters on the choice of piecewise interval in the continuous piecewise linear modeling of the residual clock and atmosphere effects by single analysis of 27 VLBI experiments involving Shanghai station (Seshan 25m). The following are tentatively shown: (1) Different choices of the piecewise interval lead to differences in the estimation of station coordinates and in the weighted root mean squares (wrms) of the delay residuals, which can be of the order of centimeters or dozens of picoseconds respectively, so the choice of piecewise interval should not be arbitrary. (2) The piecewise interval should not be too long, otherwise the short-term variations in the residual clock and atmospheric effects cannot be properly modeled; while in order to maintain enough degrees of freedom in parameter estimation, the interval cannot be too short, otherwise the normal equation may become nearly or exactly singular and the noise cannot be constrained as well. Therefore the choice of the interval should be within some reasonable range. (3) Since the conditions of clock and atmosphere are different from experiment to experiment and from station to station, the reasonable range of the piecewise interval should be tested and chosen separately for each experiment as well as for each station by real data analysis. This is really arduous work in routine data analysis. (4) Generally speaking, with the default interval for the clock at 60 min, the reasonable range of the piecewise interval for residual atmospheric effect modeling is between 10 min and 40 min, while with the default interval for the atmosphere at 20 min, that for residual clock behavior is between 20 min and 100 min.
A fast and accurate online sequential learning algorithm for feedforward networks.
Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N
2006-11-01
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
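The sketch below captures the two phases described above (a closed-form batch solution for the output weights, then chunk-by-chunk recursive least-squares updates) for a small random sigmoid hidden layer on toy regression data; it is a hedged illustration of the OS-ELM idea, not the authors' reference implementation, and the network sizes and data are assumptions.

```python
# OS-ELM-style learning: random hidden layer, batch initialization, sequential updates.
import numpy as np

rng = np.random.default_rng(3)
n_hidden, n_in = 30, 1
W = rng.normal(size=(n_in, n_hidden))            # random input weights (never retrained)
b = rng.normal(size=n_hidden)                    # random biases

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))    # bounded piecewise-continuous activation

def target(X):
    return np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=len(X))

# Initialization phase on a small batch.
X0 = rng.uniform(-1, 1, size=(100, n_in)); T0 = target(X0)
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ T0

# Sequential phase: absorb new chunks without revisiting old data.
for _ in range(20):
    Xk = rng.uniform(-1, 1, size=(50, n_in)); Tk = target(Xk)
    Hk = hidden(Xk)
    S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

Xtest = np.linspace(-1, 1, 200).reshape(-1, 1)
rmse = np.sqrt(np.mean((hidden(Xtest) @ beta - np.sin(3.0 * Xtest[:, 0])) ** 2))
print("test RMSE:", rmse)
```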
Theory of Turing Patterns on Time Varying Networks.
Petit, Julien; Lauwens, Ben; Fanelli, Duccio; Carletti, Timoteo
2017-10-06
The process of pattern formation for a multispecies model anchored on a time varying network is studied. A nonhomogeneous perturbation superposed on a homogeneous stable fixed point can be amplified following the Turing mechanism of instability, solely instigated by the network dynamics. By properly tuning the frequency of the imposed network evolution, one can make the examined system behave as its averaged counterpart, over a finite time window. This is the key observation to derive a closed analytical prediction for the onset of the instability in the time dependent framework. Continuously and piecewise constant periodic time varying networks are analyzed, setting the framework for the proposed approach. The extension to nonperiodic settings is also discussed.
NASA Astrophysics Data System (ADS)
Guo, Yongfeng; Shen, Yajun; Tan, Jianguo
2016-09-01
The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.
Hypothalamic stimulation and baroceptor reflex interaction on renal nerve activity.
NASA Technical Reports Server (NTRS)
Wilson, M. F.; Ninomiya, I.; Franz, G. N.; Judy, W. V.
1971-01-01
The basal level of mean renal nerve activity (MRNA-0) measured in anesthetized cats was found to be modified by the additive interaction of hypothalamic and baroceptor reflex influences. Data were collected with the four major baroceptor nerves either intact or cut, and with mean aortic pressure (MAP) either clamped with a reservoir or raised with l-epinephrine. With intact baroceptor nerves, MRNA stayed essentially constant at level MRNA-0 for MAP below an initial pressure P1, and fell approximately linearly to zero as MAP was raised to P2. Cutting the baroceptor nerves kept MRNA at MRNA-0 (assumed to represent basal central neural output) independent of MAP. The addition of hypothalamic stimulation produced nearly constant increments in MRNA for all pressure levels up to P2, with complete inhibition at some level above P2. The increments in MRNA depended on frequency and location of the stimulus. A piecewise linear model describes MRNA as a linear combination of hypothalamic, basal central neural, and baroceptor reflex activity.
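A minimal numeric sketch of the piecewise linear description given above: MRNA is constant at its basal level below P1, declines linearly to zero at P2, and hypothalamic stimulation adds a roughly constant increment below P2. The parameter values are illustrative placeholders, not measurements from the study.

```python
# Piecewise linear model of mean renal nerve activity (MRNA) versus mean aortic pressure.
import numpy as np

MRNA0, P1, P2 = 1.0, 100.0, 180.0      # basal activity (a.u.) and pressure break points (mmHg)

def mrna(map_mmhg, hypothalamic_increment=0.0):
    baro = np.interp(map_mmhg, [P1, P2], [MRNA0, 0.0])      # flat below P1, zero above P2
    bump = hypothalamic_increment if map_mmhg < P2 else 0.0  # increment shown only below P2
    return baro + bump

for p in (80, 120, 160, 200):
    print(p, "mmHg ->", round(float(mrna(p)), 3), "| with stimulation:", round(float(mrna(p, 0.4)), 3))
```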
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.
2007-02-02
First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak's retention time in all of the chromatograms, divided by the peak-width-at-base. The raw simulated data sets that were studied contained peaks with average relative shifts ranging between 0.3 and 3.0. Second, a "real" data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8 fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported. Piecewise alignment either did not change the peak height, or caused it to slightly decrease. The average relative difference in peak height after piecewise alignment was -0.20%. Piecewise alignment caused the peak areas to either stay the same, slightly increase, or slightly decrease. The average absolute relative difference in area after piecewise alignment was 0.15%.
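The unsupervised metric used above, the average correlation coefficient between the first chromatogram and every other one, is easy to reproduce on synthetic data; the sketch below uses shifted Gaussian peaks as stand-in chromatograms (illustrative only, not the published data).

```python
# Average correlation of every chromatogram against the first one: the quantity that
# the unsupervised alignment-parameter optimization maximizes.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 2000)

def chromatogram(shift):
    return np.exp(-0.5 * ((t - 5.0 - shift) / 0.05) ** 2)   # one Gaussian peak

chroms = np.array([chromatogram(rng.normal(scale=0.1)) for _ in range(20)])

def average_correlation(chroms):
    ref = chroms[0]
    return np.mean([np.corrcoef(ref, c)[0, 1] for c in chroms[1:]])

print("average correlation before alignment:", round(average_correlation(chroms), 3))
```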
Effect of smoothing on robust chaos.
Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae
2010-08-01
In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where border is the phase-space region where discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a parameter of smoothing and find that the measure of the periodic windows in the parameter space scales linearly with the parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a supposedly piecewise linear dynamical system, which has been implemented as an electronic circuit, is also provided.
Sim, K S; Yeap, Z X; Tso, C P
2016-11-01
An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted noise-free zero offset point from a corrupted image. Three existing methods, the nearest neighborhood, first order interpolation and original PCHIP, are used to compare with the performance of the proposed ATPCHIP method, with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
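A hedged one-dimensional sketch of the underlying idea (the zero-lag autocovariance contains the noise variance, so a noise-free value at lag 0 is estimated by PCHIP interpolation/extrapolation from neighbouring lags) is given below using SciPy's PchipInterpolator; this is a simplification of the image-based method, not the ATPCHIP code, and the simulated signal is an assumption.

```python
# Estimate SNR of a noisy 1-D signal from its autocovariance, extrapolating the
# noise-free zero-lag value with a piecewise cubic Hermite interpolant (PCHIP).
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(5)
n = 5000
signal = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")  # correlated "image" line
noisy = signal + 0.2 * rng.normal(size=n)

def autocovariance(x, max_lag):
    x = x - x.mean()
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

acov = autocovariance(noisy, max_lag=10)
lags = np.arange(1, 11)
acov0_noise_free = PchipInterpolator(lags, acov[1:], extrapolate=True)(0.0)

noise_var = acov[0] - acov0_noise_free
print("estimated SNR:", acov0_noise_free / noise_var)
print("reference SNR:", signal.var() / 0.2 ** 2)
```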
Linear response formula for piecewise expanding unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Smania, Daniel
2008-04-01
The average R(t)=\\int \\varphi\\,\\rmd \\mu_t of a smooth function phiv with respect to the SRB measure μt of a smooth one-parameter family ft of piecewise expanding interval maps is not always Lipschitz (Baladi 2007 Commun. Math. Phys. 275 839-59, Mazzolena 2007 Master's Thesis Rome 2, Tor Vergata). We prove that if ft is tangent to the topological class of f, and if ∂t ft|t = 0 = X circle f, then R(t) is differentiable at zero, and R'(0) coincides with the resummation proposed (Baladi 2007) of the (a priori divergent) series \\sum_{n=0}^\\infty \\int X(y) \\partial_y (\\varphi \\circ f^n)(y)\\,\\rmd \\mu_0(y) given by Ruelle's conjecture. In fact, we show that t map μt is differentiable within Radon measures. Linear response is violated if and only if ft is transversal to the topological class of f.
Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling
NASA Astrophysics Data System (ADS)
Dobronets, B. S.; Popova, O. A.
2018-05-01
Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the offered approach to data aggregation can be interpreted as the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using piecewise polynomial models. A suitable example of such an approach is the spline. The authors show that their approach to data aggregation allows reducing the level of data uncertainty and significantly increasing the efficiency of numerical calculations. To demonstrate the degree of correspondence of the proposed methods to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
A novel approach to piecewise analytic agricultural machinery path reconstruction
NASA Astrophysics Data System (ADS)
Wörz, Sascha; Mederle, Michael; Heizinger, Valentin; Bernhardt, Heinz
2017-12-01
Before analysing machinery operation in fields, one has to cope with the problem that the GPS signals from GPS receivers located on the machines contain measurement noise and are time-discrete, and that the underlying physical system describing the positions, axial and absolute velocities, angular rates and angular orientation of the operating machines during the whole working time is unknown. This research work presents a new three-dimensional mathematical approach using kinematic relations based on control variables such as Euler angular velocities and angles and a discrete target control problem, such that the state control function is given by the sum of squared residuals involving the state and control variables, to obtain such a physical system, which yields a noise-free and piecewise analytic representation of the positions, velocities, angular rates and angular orientation. It can be used for a further detailed study and analysis of the problem of why agricultural vehicles operate in practice as they do.
Payment contracts in a preventive health care system: a perspective from operations management.
Yaesoubi, Reza; Roberts, Stephen D
2011-12-01
We consider a health care system consisting of two noncooperative parties: a health purchaser (payer) and a health provider, where the interaction between the two parties is governed by a payment contract. We determine the contracts that coordinate the health purchaser-health provider relationship; i.e. the contracts that maximize the population's welfare while allowing each entity to optimize its own objective function. We show that under certain conditions (1) when the number of customers for a preventive medical intervention is verifiable, there exists a gate-keeping contract and a set of concave piecewise linear contracts that coordinate the system, and (2) when the number of customers is not verifiable, there exists a contract of bounded linear form and a set of incentive-feasible concave piecewise linear contracts that coordinate the system. Copyright © 2011 Elsevier B.V. All rights reserved.
Sliding mode control of outbreaks of emerging infectious diseases.
Xiao, Yanni; Xu, Xiaxia; Tang, Sanyi
2012-10-01
This paper proposes and analyzes a mathematical model of an infectious disease system with a piecewise control function concerning a threshold policy for disease management strategy. The proposed models extend the classic models by including a piecewise incidence rate to represent control or precautionary measures being triggered once the number of infected individuals exceeds a threshold level. The long-term behaviour of the proposed non-smooth system under this strategy consists of the so-called sliding motion, a very rapid switching between application and interruption of the control action. Model solutions ultimately approach either one of two endemic states for the two structures or the sliding equilibrium on the switching surface, depending on the threshold level. Our findings suggest that proper combinations of threshold densities and control intensities based on the threshold policy can either preclude outbreaks or lead the number of infected to a previously chosen level.
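A minimal simulation of the threshold policy described above (transmission is reduced once the number of infected exceeds a threshold, giving a piecewise incidence rate) is sketched below with forward-Euler integration; all parameter values are illustrative, and the sliding-mode analysis itself is not reproduced.

```python
# SIR dynamics with a piecewise incidence rate: the transmission rate drops whenever
# the infected count exceeds the threshold I_c.
beta, beta_controlled, gamma, I_c = 0.4, 0.15, 0.1, 50.0
S, I = 990.0, 10.0
N = S + I
dt, steps = 0.01, 20000

history = []
for _ in range(steps):
    b = beta_controlled if I > I_c else beta     # control triggered above the threshold
    new_inf = b * S * I / N
    S += dt * (-new_inf)
    I += dt * (new_inf - gamma * I)
    history.append(I)

print("peak infected :", round(max(history), 1))
print("final infected:", round(history[-1], 1))
```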
The Hindmarsh-Rose neuron model: bifurcation analysis and piecewise-linear approximations.
Storace, Marco; Linaro, Daniele; de Lange, Enno
2008-09-01
This paper provides a global picture of the bifurcation scenario of the Hindmarsh-Rose model. A combination between simulations and numerical continuations is used to unfold the complex bifurcation structure. The bifurcation analysis is carried out by varying two bifurcation parameters and evidence is given that the structure that is found is universal and appears for all combinations of bifurcation parameters. The information about the organizing principles and bifurcation diagrams are then used to compare the dynamics of the model with that of a piecewise-linear approximation, customized for circuit implementation. A good match between the dynamical behaviors of the models is found. These results can be used both to design a circuit implementation of the Hindmarsh-Rose model mimicking the diversity of neural response and as guidelines to predict the behavior of the model as well as its circuit implementation as a function of parameters. (c) 2008 American Institute of Physics.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in the CT image contains a relatively uniform distribution of CT number. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate the piecewise constant template on the first-pass low-quality CT image reconstructed using an analytical algorithm. The template image is applied as an initial value into the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
NASA Astrophysics Data System (ADS)
Zolotaryuk, A. V.
2017-06-01
Several families of one-point interactions are derived from the system consisting of two and three δ-potentials which are regularized by piecewise constant functions. In physical terms such an approximating system represents two or three extremely thin layers separated by some distance. The two-scale squeezing of this heterostructure to one point as both the width of the δ-approximating functions and the distance between these functions simultaneously tend to zero is studied using a power parameterization through a squeezing parameter ε → 0, so that the intensity of each δ-potential is c_j = a_j ε^(1−μ), a_j ∈ ℝ, j = 1, 2, 3, the width of each layer is l = ε and the distance between the layers is r = cε^τ, c > 0. It is shown that at some values of the intensities a_1, a_2 and a_3, the transmission across the limit point potentials is non-zero, whereas outside these (resonance) values the one-point interactions are opaque, splitting the system at the point of singularity into two independent subsystems. Within the interval 1 < μ < 2, the resonance sets consist of two curves on the (a_1, a_2)-plane and three surfaces in the (a_1, a_2, a_3)-space. As the parameter μ approaches the value μ = 2, three types of splitting of the one-point interactions into countable families are observed.
Interstellar photoelectric absorption cross sections, 0.03-10 keV
NASA Technical Reports Server (NTRS)
Morrison, R.; McCammon, D.
1983-01-01
An effective absorption cross section per hydrogen atom has been calculated as a function of energy in the 0.03-10 keV range using the most recent atomic cross section and cosmic abundance data. Coefficients of a piecewise polynomial fit to the numerical results are given to allow convenient application in automated calculations.
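The sketch below illustrates how such a piecewise polynomial fit might be evaluated in an automated calculation. The band-wise quadratic-over-cubic form, the band edges, and all coefficients are assumptions made for illustration, not the published Morrison-McCammon table.

```python
# Minimal sketch of evaluating a piecewise polynomial cross-section fit.
import numpy as np

band_edges = np.array([0.03, 0.1, 0.284, 10.0])   # keV band boundaries (assumed)
coeffs = np.array([[20.0, 600.0, -2000.0],         # c0, c1, c2 per band
                   [35.0, 270.0, -480.0],          # (fabricated placeholder numbers)
                   [80.0, 19.0, 4.0]])

def sigma(E_keV):
    """Effective cross section per H atom, in illustrative units."""
    i = int(np.clip(np.searchsorted(band_edges, E_keV, side="right") - 1,
                    0, len(coeffs) - 1))
    c0, c1, c2 = coeffs[i]
    return (c0 + c1 * E_keV + c2 * E_keV**2) / E_keV**3

print(sigma(0.5), sigma(2.0))
```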
Bifurcation from an invariant to a non-invariant attractor
NASA Astrophysics Data System (ADS)
Mandal, D.
2016-12-01
Switching dynamical systems are very common in many areas of physics and engineering. We consider a piecewise linear map that periodically switches between more than one different functional forms. We show that in such systems it is possible to have a border collision bifurcation where the system transits from an invariant attractor to a non-invariant attractor.
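A minimal toy of the idea follows, assuming a specific (invented) pair of piecewise linear forms between which the map alternates on successive iterations; it is not the map studied in the paper.

```python
# Toy periodically switching piecewise linear map (illustrative forms only).
def f_A(x):                      # form applied on even steps
    return 0.6 * x + 0.3 if x < 0.5 else 0.6 * x - 0.1

def f_B(x):                      # form applied on odd steps
    return -0.8 * x + 0.9 if x < 0.5 else -0.8 * x + 1.1

x, orbit = 0.2, []
for n in range(2000):
    x = f_A(x) if n % 2 == 0 else f_B(x)
    orbit.append(x)
print(orbit[-10:])               # tail of the orbit after transients
```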
Numerical Recovering of a Speed of Sound by the BC-Method in 3D
NASA Astrophysics Data System (ADS)
Pestov, Leonid; Bolgova, Victoria; Danilin, Alexandr
We develop a numerical algorithm for solving the inverse problem for the wave equation by the Boundary Control method. The problem, which we refer to as a forward one, is an initial boundary value problem for the wave equation with zero initial data in a bounded domain. The inverse problem is to find the speed of sound c(x) from the measurements of waves induced by a set of boundary sources. The time of observation is assumed to be greater than twice the acoustic radius of the domain. The numerical algorithm for sound speed reconstruction is based on two steps. The first one is to find a (sufficiently large) number of controls {f_j} (the basic control is defined by the position of the source and some time delay), which generate the same number of known harmonic functions, i.e. Δu_j(·, T) = 0, where u_j is the wave generated by the control f_j. After that, a linear integral equation w.r.t. the speed of sound is obtained. A piecewise constant model of the speed is used. The result of numerical testing of a 3-dimensional model is presented.
Tuning the Fano factor of graphene via Fermi velocity modulation
NASA Astrophysics Data System (ADS)
Lima, Jonas R. F.; Barbosa, Anderson L. R.; Bezerra, C. G.; Pereira, Luiz Felipe C.
2018-03-01
In this work we investigate the influence of a Fermi velocity modulation on the Fano factor of periodic and quasi-periodic graphene superlattices. We consider the continuum model and use the transfer matrix method to solve the Dirac-like equation for graphene where the electrostatic potential, energy gap and Fermi velocity are piecewise constant functions of the position x. We found that in the presence of an energy gap, it is possible to tune the energy of the Fano factor peak, and consequently the location of the Dirac point, by a modulation in the Fermi velocity. Hence, the peak of the Fano factor can be used experimentally to identify the Dirac point. We show that for higher values of the Fermi velocity the Fano factor goes below 1/3 at the Dirac point. Furthermore, we show that in periodic superlattices the location of the Fano factor peaks is symmetric when the Fermi velocities v_A and v_B are exchanged; however, by introducing quasi-periodicity the symmetry is lost. The Fano factor usually holds a universal value for a specific transport regime, so the possibility of controlling it in graphene is a notable result.
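As a hedged aside, the snippet below illustrates the general transfer-matrix bookkeeping for piecewise constant regions using a scalar 1-D Helmholtz analogue; the Dirac-equation matrices actually used for graphene differ from this, and the wavenumbers and layer widths are arbitrary.

```python
# Generic transfer-matrix product over piecewise constant layers (scalar analogue).
import numpy as np

def layer_matrix(k, d):
    # propagates (u, u') across a layer where u'' + k^2 u = 0
    return np.array([[np.cos(k * d),       np.sin(k * d) / k],
                     [-k * np.sin(k * d),  np.cos(k * d)]])

ks = [1.0, 2.5, 1.0, 3.0]        # piecewise constant wavenumbers (assumed)
ds = [0.4, 0.2, 0.4, 0.1]        # layer widths (assumed)

M = np.eye(2)
for k, d in zip(ks, ds):
    M = layer_matrix(k, d) @ M   # accumulate layers left to right
print(M)
```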
Brittle failure of rock: A review and general linear criterion
NASA Astrophysics Data System (ADS)
Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan
2018-07-01
A failure criterion typically is phenomenological since few models exist to theoretically derive the mathematical function. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence on strength, a popular failure criterion is a linear equation in major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular nature of the failure surface is dictated by the experimental data.
Moving-window dynamic optimization: design of stimulation profiles for walking.
Dosen, Strahinja; Popović, Dejan B
2009-05-01
The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors) to output (intensity of muscle stimulation) representation of walking that serves in synthesizing a rule base for control of electrical stimulation for restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of tracking errors from desired trajectories, with a penalty function on the total muscle effort. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual and a model with parameters characterizing a potential hemiplegic user. The outputs of the simulation are piecewise constant intensities of electrical stimulation and trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
Time-Dependent Behavior of Diabase and a Nonlinear Creep Model
NASA Astrophysics Data System (ADS)
Yang, Wendong; Zhang, Qiangyong; Li, Shucai; Wang, Shugang
2014-07-01
Triaxial creep tests were performed on diabase specimens from the dam foundation of the Dagangshan hydropower station, and the typical characteristics of creep curves were analyzed. Based on the test results under different stress levels, a new nonlinear visco-elasto-plastic creep model with creep threshold and long-term strength was proposed by connecting an instantaneous elastic Hooke body, a visco-elasto-plastic Schiffman body, and a nonlinear visco-plastic body in series mode. By introducing the nonlinear visco-plastic component, this creep model can describe the typical creep behavior, which includes the primary creep stage, the secondary creep stage, and the tertiary creep stage. Three-dimensional creep equations under constant stress conditions were deduced. The yield approach index (YAI) was used as the criterion for the piecewise creep function to resolve the difficulty in determining the creep threshold value and the long-term strength. The expression of the visco-plastic component was derived in detail and the three-dimensional central difference form was given. An example was used to verify the credibility of the model. The creep parameters were identified, and the calculated curves were in good agreement with the experimental curves, indicating that the model is capable of replicating the physical processes.
Mass-corrections for the conservative coupling of flow and transport on collocated meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich
2016-01-15
Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite-elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions to obtain local or even strong mass-conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.
Hybrid LES/RANS Simulation of Transverse Sonic Injection into a Mach 2 Flow
NASA Technical Reports Server (NTRS)
Boles, John A.; Edwards, Jack R.; Baurle, Robert A.
2008-01-01
A computational study of transverse sonic injection of air and helium into a Mach 1.98 cross-flow is presented. A hybrid large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) turbulence model is used, with the two-equation Menter baseline (Menter-BSL) closure for the RANS part of the flow and a Smagorinsky-type model for the LES part of the flow. A time-dependent blending function, dependent on modeled turbulence variables, is used to shift the closure from RANS to LES. Turbulent structures are initiated and sustained through the use of a recycling / rescaling technique. Two higher-order discretizations, the Piecewise Parabolic Method (PPM) of Colella and Woodward, and the SONIC-A ENO scheme of Suresh and Huynh are used in the study. The results using the hybrid model show reasonably good agreement with time-averaged Mie scattering data and with experimental surface pressure distributions, even though the penetration of the jet into the cross-flow is slightly over-predicted. The LES/RANS results are used to examine the validity of commonly-used assumptions of constant Schmidt and Prandtl numbers in the intense mixing zone downstream of the injection location.
Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods
NASA Astrophysics Data System (ADS)
Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.
2012-03-01
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and then the model parameters are estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simpler prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.
NASA Astrophysics Data System (ADS)
Gonzales, Matthew Alejandro
The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process as well as require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before run time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, changes in the probability density functions, as well as changes in the density of the materials. The focus of this work is specific to the Doppler temperature feedback which results from Doppler broadening of cross sections as well as changes in the probability density function within the scattering kernel. This method is compared against published results using Mosteller's numerical benchmark to show accurate evaluations of the Doppler temperature coefficient, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering. An infinite medium benchmark for neutron free gas elastic scattering for large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free gas scattering model in MCNP6. Results show a quick increase in convergence of the analytic energy spectrum to the MCNP6 code with increasing target size, showing absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate a piecewise constant (in energy) absorption cross section to produce temperature feedback.
Results reinforce the constraints under which heavy gas theory may be applied, requiring a significant target size to accommodate increasing cross section structure. The energy-dependent, piecewise constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient, showing accurate calculations when using the adjoint-weighted method. Results show that the Doppler temperature coefficient computed using adjoint weighting and cross section derivatives accurately obtains the correct solution within statistics, as well as reducing computer runtimes by a factor of 50.
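A minimal sketch of the on-the-fly evaluation idea described above: polynomial-in-temperature coefficients stored per energy point are evaluated at run time, and the same fit yields the temperature derivative analytically. The polynomial order and coefficients below are invented; this is not the MCNP6/OTF implementation.

```python
# Sketch: evaluate a stored temperature polynomial and its derivative on the fly.
import numpy as np
from numpy.polynomial import polynomial as P

# one set of temperature-fit coefficients per energy grid point (fabricated numbers)
coeff_table = {0: np.array([4.0, -1.0e-3, 2.0e-7]),
               1: np.array([3.2, -5.0e-4, 1.0e-7])}

def xs_and_derivative(energy_index, T):
    c = coeff_table[energy_index]
    sigma = P.polyval(T, c)                 # sigma(T) = c0 + c1*T + c2*T^2
    dsigma_dT = P.polyval(T, P.polyder(c))  # analytic temperature derivative
    return sigma, dsigma_dT

print(xs_and_derivative(0, 750.0))
```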
Integrate and fire neural networks, piecewise contractive maps and limit cycles.
Catsigeras, Eleonora; Guiraud, Pierre
2013-09-01
We study the global dynamics of integrate and fire neural networks composed of an arbitrary number of identical neurons interacting by inhibition and excitation. We prove that if the interactions are strong enough, then the support of the stable asymptotic dynamics consists of limit cycles. We also find sufficient conditions for the synchronization of networks containing excitatory neurons. The proofs are based on the analysis of the equivalent dynamics of a piecewise continuous Poincaré map associated to the system. We show that for efficient interactions the Poincaré map is piecewise contractive. Using this contraction property, we prove that there exists a countable number of limit cycles attracting all the orbits dropping into the stable subset of the phase space. This result applies not only to the Poincaré map under study, but also to a wide class of general n-dimensional piecewise contractive maps.
Trajectory fitting in function space with application to analytic modeling of surfaces
NASA Technical Reports Server (NTRS)
Barger, Raymond L.
1992-01-01
A theory for representing a parameter-dependent function as a function trajectory is described. Additionally, a theory for determining a piecewise analytic fit to the trajectory is described. An example is given that illustrates the application of the theory to generating a smooth surface through a discrete set of input cross-section shapes. A simple procedure for smoothing in the parameter direction is discussed, and a computed example is given. Application of the theory to aerodynamic surface modeling is demonstrated by applying it to a blended wing-fuselage surface.
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
Swimming like algae: biomimetic soft artificial cilia
Sareh, Sina; Rossiter, Jonathan; Conn, Andrew; Drescher, Knut; Goldstein, Raymond E.
2013-01-01
Cilia are used effectively in a wide variety of biological systems from fluid transport to thrust generation. Here, we present the design and implementation of artificial cilia, based on a biomimetic planar actuator using soft-smart materials. This actuator is modelled on the cilia movement of the alga Volvox, and represents the cilium as a piecewise constant-curvature robotic actuator that enables the subsequent direct translation of natural articulation into a multi-segment ionic polymer metal composite actuator. It is demonstrated how the combination of optimal segmentation pattern and biologically derived per-segment driving signals reproduce natural ciliary motion. The amenability of the artificial cilia to scaling is also demonstrated through the comparison of the Reynolds number achieved with that of natural cilia. PMID:23097503
NASA Astrophysics Data System (ADS)
Majewski, Kurt
2018-03-01
Exact solutions of the Bloch equations with T1- and T2-relaxation terms for piecewise constant magnetic fields are numerically challenging. We therefore investigate an approximation for the achieved magnetization in which rotations and relaxations are split into separate operations. We develop an estimate for its accuracy and explicit first and second order derivatives with respect to the complex excitation radio frequency voltages. In practice, the deviation between an exact solution of the Bloch equations and this rotation relaxation splitting approximation seems negligible. Its computation times are similar to exact solutions without relaxation terms. We apply the developed theory to numerically optimize radio frequency excitation waveforms with T1- and T2-relaxations in several examples.
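A minimal sketch of the rotation-relaxation splitting approximation discussed above, assuming a simple explicit discretization: over each piecewise constant interval the magnetization is first rotated about the effective field and then relaxed toward equilibrium with T1/T2 decay. Field strength, time step, and relaxation times are illustrative values, not those of the paper.

```python
# Rotation-relaxation splitting for piecewise constant fields (illustrative sketch).
import numpy as np

def rotate(M, axis, angle):
    """Rodrigues rotation of the magnetization vector M about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    return (M * np.cos(angle)
            + np.cross(axis, M) * np.sin(angle)
            + axis * np.dot(axis, M) * (1.0 - np.cos(angle)))

def relax(M, dt, T1, T2, M0=1.0):
    Mx, My, Mz = M
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    return np.array([Mx * E2, My * E2, M0 + (Mz - M0) * E1])

M = np.array([0.0, 0.0, 1.0])
dt, T1, T2 = 1.0e-3, 1.0, 0.1                    # s (assumed)
gamma = 2 * np.pi * 42.58e6                      # rad/s/T, proton gyromagnetic ratio
B_eff = np.array([1.0e-6, 0.0, 0.0])             # piecewise constant field (T, assumed)
for _ in range(100):
    angle = gamma * np.linalg.norm(B_eff) * dt
    M = relax(rotate(M, B_eff, angle), dt, T1, T2)
print(M)
```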
Stock market context of the Lévy walks with varying velocity
NASA Astrophysics Data System (ADS)
Kutner, Ryszard
2002-11-01
We developed the most general Lévy walks with varying velocity, called the Weierstrass walks (WW) model for short, by which one can describe both stationary and non-stationary stochastic time series. We considered a non-Brownian random walk where the walker moves, in general, with a velocity that assumes a different constant value between successive turning points, i.e., the velocity is a piecewise constant function. This model is a kind of Lévy walk where we assume a hierarchical, self-similar in a stochastic sense, spatio-temporal representation of the main quantities such as the waiting-time distribution and the sojourn probability density (which are principal quantities in the continuous-time random walk formalism). The WW model makes it possible to analyze both the structure of the Hurst exponent and the power-law behavior of the kurtosis. This structure results from the hierarchical, spatio-temporal coupling between the walker displacement and the corresponding time of the walks. The analysis uses both the fractional diffusion and the super Burnett coefficients. We constructed the diffusion phase diagram which distinguishes regions occupied by classes of different universality. We study only those classes which are characteristic of stationary situations. We thus have a model ready for describing data presented, e.g., in the form of moving averages; this operation is often used for stochastic time series, especially financial ones. The model was inspired by properties of financial time series and tested on empirical data extracted from the Warsaw stock exchange, since it offers an opportunity to study in an unbiased way several features of a stock exchange in its early stage.
Piecewise adiabatic following in non-Hermitian cycling
NASA Astrophysics Data System (ADS)
Gong, Jiangbin; Wang, Qing-hai
2018-05-01
The time evolution of periodically driven non-Hermitian systems is in general nonunitary but can be stable. It is hence of considerable interest to examine the adiabatic following dynamics in periodically driven non-Hermitian systems. We show in this work the possibility of piecewise adiabatic following interrupted by hopping between instantaneous system eigenstates. This phenomenon is first observed in a computational model and then theoretically explained, using an exactly solvable model, in terms of the Stokes phenomenon. In the latter case, the piecewise adiabatic following is shown to be a genuine critical behavior and the precise phase boundary in the parameter space is located. Interestingly, the critical boundary for piecewise adiabatic following is found to be unrelated to the domain for exceptional points. To characterize the adiabatic following dynamics, we also advocate a simple definition of the Aharonov-Anandan (AA) phase for nonunitary cyclic dynamics, which always yields real AA phases. In the slow driving limit, the AA phase reduces to the Berry phase if adiabatic following persists throughout the driving without hopping, but oscillates violently and does not approach any limit in cases of piecewise adiabatic following. This work exposes the rich features of nonunitary dynamics in cases of slow cycling and should stimulate future applications of nonunitary dynamics.
Piecewise Geometric Estimation of a Survival Function.
1985-04-01
Langberg (1982). One of the by-products of the estimation process is an estimate of the failure rate function: here, another issue is raised. It is evident...envisaged as the infinite product probability space that may be constructed in the usual way from the sequence of probability spaces corresponding to the...received 6 MP (a mercaptopurine used in the treatment of leukemia). The ordered remission times in weeks are: 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
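To make the piecewise linear growth idea concrete, the sketch below builds a design matrix with an intercept, a slope, and a change-in-slope (hinge) term at an assumed knot and fits it by ordinary least squares to fabricated scores. This is a single-trajectory simplification of the fixed-effects part of the mixed-effects models used in the study, with an invented knot location and data.

```python
# Single-trajectory piecewise linear growth fit (illustrative only).
import numpy as np

grades = np.arange(0, 9, dtype=float)                 # kindergarten through grade 8
knot = 3.0                                            # assumed turning point
X = np.column_stack([np.ones_like(grades),
                     grades,
                     np.maximum(grades - knot, 0.0)]) # hinge (change-in-slope) term
scores = (20 + 8 * grades - 3 * np.maximum(grades - knot, 0.0)
          + np.random.default_rng(1).normal(0, 1, grades.size))   # fabricated scores
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(beta)   # intercept, pre-knot slope, post-knot change in slope
```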
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2017-01-01
In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection, the first three are piecewise linear, and the last two, piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells, and can be considered as a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the on-going "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counter intuitive since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number of degrees of freedom n per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus, have the same stability and accuracy. Moreover, these schemes are accurate to order 2n-1, which is higher than the expected order of n.
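For concreteness, the sketch below applies a piecewise linear, minmod-limited reconstruction (a MUSCL-type update in the spirit of the schemes above, not a reproduction of Van Leer's scheme I, III or V) to linear advection of a square pulse; grid size, CFL number, and initial data are arbitrary choices meant only to show the reconstruct-then-flux structure.

```python
# Piecewise linear (limited) finite-volume update for u_t + a u_x = 0, a > 0, periodic grid.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

N, a, cfl = 200, 1.0, 0.5
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)            # square pulse

for _ in range(int(0.25 / dt)):
    slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited cell slopes
    u_face = u + 0.5 * (1.0 - cfl) * slope                 # right-face value (upwind)
    flux = a * u_face
    u = u - dt / dx * (flux - np.roll(flux, 1))            # conservative update
print(u.max(), u.min())
```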
An unsteady lifting surface method for single rotation propellers
NASA Technical Reports Server (NTRS)
Williams, Marc H.
1990-01-01
The mathematical formulation of a lifting surface method for evaluating the steady and unsteady loads induced on single rotation propellers by blade vibration and inflow distortion is described. The scheme is based on 3-D linearized compressible aerodynamics and presumes that all disturbances are simple harmonic in time. This approximation leads to a direct linear integral relation between the normal velocity on the blade (which is determined from the blade geometry and motion) and the distribution of pressure difference across the blade. This linear relation is discretized by breaking the blade up into subareas (panels) on which the pressure difference is treated as approximately constant, and constraining the normal velocity at one (control) point on each panel. The piece-wise constant loads can then be determined by Gaussian elimination. The resulting blade loads can be used in performance, stability and forced response predictions for the rotor. Mathematical and numerical aspects of the method are examined. A selection of results obtained from the method is presented. The appendices include various details of the derivation that were felt to be secondary to the main development in Section 1.
Metamaterial devices for molding the flow of diffuse light (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wegener, Martin
2016-09-01
Much of optics in the ballistic regime is about designing devices to mold the flow of light. This task is accomplished via specific spatial distributions of the refractive index or the refractive-index tensor. For light propagating in turbid media, a corresponding design approach has not been applied previously. Here, we review our corresponding recent work in which we design spatial distributions of the light diffusivity or the light-diffusivity tensor to accomplish specific tasks. As an application, we realize cloaking of metal contacts on large-area OLEDs, eliminating the contacts' shadows, thereby homogenizing the diffuse light emission. In more detail, metal contacts on large-area organic light-emitting diodes (OLEDs) are mandatory electrically, but they cast optical shadows, leading to unwanted spatially inhomogeneous diffuse light emission. We show that the contacts can be made invisible either by (i) laminate metamaterials designed by coordinate transformations of the diffusion equation or by (ii) triangular-shaped regions with piecewise constant diffusivity, hence constant concentration of scattering centers. These structures are post-optimized in regard to light throughput by Monte-Carlo ray-tracing simulations and successfully validated by model experiments.
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
Computation of free oscillations of the earth
Buland, Raymond P.; Gilbert, F.
1984-01-01
Although free oscillations of the Earth may be computed by many different methods, numerous practical considerations have led us to use a Rayleigh-Ritz formulation with piecewise cubic Hermite spline basis functions. By treating the resulting banded matrix equation as a generalized algebraic eigenvalue problem, we are able to achieve great accuracy and generality and a high degree of automation at a reasonable cost. © 1984.
NASA Astrophysics Data System (ADS)
Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; Prestridge, Katherine; Adrian, Ronald J.
2018-07-01
We introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe C_D, creating high levels of relative error (1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. We apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samet Y. Kadioglu
2011-12-01
We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge averaged quantities which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, [17] is problematic when it is applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that the fourth order of accuracy in both space and time has been achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com; Institut Matematik Kejuruteraan; Saaban, Azizan, E-mail: azizan.s@uum.edu.my
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all the dispersed radiation values, and these data are very useful for experimental work and solar device development. In addition, complete data observations are needed for modeling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record frequently occur due to several technical problems, mainly contributed by the monitoring device. To address this, missing values are estimated in order to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, such as linear, spline, cubic, and nearest-neighbor interpolation, for dealing with missing values in hourly solar radiation data. It then proposes, as extendable work, an investigation of the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimator tools. As a result, the cubic Bezier and Said-Ball methods perform best compared to the other piecewise imputation techniques.
Sullivan, Amanda L; Kohli, Nidhi; Farnsworth, Elyse M; Sadeh, Shanna; Jones, Leila
2017-09-01
Accurate estimation of developmental trajectories can inform instruction and intervention. We compared the fit of linear, quadratic, and piecewise mixed-effects models of reading development among students with learning disabilities relative to their typically developing peers. We drew an analytic sample of 1,990 students from the nationally representative Early Childhood Longitudinal Study-Kindergarten Cohort of 1998, using reading achievement scores from kindergarten through eighth grade to estimate three models of students' reading growth. The piecewise mixed-effects models provided the best functional form of the students' reading trajectories as indicated by model fit indices. Results showed slightly different trajectories between students with learning disabilities and without disabilities, with varying but divergent rates of growth throughout elementary grades, as well as an increasing gap over time. These results highlight the need for additional research on appropriate methods for modeling reading trajectories and the implications for students' response to instruction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Comparison of methods for estimating the attributable risk in the context of survival analysis.
Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M
2017-01-23
The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating the AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up, especially for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
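As a small illustration of the parametric ingredient mentioned above, the helper below evaluates a survival function under a piecewise constant hazard; the cut points and rates are invented, and this is not the estimation code used in the study.

```python
# Survival under a piecewise constant hazard: S(t) = exp(-sum_k lambda_k * exposure_k(t)).
import numpy as np

def survival(t, cut_points, hazards):
    """cut_points: interval boundaries [0, t1, ..., inf]; hazards: one rate per interval."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    lo, hi = np.array(cut_points[:-1]), np.array(cut_points[1:])
    exposure = np.clip(t[:, None], lo, hi) - lo      # time spent by each t in each interval
    return np.exp(-(exposure * np.array(hazards)).sum(axis=1))

print(survival([0.5, 2.0, 5.0],
               cut_points=[0.0, 1.0, 3.0, np.inf],
               hazards=[0.10, 0.05, 0.02]))
```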
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lee, Hong-Tao
1989-01-01
A new approach for determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb piecewise linear functions of transmission errors that are caused by gear misalignment, and thus reduce gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing, bearing contact and determination of transmission errors for misaligned gears has been developed.
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., k-SVD sparse learning or the traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
Class Identification Efficacy in Piecewise GMM with Unknown Turning Points
ERIC Educational Resources Information Center
Ning, Ling; Luo, Wen
2018-01-01
Piecewise GMM with unknown turning points is a new procedure to investigate heterogeneous subpopulations' growth trajectories consisting of distinct developmental phases. Unlike the conventional PGMM, which relies on theory or experiment design to specify turning points a priori, the new procedure allows for an optimal location of turning points…
Forward Field Computation with OpenMEEG
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2011-01-01
To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip is operational since release 2.0. PMID:21437231
Experiments on Maxwell's fish-eye dynamics in elastic plates
NASA Astrophysics Data System (ADS)
Lefebvre, Gautier; Dubois, Marc; Beauvais, Romain; Achaoui, Younes; Ing, Ros Kiri; Guenneau, Sébastien; Sebbah, Patrick
2015-01-01
We experimentally demonstrate that a Duraluminium thin plate with a thickness profile varying radially in a piecewise constant fashion as h(r) = h(0)(1 + (r/R_max)²)², with h(0) = 0.5 mm, h(R_max) = 2 mm, and R_max = 10 cm, behaves in many ways as Maxwell's fish-eye lens in optics. Its imaging properties for a Gaussian pulse with central frequencies 30 kHz and 60 kHz are very similar to those predicted by ray trajectories (great circles) on a virtual sphere (rays emanating from the North pole meet at the South pole). However, the refocusing time depends on the carrier frequency as a direct consequence of the dispersive nature of flexural waves in thin plates. Importantly, experimental results are in good agreement with finite-difference-time-domain simulations.
A model of the wall boundary layer for ducted propellers
NASA Technical Reports Server (NTRS)
Eversman, Walter; Moehring, Willi
1987-01-01
The objective of the present study is to include a representation of a wall boundary layer in an existing finite element model of the propeller in the wind tunnel environment. The major consideration is that the new formulation should introduce only modest alterations in the numerical model and should still be capable of producing economical predictions of the radiated acoustic field. This is accomplished by using a stepped approximation in which the velocity profile is piecewise constant in layers. In the limit of infinitesimally thin layers, the velocity profile of the stepped approximation coincides with that of the continuous profile. The approach described here could also be useful in modeling the boundary layer in other duct applications, particularly in the computation of the radiated acoustic field for sources contained in a duct.
Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei
2016-09-01
For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piece-wise constant signal. At first, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena including polarized motion orientations and periodic response death are discussed.
Synthetic optimization of air turbine for dental handpieces.
Shi, Z Y; Dong, T
2014-01-01
A synthetic optimization of the Pelton air turbine in dental handpieces, simultaneously considering the power output, compressed air consumption and rotation speed, is implemented by employing a standard design procedure and variable limitations from practical dentistry. The Pareto optimal solution sets acquired by using the Normalized Normal Constraint method are mainly comprised of two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stalls at the lower boundary of the design space, the rotation speed is a constant value within the range recommended in the literature, the blade tip clearance is insensitive to the power output and mass flow rate of compressed air, while the nozzle radius increases with both; the remaining geometric dimensions show the opposite trend to the nozzle radius within their respective "pieces".
Interaction function of oscillating coupled neurons
Dodla, Ramana; Wilson, Charles J.
2013-01-01
Large scale simulations of electrically coupled neuronal oscillators often employ the phase coupled oscillator paradigm to understand and predict network behavior. We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that reasonably good approximation is achieved with four Fourier modes that comprise of both sine and cosine terms. PMID:24229210
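A hedged numerical sketch of the interaction function follows. Under one common convention for electrical coupling, H(φ) = (1/T) ∫ Z(t)[V(t+φ) − V(t)] dt; here Z (phase response curve) and V (voltage trace) are simple piecewise linear stand-ins with invented shapes, and the Fourier decomposition at the end mirrors the discrete modes mentioned in the abstract.

```python
# Numerical interaction function from piecewise linear PRC and voltage stand-ins.
import numpy as np

T, n = 1.0, 1024
t = np.linspace(0.0, T, n, endpoint=False)
Z = np.interp(t, [0.0, 0.3, 0.7, 1.0], [0.0, 1.0, -0.5, 0.0])   # assumed PRC shape
V = np.interp(t, [0.0, 0.1, 0.5, 1.0], [-1.0, 1.0, -0.2, -1.0]) # assumed voltage shape

def H(shift_index):
    # average of Z(t) * (V(t + phi) - V(t)) over one period, phi = shift_index * T / n
    return np.mean(Z * (np.roll(V, -shift_index) - V))

H_vals = np.array([H(s) for s in range(n)])
modes = np.fft.rfft(H_vals)[:5] / n           # first few Fourier modes of H
print(modes)
```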
Bi-cubic interpolation for shift-free pan-sharpening
NASA Astrophysics Data System (ADS)
Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano
2013-12-01
Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image, before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels with odd lengths that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant, i.e. nearest-neighbour, and quadratic kernels, implementing zero- and second-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, which is the most usual case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even lengths may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even lengths. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets, without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
NASA Astrophysics Data System (ADS)
Cinal, M.
2010-01-01
It is found that for closed-l-shell atoms, the exact local exchange potential vx(r) calculated in the exchange-only Kohn-Sham (KS) scheme of the density functional theory (DFT) is very well represented within the region of every atomic shell by each of the suitably shifted potentials obtained with the nonlocal Fock exchange operator for the individual Hartree-Fock (HF) orbitals belonging to this shell. This newly revealed property is not related to the well-known steplike shell structure in the response part of vx(r), but it results from specific relations satisfied by the HF orbital exchange potentials. These relations explain the outstanding proximity of the occupied HF and exchange-only KS orbitals as well as the high quality of the Krieger-Li-Iafrate and localized HF (or, equivalently, common-energy-denominator) approximations to the DFT exchange potential vx(r). Another highly accurate representation of vx(r) is given by the continuous piecewise function built of shell-specific exchange potentials, each defined as the weighted average of the shifted orbital exchange potentials corresponding to a given shell. The constant shifts added to the HF orbital exchange potentials, to map them onto vx(r), are nearly equal to the differences between the energies of the corresponding KS and HF orbitals. It is discussed why these differences are positive and grow when the respective orbital energies become lower for inner orbitals.
Spike solutions in Gierer-Meinhardt model with a time dependent anomaly exponent.
NASA Astrophysics Data System (ADS)
Nec, Yana
2018-01-01
Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r²⟩ ∼ t^γ(t) at times might be approximated by a piecewise constant function, or it can be a continuous function. Variable order differential equations are an emerging mathematical tool with a strong potential to model these systems. However, variable order differential equations are not tractable by the classic differential equations theory. This contribution illustrates how a classic method can be adapted to gain insight into a system of this type. Herein a variable order Gierer-Meinhardt model is posed, a generic reaction-diffusion system of a chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2016-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…
A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory
NASA Astrophysics Data System (ADS)
Stolk, Christiaan C.
2016-06-01
We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
Revealing Relationships among Relevant Climate Variables with Information Theory
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Golera, Anthony; Curry, Charles T.; Huyser, Karen A.; Wheeler, Kevin R.; Rossow, William B.
2005-01-01
The primary objective of the NASA Earth-Sun Exploration Technology Office is to understand the observed Earth climate variability, thus enabling the determination and prediction of the climate's response to both natural and human-induced forcing. We are currently developing a suite of computational tools that will allow researchers to calculate, from data, a variety of information-theoretic quantities such as mutual information, which can be used to identify relationships among climate variables, and transfer entropy, which indicates the possibility of causal interactions. Our tools estimate these quantities along with their associated error bars, the latter of which is critical for describing the degree of uncertainty in the estimates. This work is based upon optimal binning techniques that we have developed for piecewise-constant, histogram-style models of the underlying density functions. Two useful side benefits have already been discovered. The first allows a researcher to determine whether there exist sufficient data to estimate the underlying probability density. The second permits one to determine an acceptable degree of round-off when compressing data for efficient transfer and storage. We also demonstrate how mutual information and transfer entropy can be applied so as to allow researchers not only to identify relations among climate variables, but also to characterize and quantify their possible causal interactions.
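As a rough illustration of the piecewise-constant (histogram) density approach described above, the following Python sketch estimates mutual information between two series from a 2-D histogram. The fixed bin count, variable names, and synthetic data are assumptions for illustration only; an optimal-binning rule, as in the abstract, would choose the bin count from the data.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Estimate I(X;Y) in bits from a piecewise-constant (histogram) density model."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint probability per 2-D bin
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Example: two correlated synthetic series standing in for climate variables.
rng = np.random.default_rng(0)
t = rng.normal(size=20000)
h = 0.8 * t + 0.6 * rng.normal(size=20000)
print(mutual_information(t, h))
```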
NASA Astrophysics Data System (ADS)
Silva, Hector O.; Yunes, Nicolás
2018-01-01
Certain bulk properties of neutron stars, in particular their moment of inertia, rotational quadrupole moment and tidal Love number, when properly normalized, are related to one another in a nearly equation of state independent way. The goal of this paper is to test these relations with extreme equations of state at supranuclear densities constrained to satisfy only a handful of generic, physically sensible conditions. By requiring that the equation of state be (i) barotropic and (ii) its associated speed of sound be real, we construct a piecewise function that matches a tabulated equation of state at low densities, while matching a stiff equation of state parametrized by its sound speed in the high-density region. We show that the I-Love-Q relations hold to 1 percent with this class of equations of state, even in the extreme case where the speed of sound becomes superluminal and independently of the transition density. We also find further support for the interpretation of the I-Love-Q relations as an emergent symmetry due to the nearly constant eccentricity of isodensity contours inside the star. These results reinforce the robustness of the I-Love-Q relations against our current incomplete picture of physics at supranuclear densities, while strengthening our confidence in the applicability of these relations in neutron star astrophysics.
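A minimal Python sketch of the piecewise equation-of-state construction described above, assuming a generic tabulated low-density branch joined continuously to a constant-sound-speed high-density branch; the table values, matching density, and function names are illustrative placeholders, not the authors' equations of state.

```python
import numpy as np

def make_piecewise_eos(eps_tab, p_tab, eps_t, cs2):
    """Return p(eps): tabulated EOS below eps_t, constant-sound-speed EOS above.

    eps is total energy density; cs2 is the squared sound speed in units of c^2,
    so cs2 = 1 is the causal limit and cs2 > 1 would be superluminal.
    """
    p_t = np.interp(eps_t, eps_tab, p_tab)       # pressure at the matching density
    def pressure(eps):
        eps = np.asarray(eps, dtype=float)
        low = np.interp(eps, eps_tab, p_tab)     # tabulated branch
        high = p_t + cs2 * (eps - eps_t)         # stiff, constant-c_s branch
        return np.where(eps <= eps_t, low, high) # continuous at eps_t by construction
    return pressure

# Toy "tabulated" low-density EOS (polytrope-like numbers, illustrative only).
eps_tab = np.linspace(1.0, 500.0, 200)
p_tab = 1e-3 * eps_tab**1.8
p_of_eps = make_piecewise_eos(eps_tab, p_tab, eps_t=300.0, cs2=1.0)
print(p_of_eps([100.0, 300.0, 450.0]))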
NASA Astrophysics Data System (ADS)
Samaille, T.; Colliot, O.; Cuingnet, R.; Jouvent, E.; Chabriat, H.; Dormont, D.; Chupin, M.
2012-02-01
White matter hyperintensities (WMH), commonly seen on FLAIR images in elderly people, are a risk factor for dementia onset and have been associated with motor and cognitive deficits. We present here a method to fully automatically segment WMH from T1 and FLAIR images. Iterative steps of non linear diffusion followed by watershed segmentation were applied on FLAIR images until convergence. Diffusivity function and associated contrast parameter were carefully designed to adapt to WMH segmentation. It resulted in piecewise constant images with enhanced contrast between lesions and surrounding tissues. Selection of WMH areas was based on two characteristics: 1) a threshold automatically computed for intensity selection, 2) main location of areas in white matter. False positive areas were finally removed based on their proximity with cerebrospinal fluid/grey matter interface. Evaluation was performed on 67 patients: 24 with amnestic mild cognitive impairment (MCI), from five different centres, and 43 with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoaraiosis (CADASIL) acquired in a single centre. Results showed excellent volume agreement with manual delineation (Pearson coefficient: r=0.97, p<0.001) and substantial spatial correspondence (Similarity Index: 72%+/-16%). Our method appeared robust to acquisition differences across the centres as well as to pathological variability.
Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data
Clark, Darin P.; Badea, Cristian T.
2014-01-01
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173
Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.
Clark, Darin P; Badea, Cristian T
2014-11-07
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.
Applications of compressed sensing image reconstruction to sparse view phase tomography
NASA Astrophysics Data System (ADS)
Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian
2017-10-01
X-ray phase CT has the potential to give higher contrast in soft tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss of smooth intensity changes due to the piecewise constant nature of the image model. The second CS method is a relatively new approach which uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifact of TV regularization. Both cost functions can be minimized by a very fast iterative reconstruction method. However, in past research it has not been clearly demonstrated how much image quality difference occurs between TV regularization and the nonlinear filter based CS in x-ray phase CT applications. We clarify this issue by applying the two CS approaches to x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear filter based CS outperforms TV regularization in terms of textures and smooth intensity changes.
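To make the piecewise constant bias of TV regularization concrete, here is a minimal Python sketch of smoothed-TV denoising by explicit gradient descent (plain denoising rather than the authors' phase-CT reconstruction); the smoothing parameter, step size, and phantom are illustrative assumptions chosen so the crude explicit scheme stays stable.

```python
import numpy as np

def tv_denoise(img, lam=0.2, step=0.1, iters=300, eps=0.1):
    """Minimise 0.5*||u - img||^2 + lam*sum(sqrt(|grad u|^2 + eps)) by gradient descent.
    eps smooths the TV term; boundaries are handled periodically for brevity."""
    u = img.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)            # gradient of the objective
    return u

# Piecewise constant phantom plus noise: TV pulls the result back towards flat patches.
rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.normal(size=clean.shape)
print("noisy error:", np.abs(noisy - clean).mean(),
      "denoised error:", np.abs(tv_denoise(noisy) - clean).mean())
```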
On a perturbed Sparre Andersen risk model with multi-layer dividend strategy
NASA Astrophysics Data System (ADS)
Yang, Hu; Zhang, Zhimin
2009-10-01
In this paper, we consider a perturbed Sparre Andersen risk model, in which the inter-claim times are generalized Erlang(n) distributed. Under the multi-layer dividend strategy, piece-wise integro-differential equations for the discounted penalty functions are derived, and a recursive approach is applied to express the solutions. A numerical example to calculate the ruin probabilities is given to illustrate the solution procedure.
Imaging Freeform Optical Systems Designed with NURBS Surfaces
2015-12-01
The imaging freeform optical systems described here are designed using non-uniform rational basis-spline (NURBS) ... from piecewise splines. Figure 1 shows a third degree NURBS surface which is formed from cubic basis splines. The surface is defined by the set of ... with mathematical details covered by Piegl and Tiller. Compare this with Gaussian basis functions, where it is challenging to provide smooth ...
Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting
NASA Astrophysics Data System (ADS)
Palenichka, Roman M.; Zaremba, Marek B.
2003-03-01
Accurate and automatic extraction of skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a quite rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of the orthogonal regression fitting. It consists of changing positions of existing vertices according to the minimum of the mean orthogonal distances and, eventually, adding new vertices in-between if a given accuracy is not yet satisfied. Vertices of initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maxima at the centers of the objects of interest.
A hazard rate analysis of fertility using duration data from Malaysia.
Chang, C
1988-01-01
Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, hij, is determined by duration dependence, tij, measured by the waiting time to conception; unmeasured heterogeneity (HETi); the time-invariant variables, Yi (race, cohort, education, age at marriage); and time-varying variables, Xij (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic χ² test for the equality of constant hazard rates across birth orders, allowing time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed effects heterogeneity and the Weibull distribution for the duration of waiting time to conception, the empirical results revealed a negative parity effect, a negative impact from male children, and a positive effect from child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, the socioeconomic variables such as cohort, child mortality, income, and race had significant effects, after controlling for the length of the preceding birth. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
What Can Tobit-Piecewise Regression Tell Us about the Determinants of Household Educational Debt?
ERIC Educational Resources Information Center
Thipbharos, Titirut
2014-01-01
Educational debt as part of household debt remains a problem for Thailand. The significant factors of household characteristics with regard to educational debt are shown by constructing a Tobit-piecewise regression for three different clusters, namely poor, middle and affluent households in Thailand. It was found that household debt is likely to…
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2015-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
ERIC Educational Resources Information Center
Hindman, Annemarie H.; Cromley, Jennifer G.; Skibbe, Lori E.; Miller, Alison L.
2011-01-01
This article reviews the mechanics of conventional and piecewise growth models to demonstrate the unique affordances of each technique for examining the nature and predictors of children's early literacy learning during the transition from preschool through first grade. Using the nationally representative Family and Child Experiences Survey…
Quasi-conformal mapping with genetic algorithms applied to coordinate transformations
NASA Astrophysics Data System (ADS)
González-Matesanz, F. J.; Malpica, J. A.
2006-11-01
In this paper, piecewise conformal mapping for the transformation of geodetic coordinates is studied. An algorithm, which is an improved version of a previous algorithm published by Lippus [2004a. On some properties of piecewise conformal mappings. Eesti NSV Teaduste Akadeemia Toimetised Füüsika-Matemaatika 53, 92-98; 2004b. Transformation of coordinates using piecewise conformal mapping. Journal of Geodesy 78 (1-2), 40], is presented; the improvement comes from using a genetic algorithm to partition the complex plane into convex polygons, whereas the original one did so manually. As a case study, the method is applied to the transformation between the Spanish datum ED50 and ETRS89, and both its advantages and disadvantages are discussed herein.
LETTER TO THE EDITOR: Fractal diffusion coefficient from dynamical zeta functions
NASA Astrophysics Data System (ADS)
Cristadoro, Giampaolo
2006-03-01
Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero.
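The parameter dependence of such a diffusion coefficient can also be probed by brute-force iteration. The Python sketch below assumes the standard antisymmetric lifted piecewise linear map with slope a (a common illustrative choice, not necessarily the exact map of the paper) and estimates D from the mean square displacement of an ensemble.

```python
import numpy as np

def lifted_map(x, a):
    """One step of a piecewise linear map of the real line, lifted cell by cell.

    On the unit cell: M(x) = a*x for x < 1/2 and a*x + 1 - a for x >= 1/2,
    extended by M(x + n) = M(x) + n. For slopes a > 2 particles can leave their
    cell, producing deterministic diffusion."""
    n = np.floor(x)
    xi = x - n
    y = np.where(xi < 0.5, a * xi, a * xi + 1.0 - a)
    return y + n

def diffusion_coefficient(a, n_particles=50000, n_steps=1000, seed=0):
    """Estimate D = <(x_t - x_0)^2> / (2 t) by direct iteration of an ensemble."""
    rng = np.random.default_rng(seed)
    x0 = rng.random(n_particles)          # uniform ensemble in the unit cell
    x = x0.copy()
    for _ in range(n_steps):
        x = lifted_map(x, a)
    return np.mean((x - x0) ** 2) / (2.0 * n_steps)

# D(a) depends on the slope in a fractal way; sample it at a few points.
for a in (2.5, 3.0, 3.5, 4.0):
    print(a, diffusion_coefficient(a))
```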
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
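A minimal sketch of the point-mapping step, assuming scikit-image's PiecewiseAffineTransform is available; the outline control points and lesion coordinates below are placeholders for illustration, not data from the mammography database.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform  # assumed available

# Control points on the standardised (average) breast outline and on the outline
# of one specific mammogram; coordinates here are illustrative placeholders.
avg_outline = np.array([[0, 0], [100, 0], [100, 200], [0, 200], [50, 100]], float)
this_outline = np.array([[5, 2], [95, 8], [110, 190], [-3, 205], [55, 98]], float)

# Estimate the piecewise affine map from the average outline to this mammogram,
# then push sampled lesion locations (in standardised coordinates) through it.
tform = PiecewiseAffineTransform()
tform.estimate(avg_outline, this_outline)

sampled_lesions = np.array([[40.0, 60.0], [70.0, 150.0]])  # e.g. drawn from the 4-D PDF
print(tform(sampled_lesions))
```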
Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad
2015-06-01
Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrated many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS
MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN
2013-01-01
Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibilities in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C^1 or Lipschitz continuous interfaces associated with a C^1 or H^2 continuous solution. PMID:24072935
BLUES function method in computational physics
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
Dynamics and stability of a 2D ideal vortex under external strain
NASA Astrophysics Data System (ADS)
Hurst, N. C.; Danielson, J. R.; Dubin, D. H. E.; Surko, C. M.
2017-11-01
The behavior of an initially axisymmetric 2D ideal vortex under an externally imposed strain flow is studied experimentally. The experiments are carried out using electron plasmas confined in a Penning-Malmberg trap; here, the dynamics of the plasma density transverse to the field are directly analogous to the dynamics of vorticity in a 2D ideal fluid. An external strain flow is applied using boundary conditions in a way that is consistent with 2D fluid dynamics. Data are compared to predictions from a theory assuming a piecewise constant elliptical vorticity distribution. Excellent agreement is found for quasi-flat profiles, whereas the dynamics of smooth profiles feature modified stability limits and inviscid damping of periodic elliptical distortions. This work supported by U.S. DOE Grants DE-SC0002451 and DE-SC0016532, and NSF Grant PHY-1414570.
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.
Gradient Optimization for Analytic conTrols - GOAT
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances showed that superconducting qubits can be controlled with an impressive accuracy. However, most of the standard optimal control algorithms are not designed to manage such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: the Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise constant approximation of the control pulse used by standard algorithms. This allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation or the natural route to optimize the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
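A toy sketch of the idea for a single qubit and a single pulse parameter, assuming SciPy's solve_ivp is available: the propagator and its parameter derivative are integrated jointly with an adaptive ODE solver, so no piecewise constant discretisation of the pulse is needed. The Hamiltonian, Gaussian pulse shape, and target gate are illustrative assumptions, not the GOAT implementation itself.

```python
import numpy as np
from scipy.integrate import solve_ivp  # assumed available

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def pulse(t, theta):
    # Analytic control: Gaussian envelope whose amplitude theta is the only parameter.
    return theta * np.exp(-((t - 5.0) ** 2) / 2.0)

def dpulse_dtheta(t, theta):
    return np.exp(-((t - 5.0) ** 2) / 2.0)

def rhs(t, y, theta):
    # Jointly propagate U and dU/dtheta; the pulse is never chopped into constant pieces.
    U, dU = y[:4].reshape(2, 2), y[4:].reshape(2, 2)
    H = 0.5 * pulse(t, theta) * sx
    dH = 0.5 * dpulse_dtheta(t, theta) * sx
    return np.concatenate(((-1j * H @ U).ravel(),
                           (-1j * (dH @ U + H @ dU)).ravel()))

def infidelity_and_gradient(theta, target):
    y0 = np.concatenate((np.eye(2, dtype=complex).ravel(),
                         np.zeros(4, dtype=complex)))
    sol = solve_ivp(rhs, (0.0, 10.0), y0, args=(theta,), rtol=1e-9, atol=1e-9)
    U = sol.y[:4, -1].reshape(2, 2)
    dU = sol.y[4:, -1].reshape(2, 2)
    g = np.trace(target.conj().T @ U) / 2.0       # gate overlap
    dg = np.trace(target.conj().T @ dU) / 2.0
    return 1.0 - abs(g) ** 2, -2.0 * np.real(np.conj(g) * dg)

# Gradient descent on the single pulse parameter towards an X gate (up to global phase).
theta = 0.5
for _ in range(30):
    infid, grad = infidelity_and_gradient(theta, sx)
    theta -= 0.5 * grad
print("amplitude:", theta, "infidelity:", infid)
```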
NASA Technical Reports Server (NTRS)
Maliassov, Serguei
1996-01-01
In this paper an algebraic substructuring preconditioner is considered for nonconforming finite element approximations of second order elliptic problems in 3D domains with a piecewise constant diffusion coefficient. Using a substructuring idea and a block Gauss elimination, part of the unknowns is eliminated and the Schur complement obtained is preconditioned by a spectrally equivalent very sparse matrix. In the case of quasiuniform tetrahedral mesh an appropriate algebraic multigrid solver can be used to solve the problem with this matrix. Explicit estimates of condition numbers and implementation algorithms are established for the constructed preconditioner. It is shown that the condition number of the preconditioned matrix does not depend on either the mesh step size or the jump of the coefficient. Finally, numerical experiments are presented to illustrate the theory being developed.
Evaluation of trends in wheat yield models
NASA Technical Reports Server (NTRS)
Ferguson, M. C.
1982-01-01
Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R(2) is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
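A minimal sketch of the "dependent" (piecewise continuous) trend model using a hinge regressor fitted by least squares; the breakpoint year and the synthetic yields are illustrative, and the meteorological regressors, latent root regression, and ridge adjustments of the actual models are omitted.

```python
import numpy as np

def fit_piecewise_trend(year, yields, break_year):
    """Least-squares fit of a continuous two-piece linear trend: the slope changes
    at break_year but the fitted line is continuous there."""
    t = year - year.min()
    hinge = np.maximum(year - break_year, 0.0)       # extra slope after the break
    X = np.column_stack([np.ones_like(t), t, hinge])
    coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return coef, X @ coef                             # (intercept, slope, slope change), fit

# Synthetic yields: rising trend to 1961, roughly constant afterwards, plus noise.
years = np.arange(1932, 1977, dtype=float)
rng = np.random.default_rng(2)
true = 10 + 0.3 * (years - 1932) - 0.3 * np.maximum(years - 1961, 0)
coef, fitted = fit_piecewise_trend(years, true + rng.normal(0, 0.5, years.size), 1961.0)
print(coef)
```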
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.
In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe C_D, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.
Resonant activation in piecewise linear asymmetric potentials.
Fiasconaro, Alessandro; Spagnolo, Bernardo
2011-04-01
This work analyzes numerically the role played by the asymmetry of a piecewise linear potential, in the presence of both a Gaussian white noise and a dichotomous noise, on the resonant activation phenomenon. The features of the asymmetry of the potential barrier arise by investigating the stochastic transitions far behind the potential maximum, from the initial well to the bottom of the adjacent potential well. Because of the asymmetry of the potential profile together with the random external force uniform in space, we find, for the different asymmetries: (1) an inversion of the curves of the mean first passage time in the resonant region of the correlation time τ of the dichotomous noise, for low thermal noise intensities; (2) a maximum of the mean velocity of the Brownian particle as a function of τ; and (3) an inversion of the curves of the mean velocity and a very weak current reversal in the miniratchet system obtained with the asymmetrical potential profiles investigated. An inversion of the mean first passage time curves is also observed by varying the amplitude of the dichotomous noise, behavior confirmed by recent experiments. ©2011 American Physical Society
A study of different modeling choices for simulating platelets within the immersed boundary method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations – radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations – for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost and provide an engineering trade-off strategy for when and why one might select to employ these different representations. PMID:23585704
Affine connection form of Regge calculus
NASA Astrophysics Data System (ADS)
Khatsymovsky, V. M.
2016-12-01
The Regge action is represented analogously to how the Palatini action for general relativity (GR), as some functional of the metric and a general connection as independent variables, represents the Einstein-Hilbert action. The piecewise flat (or simplicial) spacetime of Regge calculus is equipped with some world coordinates and some piecewise affine metric which is completely defined by the set of edge lengths and the world coordinates of the vertices. The conjugate variables are the general nondegenerate matrices on the three-simplices which play the role of a general discrete connection. Our previous result on some representation of the Regge calculus action in terms of the local Euclidean (Minkowski) frame vectors and orthogonal connection matrices as independent variables is somewhat modified for the considered case of the general linear group GL(4, R) of the connection matrices. As a result, we have some action invariant w.r.t. arbitrary change of coordinates of the vertices (and related GL(4, R) transformations in the four-simplices). Excluding the GL(4, R) connection from this action via the equations of motion, we have exactly the Regge action for the considered spacetime.
Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Steinwolf, Alexander
2005-01-01
The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well both for the smooth and stepped conditions. The piecewise-Gaussian approximation can be additionally regarded for convenience in usage after the model is constructed.
NASA Astrophysics Data System (ADS)
Marchenko, I. G.; Marchenko, I. I.; Zhiglo, A. V.
2018-01-01
We present a study of the diffusion enhancement of underdamped Brownian particles in a one-dimensional symmetric space-periodic potential due to external symmetric time-periodic driving with zero mean. We show that the diffusivity can be enhanced by many orders of magnitude at an appropriate choice of the driving amplitude and frequency. The diffusivity demonstrates abnormal (decreasing) temperature dependence at the driving amplitudes exceeding a certain value. At any fixed driving frequency Ω normal temperature dependence of the diffusivity is restored at low enough temperatures, T
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
Generalized cable equation model for myelinated nerve fiber.
Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph
2005-10-01
Herein, the well-known cable equation for nonmyelinated axon model is extended analytically for myelinated axon formulation. The myelinated membrane conductivity is represented via the Fourier series expansion. The classical cable equation is thereby modified into a linear second order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, uniformly converges provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of piecewise constant membrane conductivity profile, thereby resulting in an explicit closed form expression for the transmembrane potential in terms of trigonometric functions. The Floquet's modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the periodic membrane point-wise passivity constraint is properly modified. Indeed, the modified condition, enforcing the periodic membrane passivity constraint on the average conductivity only leads, for the first time, to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber, is verified herein through a rigorous Green's function formulation and numerical simulations for transmembrane potential induced in three-dimensional myelinated cylindrical cell. It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
Direct AUC optimization of regulatory motifs.
Zhu, Lin; Zhang, Hong-Bo; Huang, De-Shuang
2017-07-15
The discovery of transcription factor binding site (TFBS) motifs is essential for untangling the complex mechanism of genetic variation under different developmental and environmental conditions. Among the huge number of computational approaches for de novo identification of TFBS motifs, discriminative motif learning (DML) methods have been proven to be promising for harnessing the discovery power of the huge amount of accumulated high-throughput binding data. However, they have to sacrifice accuracy for speed and could fail to fully utilize the information of the input sequences. We propose a novel algorithm called CDAUC for optimizing DML-learned motifs based on the area under the receiver-operating characteristic curve (AUC) criterion, which has been widely used in the literature to evaluate the significance of extracted motifs. We show that when the considered AUC loss function is optimized in a coordinate-wise manner, the cost function of each resultant sub-problem is a piece-wise constant function, whose optimal value can be found exactly and efficiently. Further, a key step of each iteration of CDAUC can be efficiently solved as a computational geometry problem. Experimental results on real world high-throughput datasets illustrate that CDAUC outperforms competing methods for refining DML motifs, while being one order of magnitude faster. Meanwhile, preliminary results also show that CDAUC may also be useful for improving the interpretability of convolutional kernels generated by the emerging deep learning approaches for predicting TF sequence specificities. CDAUC is available at: https://drive.google.com/drive/folders/0BxOW5MtIZbJjNFpCeHlBVWJHeW8 . dshuang@tongji.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
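To illustrate why coordinate-wise AUC optimization yields piecewise constant sub-problems, the Python sketch below computes the exact AUC via the Mann-Whitney rank statistic and scans a single weight of a linear score: the AUC changes only when the ordering of scores changes, so as a function of that one weight it is a step function. The data and scoring model are illustrative, not the CDAUC implementation.

```python
import numpy as np

def auc(scores, labels):
    """Exact AUC via the Mann-Whitney rank statistic (ties handled by midranks)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):              # average ranks over tied scores
        m = scores == s
        ranks[m] = ranks[m].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# AUC of the linear score w*x1 + x2, viewed as a function of the single weight w,
# is piecewise constant, so each coordinate-wise sub-problem can be solved exactly.
rng = np.random.default_rng(3)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + 0.5 * x[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(int)
for w in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(w, auc(w * x[:, 0] + x[:, 1], y))
```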
ERIC Educational Resources Information Center
Zvoch, Keith
2016-01-01
Piecewise growth models (PGMs) were used to estimate and model changes in the preliteracy skill development of kindergartners in a moderately sized school district in the Pacific Northwest. PGMs were applied to interrupted time-series (ITS) data that arose within the context of a response-to-intervention (RtI) instructional framework. During the…
Zero-lag synchronization in coupled time-delayed piecewise linear electronic circuits
NASA Astrophysics Data System (ADS)
Suresh, R.; Srinivasan, K.; Senthilkumar, D. V.; Raja Mohamed, I.; Murali, K.; Lakshmanan, M.; Kurths, J.
2013-07-01
We investigate and report an experimental confirmation of zero-lag synchronization (ZLS) in a system of three coupled time-delayed piecewise linear electronic circuits via dynamical relaying with different coupling configurations, namely mutual and subsystem coupling configurations. We have observed that when there is a feedback between the central unit (relay unit) and at least one of the outer units, ZLS occurs in the two outer units whereas the central and outer units exhibit inverse phase synchronization (IPS). We find that in the case of mutual coupling configuration ZLS occurs both in periodic and hyperchaotic regimes, while in the subsystem coupling configuration it occurs only in the hyperchaotic regime. Snapshots of the time evolution of outer circuits as observed from the oscilloscope confirm the occurrence of ZLS experimentally. The quality of ZLS is numerically verified by correlation coefficient and similarity function measures. Further, the transition to ZLS is verified from the changes in the largest Lyapunov exponents and the correlation coefficient as a function of the coupling strength. IPS is experimentally confirmed using time series plots and also can be visualized using the concept of localized sets which are also corroborated by numerical simulations. In addition, we have calculated the correlation of probability of recurrence to quantify the phase coherence. We have also analytically derived a sufficient condition for the stability of ZLS using the Krasovskii-Lyapunov theory.
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variable, while finite element methods emphasize the discretization of the dependent variable (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves, which allows the multidimensional interpolation and fine-tuning stages of the path tracking algorithm to be replaced with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157
Limit cycles via higher order perturbations for some piecewise differential systems
NASA Astrophysics Data System (ADS)
Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan
2018-05-01
A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x′, y′) = (−y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn − 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems providing better upper bounds for higher order perturbation in ε, showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.
Low Dose Radiation Cancer Risks: Epidemiological and Toxicological Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
David G. Hoel, PhD
2012-04-19
The basic purpose of this one year research grant was to extend the two-stage clonal expansion (TSCE) model of carcinogenesis to exposures other than the usual single acute exposure. The two-stage clonal expansion model of carcinogenesis incorporates the biological process of carcinogenesis, which involves two mutations and the clonal proliferation of the intermediate cells, in a stochastic, mathematical way. The current TSCE model serves a general purpose of acute exposure models but requires numerical computation of both the survival and hazard functions. The primary objective of this research project was to develop the analytical expressions for the survival function and the hazard function of the occurrence of the first cancer cell for acute, continuous and multiple exposure cases within the framework of the piece-wise constant parameter two-stage clonal expansion model of carcinogenesis. For acute exposure and multiple exposures of acute series, it is either only allowed to have the first mutation rate vary with the dose, or to have all the parameters be dose dependent; for multiple continuous exposures, all the parameters are allowed to vary with the dose. With these analytical functions, it becomes easy to evaluate the risks of cancer and to deal with the various exposure patterns in cancer risk assessment. A second objective was to apply the TSCE model with varying continuous exposures from the cancer studies of inhaled plutonium in beagle dogs. Using step functions to estimate the retention functions of the pulmonary exposure of plutonium, the multiple exposure versions of the TSCE model were to be used to estimate the beagle dog lung cancer risks. The mathematical equations of the multiple exposure versions of the TSCE model were developed. A draft manuscript, which is attached, provides the results of this mathematical work. The application work using the beagle dog data from plutonium exposure has not been completed because the research project did not continue beyond its first year.
Liu, Yan; Ma, Jianhua; Zhang, Hao; Wang, Jing; Liang, Zhengrong
2014-01-01
Background: The negative effects of X-ray exposure, such as the induction of genetic and cancerous diseases, have attracted increasing attention. Objective: This paper aims to investigate a penalized re-weighted least-square (PRWLS) strategy for low-mAs X-ray computed tomography image reconstruction by incorporating an adaptive weighted total variation (AwTV) penalty term and a noise variance model of projection data. Methods: An AwTV penalty is introduced in the objective function by considering both the piecewise constant property and the local nearby intensity similarity of the desired image. Furthermore, the weight of the data fidelity term in the objective function is determined by our recent study on modeling variance estimation of projection data in the presence of electronic background noise. Results: The presented AwTV-PRWLS algorithm can achieve the highest full-width-at-half-maximum (FWHM) measurement for data conditions of (1) full-view 10 mA acquisition and (2) sparse-view 80 mA acquisition. In a comparison between the AwTV/TV-PRWLS strategies and the previously reported AwTV/TV-projection onto convex sets (AwTV/TV-POCS) approaches, the former can gain in terms of FWHM for data condition (1), but cannot gain for data condition (2). Conclusions: In the case of full-view 10 mA projection data, the presented AwTV-PRWLS shows potential improvement. However, in the case of sparse-view 80 mA projection data, the AwTV/TV-POCS shows an advantage over the PRWLS strategies. PMID:25080113
Step-by-step integration for fractional operators
NASA Astrophysics Data System (ADS)
Colinas-Armijo, Natalia; Di Paola, Mario
2018-06-01
In this paper, an approach based on the definition of the Riemann-Liouville fractional operators is proposed in order to provide a different discretisation technique as an alternative to the Grünwald-Letnikov operators. The proposed Riemann-Liouville discretisation consists of performing step-by-step integration based upon the discretisation of the function f(t). It has been shown that, as f(t) is discretised as a stepwise or piecewise function, the Riemann-Liouville fractional integral and derivative are governed by operators very similar to the Grünwald-Letnikov operators. In order to show the accuracy and capabilities of the proposed Riemann-Liouville discretisation technique and the Grünwald-Letnikov discrete operators, both techniques have been applied to: unit step functions, exponential functions and sample functions of white noise.
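A minimal numpy sketch of the Grünwald-Letnikov discrete operator referred to above, using the usual recursive weights and checked against the known fractional derivative of f(t) = t; the grid size and order are illustrative choices, not the paper's test cases.

```python
import numpy as np
from math import gamma

def grunwald_letnikov(f_vals, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha on a uniform grid.

    D^alpha f(t_n) ~ h**(-alpha) * sum_k w_k * f(t_{n-k}), with the weights
    generated by the recursion w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    n = len(f_vals)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[:i + 1], f_vals[i::-1]) / h ** alpha
    return out

# Check: D^0.5 of f(t) = t is t^0.5 / Gamma(1.5) at t = 1.
h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
alpha = 0.5
num = grunwald_letnikov(t, alpha, h)
print(num[-1], t[-1] ** (1 - alpha) / gamma(2 - alpha))
```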
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these type of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complements elimination.
In-flight alignment using H ∞ filter for strapdown INS on aircraft.
Pei, Fu-Jun; Liu, Xuan; Zhu, Li
2014-01-01
In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with the disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed by piece-wise constant system (PWCS) theory and the observable degree is computed by singular value decomposition (SVD) theory. It is demonstrated that the system is completely observable, and all the system state parameters can be estimated by an optimal filter. Then an H∞ filter is designed to resolve the uncertainty of measurement noise. The simulation results demonstrate that the proposed algorithm can reach better accuracy under dynamic disturbance conditions.
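A minimal sketch of the observability check for one constant segment of a piece-wise constant system model: stack the observability matrix and inspect its rank and singular values (the SVD-based observable degree). The matrices below are illustrative placeholders, not the SINS error model; a full PWCS analysis stacks such blocks over all segments of the flight.

```python
import numpy as np

def observability_matrix(F, H):
    """Stack [H; H F; ...; H F^(n-1)] for one constant segment of a PWCS model."""
    n = F.shape[0]
    blocks, M = [], H.copy()
    for _ in range(n):
        blocks.append(M)
        M = M @ F
    return np.vstack(blocks)

# Illustrative 3-state segment: two states measured directly, one seen only via coupling.
F = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

Q = observability_matrix(F, H)
sv = np.linalg.svd(Q, compute_uv=False)
print("rank:", np.linalg.matrix_rank(Q), "of", F.shape[0])
print("singular values (observable degree indicators):", sv)
```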
Maxwell’s demon in the quantum-Zeno regime and beyond
NASA Astrophysics Data System (ADS)
Engelhardt, G.; Schaller, G.
2018-02-01
The long-standing paradigm of Maxwell’s demon is still a frequently investigated issue today, one that continues to provide interesting insights into basic physical questions. Considering a single-electron transistor, where we implement a Maxwell demon by a piecewise-constant feedback protocol, we investigate quantum implications of the Maxwell demon. To this end, we harness a dynamical coarse-graining method, which provides a convenient and accurate description of the system dynamics even for high measurement rates. In doing so, we are able to investigate the Maxwell demon in a quantum-Zeno regime leading to transport blockade. We argue that there is a measurement rate providing an optimal performance. Moreover, we find that besides building up a chemical gradient, there can also be a regime where the feedback loop additionally extracts energy, which results from the energy non-conserving character of the projective measurement.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
The crack problem in bonded nonhomogeneous materials
NASA Technical Reports Server (NTRS)
Erdogan, Fazil; Kaya, A. C.; Joseph, P. F.
1988-01-01
The plane elasticity problem for two bonded half planes containing a crack perpendicular to the interface was considered. The effect of very steep variations in the material properties near the diffusion plane on the singular behavior of the stresses and stress intensity factors was studied. The two materials were thus assumed to have the shear moduli μ₀ and μ₀ exp(βx), with x = 0 being the diffusion plane. Of particular interest was the examination of the nature of the stress singularity near a crack tip terminating at the interface, where the shear modulus has a discontinuous derivative. The results show that, unlike the crack problem in piecewise homogeneous materials, for which the singularity is of the form r^(-α), 0 < α < 1, in this problem the stresses have a standard square-root singularity regardless of the location of the crack tip. The nonhomogeneity constant β has, however, considerable influence on the stress intensity factors.
The crack problem in bonded nonhomogeneous materials
NASA Technical Reports Server (NTRS)
Erdogan, F.; Joseph, P. F.; Kaya, A. C.
1991-01-01
The plane elasticity problem for two bonded half planes containing a crack perpendicular to the interface was considered. The effect of very steep variations in the material properties near the diffusion plane on the singular behavior of the stresses and stress intensity factors was studied. The two materials were thus assumed to have the shear moduli μ₀ and μ₀ exp(βx), with x = 0 being the diffusion plane. Of particular interest was the examination of the nature of the stress singularity near a crack tip terminating at the interface, where the shear modulus has a discontinuous derivative. The results show that, unlike the crack problem in piecewise homogeneous materials, for which the singularity is of the form r^(-α), 0 < α < 1, in this problem the stresses have a standard square-root singularity regardless of the location of the crack tip. The nonhomogeneity constant β has, however, considerable influence on the stress intensity factors.
NASA Technical Reports Server (NTRS)
Mirels, Harold
1959-01-01
A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
Equilibrium Conformations of Concentric-tube Continuum Robots
Rucker, D. Caleb; Webster, Robert J.; Chirikjian, Gregory S.; Cowan, Noah J.
2013-01-01
Robots consisting of several concentric, preshaped, elastic tubes can work dexterously in narrow, constrained, and/or winding spaces, as are commonly found in minimally invasive surgery. Previous models of these “active cannulas” assume piecewise constant precurvature of component tubes and neglect torsion in curved sections of the device. In this paper we develop a new coordinate-free energy formulation that accounts for general preshaping of an arbitrary number of component tubes, and which explicitly includes both bending and torsion throughout the device. We show that previously reported models are special cases of our formulation, and then explore in detail the implications of torsional flexibility for the special case of two tubes. Experiments demonstrate that this framework is more descriptive of physical prototype behavior than previous models; it reduces model prediction error by 82% over the calibrated bending-only model, and 17% over the calibrated transmissional torsion model in a set of experiments. PMID:25125773
Adaptive control and noise suppression by a variable-gain gradient algorithm
NASA Technical Reports Server (NTRS)
Merhav, S. J.; Mehta, R. S.
1987-01-01
An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
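As a rough illustration of a variable gain in a normalized LMS gradient algorithm, the sketch below identifies an FIR plant whose coefficients jump halfway through the data. The gain schedule (step size driven by a smoothed error power) is an assumption for illustration, not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def variable_gain_nlms(x, d, order, mu_max=1.0, mu_min=0.05, eps=1e-6):
    """Normalized LMS whose step size shrinks as the smoothed error power
    decreases (one plausible 'variable gain' schedule, not the paper's exact rule)."""
    w = np.zeros(order)
    err_pow = 1.0
    for k in range(order - 1, len(x)):
        u = x[k - order + 1:k + 1][::-1]     # most recent sample first
        e = d[k] - w @ u
        err_pow = 0.99 * err_pow + 0.01 * e * e
        mu = mu_min + (mu_max - mu_min) * err_pow / (err_pow + 1.0)
        w += mu * e * u / (u @ u + eps)      # normalized gradient step
    return w

# Identify an FIR plant whose coefficients jump halfway through (piecewise constant).
N, order = 4000, 3
x = rng.standard_normal(N)
h1, h2 = np.array([0.5, -0.3, 0.1]), np.array([-0.2, 0.4, 0.3])
d = np.zeros(N)
for k in range(order - 1, N):
    taps = h1 if k < N // 2 else h2
    d[k] = taps @ x[k - order + 1:k + 1][::-1]

print("estimated taps after the jump:", np.round(variable_gain_nlms(x, d, order), 3))
```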
Total Variation Denoising and Support Localization of the Gradient
NASA Astrophysics Data System (ADS)
Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.
2016-10-01
This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges, but at the same time might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes and, in some cases, an upper bound on the convergence rate is given.
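For readers who want to see the piecewise-constant restoration (and the staircasing behavior) directly, here is a bare-bones 1-D ROF/TV denoiser using a majorization-minimization (iteratively reweighted least squares) loop. Practical solvers use taut-string or dual projection methods instead; the signal and parameters below are illustrative only.

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, iters=50, eps=1e-6):
    """Majorization-minimization (iteratively reweighted least squares) for the
    1-D ROF problem  min_x 0.5*||x - y||^2 + lam*sum|x_{i+1} - x_i|.
    A compact sketch; production codes use taut-string or dual projections."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)               # (n-1) x n finite-difference matrix
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / np.sqrt(np.diff(x) ** 2 + eps)  # reweighting of each jump
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x

rng = np.random.default_rng(1)
truth = np.repeat([0.0, 1.0, 0.3], 100)           # piecewise-constant signal
noisy = truth + 0.1 * rng.standard_normal(truth.size)
denoised = tv_denoise_1d(noisy, lam=1.0)
print("RMSE noisy   :", np.sqrt(np.mean((noisy - truth) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - truth) ** 2)))
```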
Shear waves in inhomogeneous, compressible fluids in a gravity field.
Godin, Oleg A
2014-03-01
While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of the linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves can also be supported by moving fluids as well as by quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.
GPS-PWV Estimation and Analysis for CGPS Sites Operating in Mexico
NASA Astrophysics Data System (ADS)
Gutierrez, O.; Vazquez, G. E.; Bennett, R. A.; Adams, D. K.
2014-12-01
Eighty permanent Global Positioning System (GPS) tracking stations belonging to several networks spanning Mexico, intended for diverse purposes and applications, were used to estimate precipitable water vapor (PWV) from measurement series covering the period 2000-2014. We extracted the GPS-PWV from the ionosphere-free double-difference carrier phase observations, processed using the GAMIT software. The GPS data were processed with a 30 s sampling rate, a 15-degree cutoff angle, and precise GPS orbits disseminated by the IGS. The time-varying part of the zenith wet delay was estimated using the Global Mapping Function (GMF), while the constant part was evaluated using the Niell tropospheric model. The data reduction to compute the zenith wet delay follows a stepwise piecewise-linear strategy, and the delay is subsequently transformed to PWV estimates every 2 h. Although previous isolated studies of PWV estimation in Mexico exist, this study is an attempt to perform a more complete and comprehensive analysis of PWV estimation throughout the Mexican territory. Our resulting GPS-based PWV were compared to available PWV values for 30 stations that operate in Mexico and report PWV to Suominet. This comparison revealed differences of 1 to 2 mm between the GPS-PWV solution and the PWV reported by Suominet. Accurate values of GPS-PWV will help enhance Mexico's ability to investigate water vapor advection, convective and frontal rainfall, and long-term climate variability.
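A minimal sketch of the zenith-wet-delay-to-PWV conversion step mentioned above, using the dimensionless factor Π(Tm) with refractivity constants commonly quoted in GPS meteorology; the paper's exact constants and mean-temperature model are not stated in the abstract, so the values here are illustrative assumptions.

```python
import numpy as np

# Refractivity constants in SI-like units (K/Pa and K^2/Pa), roughly the values
# quoted in the GPS meteorology literature (~22.1 K/hPa and ~3.739e5 K^2/hPa).
K2_PRIME = 0.221      # K / Pa
K3 = 3.739e3          # K^2 / Pa
RV = 461.5            # J / (kg K), specific gas constant of water vapor
RHO_W = 1000.0        # kg / m^3, density of liquid water

def pwv_from_zwd(zwd_mm, tm_kelvin):
    """Convert zenith wet delay to precipitable water vapor via the standard
    dimensionless factor Pi(Tm); Tm is the water-vapor-weighted mean temperature."""
    pi = 1.0e6 / (RHO_W * RV * (K2_PRIME + K3 / tm_kelvin))
    return pi * zwd_mm

# Example: a 100 mm zenith wet delay at Tm = 273 K gives roughly 15 mm of PWV.
print(round(pwv_from_zwd(100.0, 273.0), 2))
```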
NASA Astrophysics Data System (ADS)
Arce, J. C.; Perdomo-Ortiz, A.; Zambrano, M. L.; Mujica-Martínez, C.
2011-03-01
A conceptually appealing and computationally economical coarse-grained molecular-orbital (MO) theory for extended quasilinear molecular heterostructures is presented. The formalism, which is based on a straightforward adaptation, by including explicitly the vacuum, of the envelope-function approximation widely employed in solid-state physics, leads to a mapping of the three-dimensional single-particle eigenvalue equations into simple one-dimensional hole and electron Schrödinger-like equations with piecewise-constant effective potentials and masses. The eigenfunctions of these equations are envelope MO's in which the short-wavelength oscillations present in the full MO's, associated with the atomistic details of the molecular potential, are smoothed out automatically. The approach is illustrated by calculating the envelope MO's of high-lying occupied and low-lying virtual π states in prototypical nanometric heterostructures constituted by oligomers of polyacetylene and polydiacetylene. Comparison with atomistic electronic-structure calculations reveals that the envelope-MO energies agree very well with the energies of the π MO's and that the envelope MO's describe precisely the long-wavelength variations of the π MO's. This envelope MO theory, which is generalizable to extended systems of any dimensionality, is seen to provide a useful tool for the qualitative interpretation and quantitative prediction of the single-particle quantum states in mesoscopic molecular structures and the design of nanometric molecular devices with tailored energy levels and wavefunctions.
NASA Astrophysics Data System (ADS)
Anderson, Daniel M.; McLaughlin, Richard M.; Miller, Cass T.
2018-02-01
We examine a mathematical model of one-dimensional draining of a fluid through a periodically-layered porous medium. A porous medium initially saturated with a fluid of high density is assumed to drain out of its bottom, with a second, lighter fluid replacing the draining fluid. We assume that the draining layer is sufficiently dense that the dynamics of the lighter fluid can be neglected with respect to the dynamics of the heavier draining fluid, and that the height of the draining fluid, represented as a free boundary in the model, evolves in time. In this context, we neglect interfacial tension effects at the boundary between the two fluids. We show that this problem admits an exact solution. Our primary objective is to develop a homogenization theory in which we find not only leading-order, or effective, trends but also capture higher-order corrections to these effective draining rates. The approximate solution obtained by this homogenization theory is compared to the exact solution for two cases: (1) the permeability of the porous medium varies smoothly but rapidly and (2) the permeability varies as a piecewise constant function representing discrete layers of alternating high/low permeability. In both cases we are able to show that the corrections in the homogenization theory accurately predict the position of the free boundary moving through the porous medium.
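As a small illustration of the leading-order homogenized coefficient for flow perpendicular to the layering, the sketch below computes the thickness-weighted harmonic-mean permeability of an alternating high/low stack; the higher-order corrections developed in the paper are not reproduced here, and the layer values are placeholders.

```python
import numpy as np

def effective_permeability(k_layers, thickness):
    """Thickness-weighted harmonic mean: the leading-order (homogenized)
    permeability for flow perpendicular to the layering."""
    thickness = np.asarray(thickness, dtype=float)
    k_layers = np.asarray(k_layers, dtype=float)
    return thickness.sum() / np.sum(thickness / k_layers)

# Alternating high/low permeability layers of equal thickness.
k = np.tile([1.0e-11, 1.0e-13], 10)      # m^2
h = np.full(k.size, 0.05)                # m
print("arithmetic mean permeability:", np.mean(k))
print("effective (harmonic) permeability:", effective_permeability(k, h))
```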
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. in fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
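The CFM/BIM machinery is well beyond a few lines, but the last ingredient mentioned, a Fast Fourier Transform based fast Poisson solver, can be sketched compactly. The periodic square grid and 5-point stencil below are simplifying assumptions made only for illustration.

```python
import numpy as np

def poisson_fft_periodic(f, h):
    """Solve -Laplacian(u) = f on a periodic square grid with spacing h,
    using the eigenvalues of the standard 5-point stencil (zero-mean data)."""
    n = f.shape[0]
    k = np.fft.fftfreq(n) * n                       # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    eig = (2 - 2 * np.cos(2 * np.pi * kx / n) +     # 5-point Laplacian eigenvalues
           2 - 2 * np.cos(2 * np.pi * ky / n)) / h**2
    eig[0, 0] = 1.0                                 # avoid dividing by the zero mode
    u_hat = np.fft.fft2(f) / eig
    u_hat[0, 0] = 0.0                               # fix the additive constant
    return np.real(np.fft.ifft2(u_hat))

# Manufactured solution u = sin(2*pi*x)*cos(4*pi*y) on [0,1)^2.
n = 128
h = 1.0 / n
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
u_exact = np.sin(2 * np.pi * x) * np.cos(4 * np.pi * y)
f = 20 * np.pi**2 * u_exact                          # -Laplacian of u_exact
print("max error:", np.abs(poisson_fft_periodic(f, h) - u_exact).max())
```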
Geometric constrained variational calculus. II: The second variation (Part I)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
2012-12-01
One begins with the eikonal equation for the acoustic phase function S(t,x), as derived from the geometric acoustics (high frequency) approximation to ... zb(x) is smooth and reasonably approximated as piecewise linear. The time domain ray (characteristic) equations for the eikonal equation are ẋ(t) = c ... the travel time is affected, which is more physically relevant than global error in φ since it provides the phase information for the eikonal equation (2.1).
Image encryption algorithm based on multiple mixed hash functions and cyclic shift
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhu, Xiaoqiang; Wu, Xiangjun; Zhang, Yingqian
2018-08-01
This paper proposes a new one-time pad scheme for chaotic image encryption that is based on multiple mixed hash functions and a cyclic-shift function. The initial value is generated using both information from the plaintext image and the chaotic sequences, which are calculated from the SHA1 and MD5 hash algorithms. The scrambling sequences are generated by the nonlinear equations and the logistic map. This paper aims to remedy the deficiencies of the traditional Baptista algorithm and its improved variants. We employ the cyclic-shift function and piece-wise linear chaotic maps (PWLCM), which give each shift number the characteristics of chaos, to diffuse the image. Experimental results and security analysis show that the new scheme has better security and can resist common attacks.
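A minimal sketch of the piecewise linear chaotic map (PWLCM) used for diffusion, together with a toy keystream generator; the control parameter, seed, burn-in, and byte quantization are illustrative assumptions, not the paper's full scheme.

```python
import numpy as np

def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map (PWLCM) on [0, 1)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)              # the map is symmetric about x = 0.5

def keystream(x0, p, n, burn_in=200):
    """Generate n chaotic bytes after discarding a transient."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(burn_in + n):
        x = pwlcm(x, p)
        if i >= burn_in:
            out[i - burn_in] = int(x * 256) % 256
    return out

print(keystream(0.3567, 0.271, 8))
```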
Some Properties of Generalized Connections in Quantum Gravity
NASA Astrophysics Data System (ADS)
Velhinho, J. M.
2002-12-01
Theories of connections play an important role in fundamental interactions, including Yang-Mills theories and gravity in the Ashtekar formulation. Typically in such cases, the classical configuration space A/G of connections modulo gauge transformations is an infinite-dimensional non-linear space of great complexity. Having in mind a rigorous quantization procedure, methods of functional calculus in an extension of A/G have been developed. For a compact gauge group G, the compact space \overline{A/G} (⊃ A/G) introduced by Ashtekar and Isham using C*-algebraic methods is a natural candidate to replace A/G in the quantum context [1], allowing the construction of diffeomorphism invariant measures [2,3,4]. Equally important is the space of generalized connections \bar{A} introduced in a similar way by Baez [5]. \bar{A} is particularly useful for the definition of vector fields in \overline{A/G}, fundamental in the construction of quantum observables [6]. These works crucially depend on the use of (generalized) Wilson variables associated to certain types of curves. We will consider the case of piecewise analytic curves [1,2,5], although most of the arguments apply equally to the piecewise smooth case [7,8].
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Comparing with the existing WG algorithm for solving the same type problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
NASA Astrophysics Data System (ADS)
Tsao, Yu-Chung
2016-02-01
This study models a joint location, inventory and preservation decision-making problem for non-instantaneous deteriorating items under delay in payments. An outside supplier provides a credit period to the wholesaler, which has a distribution system with distribution centres (DCs). Non-instantaneous deterioration means that no deterioration occurs during the earlier stage, which is typical of items such as fresh food and fruit. This paper also considers that the deterioration rate will decrease and the preservation cost will increase as the preservation effort increases. Therefore, how much preservation effort should be made is a crucial decision. The objective of this paper is to determine the optimal locations and number of DCs, the optimal replenishment cycle time at the DCs, and the optimal preservation effort simultaneously, such that the total network profit is maximised. The problem is formulated as piecewise nonlinear functions and has three different cases. Algorithms based on piecewise nonlinear optimisation are provided to solve the joint location and inventory problem for all cases. Computational analysis illustrates the solution procedures and the impacts of the related parameters on decisions and profits. The results of this study can serve as references for business managers or administrators.
Anomaly Detection in Dynamic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turcotte, Melissa
2014-10-14
Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
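As a toy version of the first-stage idea of a simple conjugate Bayesian counting model per edge, the sketch below scores each new count window by its posterior-predictive (negative binomial) tail probability under a Gamma-Poisson model. The real models in the thesis (changepoints, seasonality, Gibbs and SMC inference) are far richer; the prior, window counts and threshold here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def surprise_scores(counts, a0=1.0, b0=1.0):
    """Sequential conjugate Gamma-Poisson model for an edge's counts: score
    each new window by its negative-binomial predictive tail probability."""
    a, b = a0, b0
    scores = []
    for c in counts:
        # Predictive P(C >= c) under Negative-Binomial(a, p) with p = b/(b+1).
        p, tail, k = b / (b + 1.0), 0.0, 0
        prob = p ** a                       # P(C = 0)
        while k < c:
            tail += prob
            prob *= (a + k) / (k + 1.0) * (1.0 - p)
            k += 1
        scores.append(1.0 - tail)           # small value => anomalously large count
        a, b = a + c, b + 1.0               # conjugate posterior update
    return np.array(scores)

# Counts with a changepoint: the rate jumps from 2 to 9 at window 50.
counts = np.concatenate([rng.poisson(2.0, 50), rng.poisson(9.0, 20)])
print("first flagged window:", int(np.argmax(surprise_scores(counts) < 1e-3)))
```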
Efficient Digital Implementation of The Sigmoidal Function For Artificial Neural Network
NASA Astrophysics Data System (ADS)
Pratap, Rana; Subadra, M.
2011-10-01
An efficient piecewise linear approximation of a nonlinear function (PLAN) is proposed. It uses a Simulink-based design to perform a direct transformation from X to Y, where X is the input and Y is the approximated sigmoidal output. This PLAN is then used at the outputs of an artificial neural network to perform the nonlinear approximation. In this paper, a method is proposed to implement different approximations of the sigmoid function in FPGA (Field Programmable Gate Array) circuits. The major benefit of the proposed method lies in the possibility of designing neural networks by means of predefined block systems created in the System Generator environment, and of creating higher-level design tools for implementing neural networks in logic circuits.
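A small sketch of a piecewise-linear sigmoid approximation of the PLAN type, built here by interpolating the true sigmoid at a handful of hand-picked breakpoints and exploiting its symmetry; these breakpoints are illustrative and are not the exact PLAN segment table.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Breakpoints chosen by hand on [0, 8]; symmetry sigmoid(-x) = 1 - sigmoid(x)
# handles negative inputs. (Illustrative, not the exact PLAN segment table.)
BREAKS = np.array([0.0, 1.0, 2.375, 5.0, 8.0])
VALUES = sigmoid(BREAKS)

def sigmoid_pwl(x):
    """Piecewise-linear sigmoid: linear interpolation between tabulated breakpoints,
    mirrored for negative arguments, as used in hardware-friendly implementations."""
    x = np.asarray(x, dtype=float)
    y = np.interp(np.abs(x), BREAKS, VALUES)     # saturates to VALUES[-1] beyond 8
    return np.where(x >= 0, y, 1.0 - y)

xs = np.linspace(-8, 8, 2001)
print("max |error| of the PWL approximation:",
      np.abs(sigmoid_pwl(xs) - sigmoid(xs)).max())
```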
Millimeter wave attenuation prediction using a piecewise uniform rain rate model
NASA Technical Reports Server (NTRS)
Persinger, R. R.; Stutzman, W. L.; Bostian, C. W.; Castle, R. E., Jr.
1980-01-01
A piecewise uniform rain rate distribution model is introduced as a quasi-physical model of real rain along earth-space millimeter wave propagation paths. It permits calculation of the total attenuation from the specific attenuation in a simple fashion. The model predictions are verified by comparison with direct attenuation measurements for several frequencies, elevation angles, and locations. Also, coupled with the Rice-Holmberg rain rate model, attenuation statistics are predicted from rainfall accumulation data.
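For a piecewise uniform rain-rate profile, the total-attenuation computation reduces to summing the specific attenuation a·R^b (dB/km) over path segments. The sketch below uses placeholder power-law coefficients and segment values, not the paper's calibrated numbers.

```python
import numpy as np

def path_attenuation_db(rain_rates, lengths, a, b):
    """Total attenuation as the sum of specific attenuation a*R^b (dB/km)
    over piecewise-uniform rain-rate segments along the path (lengths in km)."""
    rain_rates = np.asarray(rain_rates, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    return np.sum(a * rain_rates**b * lengths)

# Three-segment path: a heavy cell near the ground station, lighter rain beyond.
# a and b are frequency-dependent coefficients; the values here are only
# placeholders of a plausible order of magnitude for tens of GHz.
print(path_attenuation_db([40.0, 12.0, 2.0], [1.5, 3.0, 4.0], a=0.05, b=1.1), "dB")
```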
NASA Astrophysics Data System (ADS)
Zhao, Dan; Wang, Xiaoman; Cheng, Yuan; Liu, Shaogang; Wu, Yanhong; Chai, Liqin; Liu, Yang; Cheng, Qianju
2018-05-01
Piecewise-linear structures can effectively broaden the working frequency band of a piezoelectric energy harvester, and further study of such structures can advance practical energy-harvesting devices able to power microelectronic components. In this paper, the incremental harmonic balance (IHB) method is introduced to handle the otherwise complicated and difficult analysis of the piezoelectric energy harvester. After the nonlinear dynamic equation of the single-degree-of-freedom piecewise-linear energy harvester is obtained by mathematical modeling and solved with the IHB method, the theoretical amplitude-frequency curve of the open-circuit voltage is obtained. Under 0.2 g harmonic excitation, a piecewise-linear energy harvester is experimentally tested by unidirectional frequency-increasing sweeps. The results demonstrate that the theoretical and experimental amplitudes have the same trend; the widths of the working band with high voltage output are 4.9 Hz and 4.7 Hz, respectively, a relative error of 4.08%, and the peak open-circuit output voltages are 21.53 V and 18.25 V, respectively, a relative error of 15.23%. Since the theoretical values are consistent with the experimental results, the theoretical model and the incremental harmonic balance method used in this paper are suitable for solving the single-degree-of-freedom piecewise-linear piezoelectric energy harvester and can be applied to further optimized parameter design.
Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Kelly; A. Malkhasyan
2010-09-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
NASA Astrophysics Data System (ADS)
Marchuk, A. A.; Sotnikova, N. Y.
2017-03-01
We present a modification of the method for reconstructing the stellar velocity ellipsoid (SVE) in disc galaxies. Our version does not need any parametrization of the velocity dispersion profiles and uses only one assumption, that the ratio σz/σR remains constant along the profile or along several pieces of the profile. The method was tested on two galaxies from the sample of other authors and applied for the first time to three lenticular galaxies, NGC 1167, NGC 3245 and NGC 4150, as well as to one Sab galaxy, NGC 338. We found that for galaxies with a high inclination (i > 55°-60°) it is difficult or rather impossible to extract information about the SVE, while for galaxies at an intermediate inclination the extraction procedure is successful. For NGC 1167 we managed to reconstruct the SVE, provided that the value of σz/σR is piecewise constant. We found σz/σR = 0.7 for the inner parts of the disc and σz/σR = 0.3 for the outskirts. We also obtained a rigid constraint on the value of the radial velocity dispersion σR for highly inclined galaxies, and tested the result using the asymmetric-drift equation, provided that the gas rotation curve is available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
NASA Technical Reports Server (NTRS)
Mehdi, S. Bilal; Puig-Navarro, Javier; Choe, Ronald; Cichella, Venanzio; Hovakimyan, Naira; Chandarana, Meghan; Trujillo, Anna; Rothhaar, Paul M.; Tran, Loc; Neilan, James H.;
2016-01-01
Autonomous operation of UAS holds promise for greater productivity of atmospheric science missions. However, several challenges need to be overcome before such missions can be made autonomous. This paper presents a framework for safe autonomous operations of multiple vehicles, particularly suited for atmospheric science missions. The framework revolves around the use of piecewise Bezier curves for trajectory representation, which in conjunction with path-following and time-coordination algorithms, allows for safe coordinated operations of multiple vehicles.
Global and local curvature in density functional theory.
Zhao, Qing; Ioannidis, Efthymios I; Kulik, Heather J
2016-08-07
Piecewise linearity of the energy with respect to fractional electron removal or addition is a requirement of an electronic structure method that necessitates the presence of a derivative discontinuity at integer electron occupation. Semi-local exchange-correlation (xc) approximations within density functional theory (DFT) fail to reproduce this behavior, giving rise to deviations from linearity with a convex global curvature that is evidence of many-electron, self-interaction error and electron delocalization. Popular functional tuning strategies focus on reproducing piecewise linearity, especially to improve predictions of optical properties. In a divergent approach, Hubbard U-augmented DFT (i.e., DFT+U) treats self-interaction errors by reducing the local curvature of the energy with respect to electron removal or addition from one localized subshell to the surrounding system. Although it has been suggested that DFT+U should simultaneously alleviate global and local curvature in the atomic limit, no detailed study on real systems has been carried out to probe the validity of this statement. In this work, we show when DFT+U should minimize deviations from linearity and demonstrate that a "+U" correction will never worsen the deviation from linearity of the underlying xc approximation. However, we explain varying degrees of efficiency of the approach over 27 octahedral transition metal complexes with respect to transition metal (Sc-Cu) and ligand strength (CO, NH3, and H2O) and investigate select pathological cases where the delocalization error is invisible to DFT+U within an atomic projection framework. Finally, we demonstrate that the global and local curvatures represent different quantities that show opposing behavior with increasing ligand field strength, and we identify where these two may still coincide.
NASA Astrophysics Data System (ADS)
Li, G.; Gordon, I. E.; Rothman, L. S.; Tan, Y.; Hu, S.-M.; Kassi, S.; Campargue, A.
2014-06-01
In order to improve and extend the existing HITRAN database [1] and HITEMP [2] data for carbon monoxide, ro-vibrational line lists were computed for all transitions of nine isotopologues of the CO molecule, namely 12C16O, 12C17O, 12C18O, 13C16O, 13C17O, 13C18O, 14C16O, 14C17O, and 14C18O, in the electronic ground state up to v = 41 and J = 150. Line positions and intensity calculations were carried out using a newly-determined piecewise dipole moment function (DMF) in conjunction with the wavefunctions calculated from a previous experimentally-determined potential energy function of Coxon and Hajigeorgiou [3]. Ab initio calculations and a direct-fit method which simultaneously fits all the reliable experimental ro-vibrational matrix elements were used to construct the piecewise dipole moment function. To provide additional input parameters for the fit, new Cavity Ring Down Spectroscopy experiments were carried out to enable measurements of the lines in the 4-0 band with low uncertainty (Grenoble) as well as the first measurements of lines in the 6-0 band (Hefei). Accurate partition sums have been derived through direct summation for a temperature range from 1 to 9000 K. A complete set of broadening and shift parameters is also provided, now including parameters induced by CO2 and H2 in order to aid planetary applications.
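As a minimal illustration of the direct-summation partition sums mentioned above, the sketch below evaluates Q(T) = Σ g_i exp(−c2·E_i/T) for a rigid-rotor-like toy level list (B ≈ 1.92 cm⁻¹, CO-like); the actual HITRAN calculation sums over the full ro-vibrational level set, which is not reproduced here.

```python
import numpy as np

C2 = 1.4387769          # second radiation constant, cm*K

def partition_sum(energies_cm, degeneracies, temperature):
    """Direct summation Q(T) = sum_i g_i * exp(-c2*E_i/T) over levels
    (E_i in cm^-1); the level list here is a toy stand-in, not the CO line list."""
    e = np.asarray(energies_cm, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    return np.sum(g * np.exp(-C2 * e / temperature))

# Rigid-rotor-like toy levels: E_J = B*J*(J+1), g_J = 2J+1, B ~ 1.92 cm^-1 (CO-like).
J = np.arange(0, 150)
E = 1.92 * J * (J + 1)
g = 2 * J + 1
print("Q(296 K) ~", round(partition_sum(E, g, 296.0), 1))
```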
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
Resolving precipitation-induced water content profiles through inversion of dispersive GPR data
NASA Astrophysics Data System (ADS)
Mangel, A. R.; Moysey, S. M.; Van Der Kruk, J.
2015-12-01
Ground-penetrating radar (GPR) has become a popular tool for monitoring hydrologic processes. When monitoring infiltration, the thin wetted zone that occurs near the ground surface at early times may act as a dispersive waveguide. This low-velocity layer traps the GPR waves, causing specific frequencies of the signal to travel at different phase velocities, confounding standard traveltime analysis. In a previous numerical study we demonstrated the potential of dispersion analysis for estimating the depth distribution of waveguide water contents. Here, we evaluate the effectiveness of the methodology when applying it to experimental time-lapse dispersive GPR data collected during a laboratory infiltration experiment in a relatively homogeneous soil. A large sand-filled tank is equipped with an automated gantry to independently control the position of 1000 MHz source and receiver antennas. The system was programmed to repeatedly collect a common mid-point (CMP) profile at the center of the tank followed by two constant offset profiles (COP) in the x and y directions. Each collection was completed in 30 s and repeated 50 times during a 28 min experiment. Two minutes after the start of measurements, the surface of the sand was irrigated at a constant flux rate of 0.006 cm/sec for 23 minutes. Time-lapse COPs show increases in traveltime to reflectors in the tank associated with increasing water content, as well as the development of a wetting front reflection. From 4-10 min, the CMPs show a distinct shingling characteristic that is indicative of waveguide dispersion. Forward models in which the waveguide is conceptualized as discrete layers and as a piecewise-linear function were used to invert picked dispersion curves for waveguide properties. We present the results from both inversion approaches for multiple dispersive CMPs and show how the single-layer model fails to represent the gradational nature of the wetting front.
The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps
NASA Astrophysics Data System (ADS)
Simpson, D. J. W.
2018-05-01
In two-parameter bifurcation diagrams of piecewise-linear continuous maps on ℝ^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well-approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
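A minimal sketch of a continuous, degree-one, piecewise-linear circle map with two slopes (a skew sawtooth) and a crude rotation-number estimate; the parameter values are arbitrary and are not tied to any particular shrinking point studied in the paper.

```python
import numpy as np

def make_skew_sawtooth(offset, slope_left, c):
    """Lift of a continuous, degree-one, piecewise-linear circle map with two
    slopes; the right slope is fixed by the degree-one condition
    slope_left*c + slope_right*(1 - c) = 1."""
    slope_right = (1.0 - slope_left * c) / (1.0 - c)

    def lift(x):
        n, frac = np.floor(x), x - np.floor(x)
        if frac < c:
            y = offset + slope_left * frac
        else:
            y = offset + slope_left * c + slope_right * (frac - c)
        return n + y

    return lift

def rotation_number(lift, x0=0.1, n_iter=20000):
    x = x0
    for _ in range(n_iter):
        x = lift(x)
    return (x - x0) / n_iter

f = make_skew_sawtooth(offset=0.31, slope_left=0.6, c=0.5)
print("rotation number ~", round(rotation_number(f), 4))   # near a mode-locked rational
```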
Theoretical and Experimental Study on Wide Range Optical Fiber Turbine Flow Sensor.
Du, Yuhuan; Guo, Yingqing
2016-07-15
In this paper, a novel fiber turbine flow sensor is proposed and demonstrated for liquid measurement with optical fiber, using light intensity modulation to measure the turbine rotational speed and convert it to a flow rate. The double-circle-coaxial (DCC) fiber probe is introduced for frequency measurement for the first time. Through the ratio of the light intensities of the two rings, interference in the acquired light signals can be eliminated. To predict the relationship between the output frequency and the flow in the nonlinear range, a model of the turbine flow sensor was built. By analyzing the characteristics of the turbine flow sensor, piecewise linear equations were obtained to expand the flow measurement range. Furthermore, experimental verification was carried out. The results showed that the flow range ratio of the DN20 turbine flow sensor was improved 2.9 times after applying the piecewise-linear method in the nonlinear range. Therefore, combining the DCC fiber sensor and the piecewise-linear method, the device can be developed into a wide-range fiber turbine flowmeter with strong immunity to electromagnetic interference (anti-EMI).
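The piecewise-linear range extension described above amounts, in its simplest form, to a segment-by-segment meter factor, i.e. linear interpolation of a calibration table. The table values below are hypothetical and are used only to show the mechanics.

```python
import numpy as np

# Hypothetical calibration table: measured pulse frequency (Hz) vs. reference
# flow rate (L/min), densest where the meter response is most nonlinear.
FREQ_BREAKS = np.array([ 12.0,  25.0,  60.0, 140.0, 300.0])
FLOW_POINTS = np.array([  1.6,   3.0,   6.5,  15.0,  32.0])

def flow_from_frequency(freq_hz):
    """Piecewise-linear calibration: interpolate the calibration table so each
    segment has its own meter factor, extending the usable range into the
    nonlinear low-flow region."""
    return np.interp(freq_hz, FREQ_BREAKS, FLOW_POINTS)

print(flow_from_frequency(np.array([15.0, 80.0, 250.0])))
```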
Evolution of inviscid Kelvin-Helmholtz instability from a piecewise linear shear layer
NASA Astrophysics Data System (ADS)
Guha, Anirban; Rahmani, Mona; Lawrence, Gregory
2012-11-01
Here we study the evolution of 2D, inviscid Kelvin-Helmholtz instability (KH) ensuing from a piecewise linear shear layer. Although KH pertaining to smooth shear layers (e.g. the hyperbolic tangent profile) has been thoroughly investigated in the past, very little is known about KH resulting from sharp shear layers. Pozrikidis and Higdon (1985) have shown that a piecewise linear shear layer evolves into elliptical vortex patches. This non-linear state is dramatically different from the well known spiral-billow structure of KH. In fact, there is little acknowledgement that elliptical vortex patches can represent non-linear KH. In this work, we show how such patches evolve through the interaction of vorticity waves. Our work is based on two types of computational methods: (i) Contour Dynamics, a boundary-element method which tracks the evolution of the contour of a vortex patch using Lagrangian marker points, and (ii) Direct Numerical Simulation (DNS), an Eulerian pseudo-spectral method heavily used in studying hydrodynamic instability and turbulence.
Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to exact a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, if appropriate alterations are made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.
A feature refinement approach for statistical interior CT reconstruction
NASA Astrophysics Data System (ADS)
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-01
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV) minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
NASA Astrophysics Data System (ADS)
Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel
2015-06-01
We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. This is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with a time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; there are local bifurcations such as saddle-node and Hopf bifurcations; optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.
ERIC Educational Resources Information Center
Grimm, C. A.
This document contains two units that examine integral transforms and series expansions. In the first module, the user is expected to learn how to use the unified method presented to obtain Laplace transforms, Fourier transforms, complex Fourier series, real Fourier series, and half-range sine series for given piecewise continuous functions. In…
Large-deviation properties of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-10-01
We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
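A quick numerical illustration of the dry-friction Langevin model and of the functionals named above (time spent near a phase-space point and displacement), via an Euler-Maruyama simulation with unit friction and unit noise; the analytical large-deviation results of the paper are of course not reproduced by such a simulation, and the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def dry_friction_path(mu=1.0, dt=1e-3, t_max=50.0):
    """Euler-Maruyama for dv = -mu*sign(v)*dt + dW: Brownian motion of a velocity
    subject to dry (Coulomb) friction."""
    n = int(t_max / dt)
    v = np.empty(n + 1)
    v[0] = 0.0
    noise = np.sqrt(dt) * rng.standard_normal(n)
    for i in range(n):
        v[i + 1] = v[i] - mu * np.sign(v[i]) * dt + noise[i]
    return v

dt = 1e-3
v = dry_friction_path(dt=dt)
occupation_near_zero = np.mean(np.abs(v) < 0.05)   # fraction of time with |v| < 0.05
displacement = np.sum(v) * dt                      # time integral of v, i.e. the position
print("occupation fraction of |v| < 0.05:", round(occupation_near_zero, 3))
print("displacement:", round(displacement, 3))
```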
On the stability, storage capacity, and design of nonlinear continuous neural networks
NASA Technical Reports Server (NTRS)
Guez, Allon; Protopopsecu, Vladimir; Barhen, Jacob
1988-01-01
The stability, capacity, and design of a nonlinear continuous neural network are analyzed. Sufficient conditions for existence and asymptotic stability of the network's equilibria are reduced to a set of piecewise-linear inequality relations that can be solved by a feedforward binary network, or by methods such as Fourier elimination. The stability and capacity of the network are characterized by the postsynaptic firing rate function. An N-neuron network with a sigmoidal firing function is shown to have up to 3N equilibrium points. This offers a higher capacity than the (0.1-0.2)N obtained in the binary Hopfield network. Moreover, it is shown that by a proper selection of the postsynaptic firing rate function, one can significantly extend the storage capacity of the network.
Boundary element modelling of dynamic behavior of piecewise homogeneous anisotropic elastic solids
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Litvinchuk, S. Yu
2018-04-01
A traditional direct boundary integral equations method is applied to solve three-dimensional dynamic problems of piecewise homogeneous linear elastic solids. The materials of homogeneous parts are considered to be generally anisotropic. The technique used to solve the boundary integral equations is based on the boundary element method applied together with the Radau IIA convolution quadrature method. A numerical example of suddenly loaded 3D prismatic rod consisting of two subdomains with different anisotropic elastic properties is presented to verify the accuracy of the proposed formulation.
Nonlinear Deformation of a Piecewise Homogeneous Cylinder Under the Action of Rotation
NASA Astrophysics Data System (ADS)
Akhundov, V. M.; Kostrova, M. M.
2018-05-01
Deformation of a piecewise homogeneous cylinder under the action of rotation is investigated. The cylinder consists of an elastic matrix with circular fibers of square cross section made of a more rigid elastic material and arranged doubly periodically in the cylinder. The behavior of the cylinder under large displacements and deformations is examined using the equations of nonlinear elasticity theory for the cylinder constituents. The problem posed is solved by the finite-difference method using continuation with respect to the rotational speed of the cylinder.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajpathak, Bhooshan, E-mail: bhooshan@ee.iitb.ac.in; Pillai, Harish K., E-mail: hp@ee.iitb.ac.in; Bandyopadhyay, Santanu, E-mail: santanu@me.iitb.ac.in
2015-10-15
In this paper, we analytically examine the unstable periodic orbits and chaotic orbits of the 1-D linear piecewise-smooth discontinuous map. We explore the existence of unstable orbits and the effect of parameter variation on the coexistence of unstable orbits. We show that the organization of these orbits is different from the well-known period-adding cascade structure associated with the stable periodic orbits of the same map. Further, we analytically prove the existence of chaotic orbits for this map.
1990-11-19
The behavior of the proposed filters is illustrated on several examples and compared with that of the estimated process and of the approximately obtained optimal filter... Piecewise monotone filtering with small observation noise, SIAM J. Control Optim. 20, 261-285, 1989. [10] W.H. Fleming and R.W. Rishel... Milheiro de Oliveira: Approximate filters for a discrete-time nonlinear filtering problem with small observation noise, INRIA report 1142, 1989.
Stress state of a piecewise uniform layered space with doubly periodic internal cracks
NASA Astrophysics Data System (ADS)
Hakobyan, V. N.; Dashtoyan, L. L.
2018-04-01
The present paper deals with the stress state of a piecewise homogeneous plane formed by the alternating junction of two distinct strips of equal height made of different materials. The plane contains a doubly periodic system of internal cracks. The governing system of singular integral equations of the first kind for the dislocation density of the cracks is derived. The solution of the problem in the case where only one of the repeating strips contains a doubly periodic crack is obtained by the method of mechanical quadratures.
NASA Astrophysics Data System (ADS)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault, based on the strike, dip, and fault segment vertices, is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r onto an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates the peak ground motion. The appropriate ground motion prediction equations (GMPE) for probabilistic seismic hazard analysis (PSHA) can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
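The abstract's cumulative distribution relies on estimating the area of a circle clipped to a rectangle. The sketch below shows one plausible way to do that (Monte Carlo); the authors' exact algorithm is not specified here, and the geometry values are placeholders.

```python
# Sketch (Monte Carlo) of estimating the area of a circle of radius r clipped to
# a rectangle, the geometric kernel used by the FFDD cumulative distribution.
import numpy as np

def circle_in_rectangle_area(cx, cy, r, xmin, xmax, ymin, ymax, n=200_000, seed=0):
    """Area of {(x, y): (x-cx)^2 + (y-cy)^2 <= r^2} intersected with the rectangle."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(xmin, xmax, n)
    ys = rng.uniform(ymin, ymax, n)
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    rect_area = (xmax - xmin) * (ymax - ymin)
    return rect_area * inside.mean()

# Example: circle centred near one corner of a rectangular rupture area
print(circle_in_rectangle_area(0.0, 0.0, 1.0, -0.5, 2.0, -0.5, 2.0))
```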
NASA Astrophysics Data System (ADS)
Ren, Xiaodong; Xu, Kun; Shyy, Wei
2016-07-01
This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving mesh gas-kinetic DG method is proposed for both inviscid and viscous flow computations. A flux integration method across a translating and deforming cell interface has been constructed. Unlike the previous ALE-type gas-kinetic method, which used a piecewise constant mesh velocity at each cell interface within each time step, the present finite element framework accounts for the mesh velocity variation inside a cell and for the mesh translation and rotation at a cell interface. As a result, the current scheme is applicable to any kind of mesh movement, such as translation, rotation, and deformation. The accuracy and robustness of the scheme have been improved significantly in the oscillating airfoil calculations. All computations are conducted in a physical domain rather than in a reference domain, and the basis functions move with the grid movement. Therefore, the numerical scheme can preserve the uniform flow automatically and satisfy the geometric conservation law (GCL). The numerical accuracy can be maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Anirban; Ganguly, Anindita; Chatterjee, Saumya Deep
2018-04-01
In this paper the authors deal with seven kinds of nonlinear Volterra and Fredholm equations. They formulate an algorithm for solving these equation types via a Hybrid Function (HF) and Triangular Function (TF) piecewise-linear orthogonal approach. In this approach, the integral or integro-differential equation is reduced to an equivalent system of simultaneous nonlinear equations, which is solved with either Newton's method or Broyden's method. The L2-norm and max-norm errors are calculated for both the HF and the TF method for each kind of equation. Through the illustrated examples, the authors show that the HF-based algorithm produces stable results, whereas the TF-based computational method yields stable, anomalous, or unstable results.
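The abstract mentions Broyden's method as one of the solvers applied to the reduced simultaneous nonlinear system. A minimal sketch of Broyden's (good) method follows; the test system is an arbitrary illustration, not one of the paper's seven equation types.

```python
# Minimal sketch of Broyden's (good) method for a nonlinear system F(x) = 0,
# the kind of solver the abstract applies to the HF/TF-reduced equations.
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n)                       # initial Jacobian approximation
    f = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -f)
        x_new = x + dx
        f_new = F(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - f
        # Rank-one secant update of the Jacobian approximation
        B += np.outer(df - B @ dx, dx) / (dx @ dx)
        x, f = x_new, f_new
    return x

# Example system: x0^2 + x1 - 1 = 0,  x0 - x1^3 = 0
sol = broyden(lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**3]), [0.5, 0.5])
print(sol)
```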
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, is used as the samples to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
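The following is a hedged sketch of the BMA combination step described above: given release decisions from three member rules and BMA weights/variances assumed to have been estimated already, the combined release and a 90% interval follow from the Gaussian mixture. All numbers are placeholders, not values from the study.

```python
# Sketch of combining three member rules with assumed BMA weights and variances.
import numpy as np

weights = np.array([0.25, 0.30, 0.45])      # assumed BMA weights (sum to 1)
sigmas  = np.array([12.0, 10.0, 8.0])       # assumed member std devs (m^3/s)
members = np.array([145.0, 150.0, 158.0])   # member release decisions (m^3/s)

bma_mean = weights @ members

# 90% interval from the Gaussian mixture (the paper uses MCMC; plain sampling here)
rng = np.random.default_rng(1)
idx = rng.choice(3, size=50_000, p=weights)
samples = rng.normal(members[idx], sigmas[idx])
lo, hi = np.percentile(samples, [5, 95])
print(f"BMA release {bma_mean:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```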
Artificial Neural Network and application in calibration transfer of AOTF-based NIR spectrometer
NASA Astrophysics Data System (ADS)
Wang, Wenbo; Jiang, Chengzhi; Xu, Kexin; Wang, Bin
2002-09-01
Chemometrics is widely applied to develop models for quantitative prediction of unknown samples in near-infrared (NIR) spectroscopy. However, calibrated models generally fail when new instruments are introduced or instrument parts are replaced. Therefore, calibration transfer becomes necessary to avoid the costly, time-consuming recalibration of models. Piecewise Direct Standardization (PDS) has been proven to be a reference method for standardization. In this paper, an Artificial Neural Network (ANN) is employed as an alternative to transfer spectra between instruments. Two acousto-optic tunable filter (AOTF) NIR spectrometers are employed in the experiment. Spectra of glucose solutions are collected on the spectrometers in transflectance mode. A two-layer backpropagation network is employed to approximate the mapping between instruments piecewise. The standardization subset is selected by the Kennard-Stone (K-S) algorithm in the space of the first two Principal Component Analysis (PCA) scores of the spectral matrix. In the current experiment, obvious nonlinearity is noted between the instruments, and attempts are made to correct this nonlinear effect. Prediction results before and after successful calibration transfer are compared. Successful transfer can be achieved by adapting the window size and the training parameters. The final results reveal that the ANN is effective in correcting the nonlinear instrumental difference, and only a 1.5-2 times larger prediction error is expected after successful transfer.
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
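The performance metric named above, the Fourier transform of the interpolation kernel, is easy to evaluate numerically. The sketch below compares the linear-interpolation hat kernel with the cubic B-spline kernel; it is an illustration of the metric, not a reproduction of the paper's comparisons.

```python
# Numerical Fourier transform of two interpolation kernels, the metric used above.
import numpy as np

def hat(x):                      # kernel of linear interpolation, support [-1, 1]
    return np.clip(1.0 - np.abs(x), 0.0, None)

def cubic_bspline(x):            # cubic B-spline kernel, support [-2, 2]
    ax = np.abs(x)
    out = np.zeros_like(ax)
    m1 = ax < 1
    m2 = (ax >= 1) & (ax < 2)
    out[m1] = 2.0 / 3.0 - ax[m1] ** 2 + 0.5 * ax[m1] ** 3
    out[m2] = (2.0 - ax[m2]) ** 3 / 6.0
    return out

x = np.linspace(-4, 4, 8001)
dx = x[1] - x[0]
freqs = np.linspace(0.0, 2.0, 201)            # cycles per sample
for name, k in [("linear", hat(x)), ("cubic B-spline", cubic_bspline(x))]:
    K = np.array([np.sum(k * np.cos(2 * np.pi * f * x)) * dx for f in freqs])
    # Ideal kernel transform is 1 below the Nyquist frequency (0.5) and 0 above;
    # the passband deviation indicates why B-splines need pre-filtering.
    passband = freqs < 0.5
    print(name, "max passband deviation:", np.max(np.abs(K[passband] - 1.0)))
```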
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
NASA Astrophysics Data System (ADS)
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need for an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters for the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle and the differences of the norms due to large-scale structures in each period.
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution for generating wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data of two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques selected are w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are downsampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data. A penalty point system based on the coefficient of variation of the root mean square error, the normalized mean absolute error, and the prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
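The evaluation loop described above (downsample to 6-hourly, interpolate back to hourly, score against the original) is easy to mirror on synthetic data. The sketch below uses two of the nine techniques (cubic spline and piecewise cubic Hermite); the wind series is synthetic, not the NCEP or in-situ data.

```python
# Illustrative check of the temporal-downscaling procedure on a synthetic series.
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

rng = np.random.default_rng(2)
t_hour = np.arange(0, 24 * 30)                       # 30 days of hourly data
wind = 8 + 3 * np.sin(2 * np.pi * t_hour / 24) + rng.normal(0, 0.8, t_hour.size)

t_6h = t_hour[::6]                                   # keep 0/6/12/18 h values
t_eval = t_hour[t_hour <= t_6h[-1]]                  # stay inside the subsampled range
for name, interp in [("cubic spline", CubicSpline(t_6h, wind[::6])),
                     ("PCHIP", PchipInterpolator(t_6h, wind[::6]))]:
    recon = interp(t_eval)
    rmse = np.sqrt(np.mean((recon - wind[:t_eval.size]) ** 2))
    nmae = np.mean(np.abs(recon - wind[:t_eval.size])) / np.mean(wind)
    print(f"{name}: RMSE = {rmse:.3f} m/s, normalized MAE = {nmae:.3f}")
```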
Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J
2018-01-30
Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.
NASA Astrophysics Data System (ADS)
Yang, R.; Zelyak, O.; Fallone, B. G.; St-Aubin, J.
2018-02-01
Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.
In-Flight Alignment Using H ∞ Filter for Strapdown INS on Aircraft
Pei, Fu-Jun; Liu, Xuan; Zhu, Li
2014-01-01
In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with the disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is analyzed by piecewise constant system (PWCS) theory, and the observable degree is computed by singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. An H∞ filter is then designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm reaches better accuracy under dynamic disturbance conditions. PMID:24511300
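The PWCS observability check mentioned above amounts to stacking the observability matrices of the piecewise-constant segments and inspecting the singular values. The sketch below illustrates that step on small stand-in matrices; the A_j and H used here are assumptions, not the SINS error model.

```python
# Stacked observability matrix for a piecewise constant system, with SVD giving
# the observable degree; full rank means the system is completely observable.
import numpy as np

def stacked_observability(A_list, H):
    n = A_list[0].shape[0]
    blocks = []
    for A in A_list:                   # one block per piecewise-constant segment
        Ak = np.eye(n)
        for _ in range(n):             # H, HA, ..., HA^(n-1) for this segment
            blocks.append(H @ Ak)
            Ak = Ak @ A
    return np.vstack(blocks)

A1 = np.array([[1.0, 0.1], [0.0, 1.0]])    # segment 1 dynamics (assumed)
A2 = np.array([[1.0, 0.1], [-0.2, 1.0]])   # segment 2 dynamics (assumed)
H = np.array([[1.0, 0.0]])                 # position-only measurement (assumed)

Q = stacked_observability([A1, A2], H)
s = np.linalg.svd(Q, compute_uv=False)
print("rank:", np.linalg.matrix_rank(Q), "singular values:", s)
```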
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glodzik, Dominik; Morganella, Sandro; Davies, Helen
Somatic rearrangements contribute to the mutagenized landscape of cancer genomes. Here, we systematically interrogated rearrangements in 560 breast cancers by using a piecewise constant fitting approach. We identified 33 hotspots of large (>100 kb) tandem duplications, a mutational signature associated with homologous-recombination-repair deficiency. Notably, these tandem-duplication hotspots were enriched in breast cancer germline susceptibility loci (odds ratio (OR) = 4.28) and breast-specific 'super-enhancer' regulatory elements (OR = 3.54). These hotspots may be sites of selective susceptibility to double-strand-break damage due to high transcriptional activity or, through incrementally increasing copy number, may be sites of secondary selective pressure. Furthermore, the transcriptomic consequences ranged from strong individual oncogene effects to weak but quantifiable multigene expression effects. We thus present a somatic-rearrangement mutational process affecting coding sequences and noncoding regulatory elements and contributing a continuum of driver consequences, from modest to strong effects, thereby supporting a polygenic model of cancer development.
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
Glodzik, Dominik; Morganella, Sandro; Davies, Helen; ...
2017-01-23
Somatic rearrangements contribute to the mutagenized landscape of cancer genomes. Here, we systematically interrogated rearrangements in 560 breast cancers by using a piecewise constant fitting approach. We identified 33 hotspots of large (>100 kb) tandem duplications, a mutational signature associated with homologous-recombination-repair deficiency. Notably, these tandem-duplication hotspots were enriched in breast cancer germline susceptibility loci (odds ratio (OR) = 4.28) and breast-specific 'super-enhancer' regulatory elements (OR = 3.54). These hotspots may be sites of selective susceptibility to double-strand-break damage due to high transcriptional activity or, through incrementally increasing copy number, may be sites of secondary selective pressure. Furthermore, the transcriptomic consequences ranged from strong individual oncogene effects to weak but quantifiable multigene expression effects. We thus present a somatic-rearrangement mutational process affecting coding sequences and noncoding regulatory elements and contributing a continuum of driver consequences, from modest to strong effects, thereby supporting a polygenic model of cancer development.
NASA Astrophysics Data System (ADS)
Admal, Nikhil Chandra; Po, Giacomo; Marian, Jaime
2017-12-01
The standard way of modeling plasticity in polycrystals is by using the crystal plasticity model for single crystals in each grain, and imposing suitable traction and slip boundary conditions across grain boundaries. In this fashion, the system is modeled as a collection of boundary-value problems with matching boundary conditions. In this paper, we develop a diffuse-interface crystal plasticity model for polycrystalline materials that results in a single boundary-value problem with a single crystal as the reference configuration. Using a multiplicative decomposition of the deformation gradient into lattice and plastic parts, i.e. F(X, t) = F_L(X, t) F_P(X, t), an initial stress-free polycrystal is constructed by imposing F_L to be a piecewise constant rotation field R_0(X), and F_P = R_0(X)^T, thereby having F(X, 0) = I and zero elastic strain. This model serves as a precursor to higher order crystal plasticity models with grain boundary energy and evolution.
New Developments in the Embedded Statistical Coupling Method: Atomistic/Continuum Crack Propagation
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2008-01-01
A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain has been enhanced. The concurrent MD-FEM coupling methodology uses statistical averaging of the deformation of the atomistic MD domain to provide interface displacement boundary conditions to the surrounding continuum FEM region, which, in turn, generates interface reaction forces that are applied as piecewise constant traction boundary conditions to the MD domain. The enhancement is based on the addition of molecular dynamics-based cohesive zone model (CZM) elements near the MD-FEM interface. The CZM elements are a continuum interpretation of the traction-displacement relationships taken from MD simulations using Cohesive Zone Volume Elements (CZVE). The addition of CZM elements to the concurrent MD-FEM analysis provides a consistent set of atomistically-based cohesive properties within the finite element region near the growing crack. Another set of CZVEs is then used to extract revised CZM relationships from the enhanced embedded statistical coupling method (ESCM) simulation of an edge crack under uniaxial loading.
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
NASA Astrophysics Data System (ADS)
Ivashkin, V. V.; Krylov, I. V.
2014-03-01
The problem of optimization of a spacecraft transfer to the Apophis asteroid is investigated. The transfer scheme under analysis includes a geocentric stage of boosting the spacecraft with high thrust, a heliocentric stage of control by a low-thrust engine, and a stage of deceleration with injection into a satellite orbit about the asteroid. In doing this, the problem of optimal control is solved for cases of ideal and piecewise-constant low thrust, and the optimal magnitude and direction of the spacecraft's hyperbolic velocity "at infinity" during departure from the Earth are determined. The spacecraft trajectories are found based on a specially developed comprehensive method of optimization. This method combines dynamic programming at the first stage of analysis and the Pontryagin maximum principle at the concluding stage, together with the parameter continuation method. Estimates are obtained for the spacecraft's final mass and for the payload mass that can be delivered to the asteroid using the Soyuz-Fregat launcher.
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
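The likelihood equivalence that the abstract re-derives can be stated compactly. The block below gives a generic version of that algebra (notation is illustrative, not the authors'): the exponential-survival log-likelihood equals, up to terms free of the regression coefficients, a Poisson log-likelihood with a log(time) offset.

```latex
% Exponential survival: event indicator d_i, exposure time t_i,
% hazard \lambda_i = \exp(x_i^{\top}\beta).
\ell_{\mathrm{surv}}(\beta)
  = \sum_i \left[ d_i \log \lambda_i - \lambda_i t_i \right]
  = \sum_i \left[ d_i \left( \log t_i + x_i^{\top}\beta \right)
                  - t_i\, e^{x_i^{\top}\beta} \right]
    - \sum_i d_i \log t_i .
% The first sum is the log-likelihood of d_i \sim \mathrm{Poisson}(\mu_i) with
% \log \mu_i = \log t_i + x_i^{\top}\beta (a log(time) offset), up to the
% \beta-free term \sum_i \log(d_i!).  Hence both models yield the same \hat{\beta}.
```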
Tracking Simulation of Third-Integer Resonant Extraction for Fermilab's Mu2e Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Chong Shik; Amundson, James; Michelotti, Leo
2015-02-13
The Mu2e experiment at Fermilab requires acceleration and transport of intense proton beams in order to deliver stable, uniform particle spills to the production target. To meet the experimental requirement, particles will be extracted slowly from the Delivery Ring to the external beamline. Using Synergia2, we have performed multi-particle tracking simulations of third-integer resonant extraction in the Delivery Ring, including space charge effects, physical beamline elements, and apertures. A piecewise linear ramp profile of tune quadrupoles was used to maintain a constant averaged spill rate throughout extraction. To study and minimize beam losses, we implemented and introduced a number of features, beamline element apertures, and septum plane alignments. Additionally, the RF Knockout (RFKO) technique, which excites particles transversely, is employed for spill regulation. Combined with a feedback system, it assists in fine-tuning spill uniformity. Simulation studies were carried out to optimize the RFKO feedback scheme, which will be helpful in designing the final spill regulation system.
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide-field fluorescence microscopy data.
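For orientation, the decoupled subproblems produced by the splitting are essentially one-dimensional Potts problems. The sketch below is the classic O(n^2) dynamic program for the plain (unblurred) 1D Potts model; it illustrates the underlying model, not the authors' 3D splitting or GPU scheme.

```python
# 1D Potts (piecewise constant Mumford-Shah) segmentation by dynamic programming:
# minimize  gamma * (#jumps) + sum of squared deviations within segments.
import numpy as np

def potts_1d(y, gamma):
    n = len(y)
    B = np.zeros(n + 1)                  # B[r] = optimal cost of y[0:r]
    jump = np.zeros(n + 1, dtype=int)    # best segment start, for backtracking
    B[0] = -gamma                        # so the first segment carries no penalty
    for r in range(1, n + 1):
        best, arg = np.inf, 0
        s = ss = 0.0                     # running sum / sum of squares of y[l-1:r]
        for l in range(r, 0, -1):        # candidate last segment y[l-1:r]
            s += y[l - 1]
            ss += y[l - 1] ** 2
            m = r - l + 1
            d = ss - s * s / m           # within-segment squared error
            cost = B[l - 1] + gamma + d
            if cost < best:
                best, arg = cost, l - 1
        B[r], jump[r] = best, arg
    # Backtrack the segmentation and return the piecewise constant estimate
    u, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        u[l:r] = y[l:r].mean()
        r = l
    return u

y = np.concatenate([np.full(40, 1.0), np.full(30, 4.0), np.full(30, 2.0)])
y += np.random.default_rng(3).normal(0, 0.4, y.size)
print(np.unique(np.round(potts_1d(y, gamma=2.0), 2)))
```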
Coupling of damped and growing modes in unstable shear flow
Fraser, A. E.; Terry, P. W.; Zweibel, E. G.; ...
2017-06-14
Analysis of the saturation of the Kelvin-Helmholtz instability is undertaken to determine the extent to which the conjugate linearly stable mode plays a role. For a piecewise-continuous mean flow profile with constant shear in a fixed layer, it is shown that the stable mode is nonlinearly excited, providing an injection-scale sink of the fluctuation energy similar to what has been found for gyroradius-scale drift-wave turbulence. Quantitative evaluation of the contribution of the stable mode to the energy balance at the onset of saturation shows that nonlinear energy transfer to the stable mode is as significant as energy transfer to small scales in balancing energy injected into the spectrum by the instability. The effect of the stable mode on momentum transport is quantified by expressing the Reynolds stress in terms of stable and unstable mode amplitudes at saturation, from which it is found that the stable mode can produce a sizable reduction in the momentum flux.
Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong
2018-04-12
Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption of the TV model, the reconstructed images often suffer from over-smoothing of the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
Coupling of damped and growing modes in unstable shear flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fraser, A. E.; Terry, P. W.; Zweibel, E. G.
Analysis of the saturation of the Kelvin-Helmholtz instability is undertaken to determine the extent to which the conjugate linearly stable mode plays a role. For a piecewise-continuous mean flow profile with constant shear in a fixed layer, it is shown that the stable mode is nonlinearly excited, providing an injection-scale sink of the fluctuation energy similar to what has been found for gyroradius-scale drift-wave turbulence. Quantitative evaluation of the contribution of the stable mode to the energy balance at the onset of saturation shows that nonlinear energy transfer to the stable mode is as significant as energy transfer to small scales in balancing energy injected into the spectrum by the instability. The effect of the stable mode on momentum transport is quantified by expressing the Reynolds stress in terms of stable and unstable mode amplitudes at saturation, from which it is found that the stable mode can produce a sizable reduction in the momentum flux.
Instabilities in a staircase stratified shear flow
NASA Astrophysics Data System (ADS)
Ponetti, G.; Balmforth, N. J.; Eaves, T. S.
2018-01-01
We study stratified shear flow instability where the density profile takes the form of a staircase of interfaces separating uniform layers. Internal gravity waves riding on density interfaces can resonantly interact due to a background shear flow, resulting in the Taylor-Caulfield instability. The many steps of the density profile permit a multitude of interactions between different interfaces, and a rich variety of Taylor-Caulfield instabilities. We analyse the linear instability of a staircase with piecewise-constant density profile embedded in a background linear shear flow, locating all the unstable modes and identifying the strongest. The interaction between nearest-neighbour interfaces leads to the most unstable modes. The nonlinear dynamics of the instabilities are explored in the long-wavelength, weakly stratified limit (the defect approximation). Unstable modes on adjacent interfaces saturate by rolling up the intervening layer into a distinctive billow. These nonlinear structures coexist when stacked vertically and are bordered by the sharp density gradients that are the remnants of the steps of the original staircase. Horizontal averages remain layer-like.
Global dynamics for switching systems and their extensions by linear differential equations.
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-15
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
Central Limit Theorems for the Shrinking Target Problem
NASA Astrophysics Data System (ADS)
Haydn, Nicolai; Nicol, Matthew; Vaienti, Sandro; Zhang, Licheng
2013-12-01
Suppose B_i := B(p, r_i) are nested balls of radius r_i about a point p in a dynamical system (T, X, μ). The question of whether T^i x ∈ B_i infinitely often (i.o.) for μ-a.e. x is often called the shrinking target problem. In many dynamical settings it has been shown that if E_n := Σ_{i=1}^n μ(B_i) diverges then there is a quantitative rate of entry and (1/E_n) Σ_{j=1}^n 1_{B_j}(T^j x) → 1 for μ-a.e. x ∈ X. This is a self-norming type of strong law of large numbers. We establish self-norming central limit theorems (CLT) of the form (1/a_n) Σ_{j=1}^n [1_{B_j}(T^j x) − μ(B_j)] → N(0, 1) (in distribution) for a variety of hyperbolic and non-uniformly hyperbolic dynamical systems; the normalization constants satisfy a_n^2 ~ E_n. Dynamical systems to which our results apply include smooth expanding maps of the interval, Rychlik-type maps, Gibbs-Markov maps, rational maps and, in higher dimensions, piecewise expanding maps. For such central limit theorems the main difficulty is to prove that the non-stationary variance has a limit in probability.
A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems
NASA Astrophysics Data System (ADS)
Liu, Zuolin; Xu, Jian
2018-04-01
In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function in integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. For linear or nonlinear systems, the loss function can then be optimised with the traditional least-squares algorithm or its iterative counterpart, respectively. Such a method can be used to effectively identify parameters in linear and arbitrarily nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, the method still guarantees high accuracy in identifying linear and nonlinear parameters.
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated with the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of the initial conditions, which account for the infinite number of periodic orbits exhibited by these models, is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
NASA Astrophysics Data System (ADS)
Tan, Yimin; Lin, Kejian; Zu, Jean W.
2018-05-01
The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators owing to its unique properties. This paper proposes a generalized analytical model for linear generators. Slotted stator pole-shifting and implementation of the Halbach array are combined for the first time. Initially, the magnetization components of the Halbach array are determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution is derived employing specially treated boundary conditions. FEM analysis is conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM has been constructed to validate the model and further improved using piecewise springs to trigger full-range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piecewise springs can be employed to improve generator performance at low excitation frequency.
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
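The dynamic program described above can be illustrated on a much smaller scale. The sketch below is a simplified stand-in, not the MAPSlope algorithm itself: slopes are quantized to a few levels, each increment of the noisy data is treated as a Gaussian observation of the active slope, and a Viterbi-style recursion with a switching penalty recovers the slope sequence. All parameters are illustrative.

```python
# Simplified Viterbi-style slope estimation over quantized slope levels.
import numpy as np

def viterbi_slopes(y, slope_levels, dt=1.0, sigma=0.5, switch_cost=4.0):
    d = np.diff(y)                                   # increments carry the slope
    K, n = len(slope_levels), len(d)
    cost = ((d[0] - slope_levels * dt) ** 2) / (2 * sigma**2)
    back = np.zeros((n, K), dtype=int)
    for t in range(1, n):
        trans = cost[:, None] + switch_cost * (1 - np.eye(K))   # staying is free
        back[t] = np.argmin(trans, axis=0)
        emit = ((d[t] - slope_levels * dt) ** 2) / (2 * sigma**2)
        cost = trans[back[t], np.arange(K)] + emit
    path = np.empty(n, dtype=int)
    path[-1] = np.argmin(cost)
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return slope_levels[path]

rng = np.random.default_rng(4)
true = np.r_[np.full(50, 0.5), np.full(50, -1.0), np.full(50, 2.0)]
y = np.concatenate([[0.0], np.cumsum(true)]) + rng.normal(0, 0.5, 151)
est = viterbi_slopes(y, slope_levels=np.array([-1.0, 0.5, 2.0]))
print("estimated slopes around the first breakpoint:", est[45:55])
```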
Boys, C A; Robinson, W; Miller, B; Pflugrath, B; Baumgartner, L J; Navarro, A; Brown, R; Deng, Z
2016-05-01
A piecewise regression approach was used to objectively quantify barotrauma injury thresholds in two physoclistous species, Murray cod Maccullochella peelii and silver perch Bidyanus bidyanus, following simulated infrastructure passage in a barometric chamber. The probability of injuries such as swimbladder rupture, exophthalmia and haemorrhage, and emphysema in various organs increased as the ratio between the lowest exposure pressure and the acclimation pressure (ratio of pressure change, R(NE:A) ) reduced. The relationship was typically non-linear and piecewise regression was able to quantify thresholds in R(NE:A) that once exceeded resulted in a substantial increase in barotrauma injury. Thresholds differed among injury types and between species but by applying a multispecies precautionary principle, the maintenance of exposure pressures at river infrastructure above 70% of acclimation pressure (R(NE:A) of 0·7) should protect downstream migrating juveniles of these two physoclistous species sufficiently. These findings have important implications for determining the risk posed by current infrastructures and informing the design and operation of new ones. © 2016 The Fisheries Society of the British Isles.
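The threshold-finding idea above can be sketched generically: fit a continuous two-segment ("broken-stick") linear model by scanning candidate breakpoints and keeping the least-squares best. The data below are synthetic, not the barotrauma measurements, and the model form is a generic stand-in for the paper's piecewise regression.

```python
# Generic broken-stick (two-segment) regression with a breakpoint search.
import numpy as np

def broken_stick_fit(x, y, candidates):
    best = None
    for bp in candidates:
        # Basis: intercept, x, and the hinge max(0, x - bp); continuity is built in
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - bp)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, bp, beta)
    return best[1], best[2]            # breakpoint and coefficients

rng = np.random.default_rng(5)
ratio = rng.uniform(0.1, 1.0, 200)                     # pressure ratio R_NE:A
prob = np.where(ratio < 0.7, 0.9 - 1.2 * (ratio - 0.1), 0.18)
prob += rng.normal(0, 0.05, 200)                       # noisy injury probability
bp, beta = broken_stick_fit(ratio, prob, candidates=np.linspace(0.2, 0.9, 71))
print("estimated threshold ratio:", round(bp, 2))
```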
NASA Astrophysics Data System (ADS)
Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji
2015-06-01
We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, H.
In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process and the first entrance decomposition (taboo probability).
A projection method for coupling two-phase VOF and fluid structure interaction simulations
NASA Astrophysics Data System (ADS)
Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro
2018-02-01
The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling a FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field into the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back on the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Fang, E-mail: fliu@lsec.cc.ac.cn; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
Stability of barotropic vortex strip on a rotating sphere
Sohn, Sung-Ik; Kim, Sun-Chul
2018-01-01
We study the stability of a barotropic vortex strip on a rotating sphere, as a simple model of jet streams. The flow is approximated by a piecewise-continuous vorticity distribution by zonal bands of uniform vorticity. The linear stability analysis shows that the vortex strip becomes stable as the strip widens or the rotation speed increases. When the vorticity constants in the upper and the lower regions of the vortex strip have the same positive value, the inner flow region of the vortex strip becomes the most unstable. However, when the upper and the lower vorticity constants in the polar regions have different signs, a complex pattern of instability is found, depending on the wavenumber of perturbations, and interestingly, a boundary far away from the vortex strip can be unstable. We also compute the nonlinear evolution of the vortex strip on the rotating sphere and compare with the linear stability analysis. When the width of the vortex strip is small, we observe a good agreement in the growth rate of perturbation at an early time, and the eigenvector corresponding to the unstable eigenvalue coincides with the most unstable part of the flow. We demonstrate that a large structure of rolling-up vortex cores appears in the vortex strip after a long-time evolution. Furthermore, the geophysical relevance of the model to jet streams of Jupiter, Saturn and Earth is examined. PMID:29507524
Stability of barotropic vortex strip on a rotating sphere.
Sohn, Sung-Ik; Sakajo, Takashi; Kim, Sun-Chul
2018-02-01
We study the stability of a barotropic vortex strip on a rotating sphere, as a simple model of jet streams. The flow is approximated by a piecewise-continuous vorticity distribution by zonal bands of uniform vorticity. The linear stability analysis shows that the vortex strip becomes stable as the strip widens or the rotation speed increases. When the vorticity constants in the upper and the lower regions of the vortex strip have the same positive value, the inner flow region of the vortex strip becomes the most unstable. However, when the upper and the lower vorticity constants in the polar regions have different signs, a complex pattern of instability is found, depending on the wavenumber of perturbations, and interestingly, a boundary far away from the vortex strip can be unstable. We also compute the nonlinear evolution of the vortex strip on the rotating sphere and compare with the linear stability analysis. When the width of the vortex strip is small, we observe a good agreement in the growth rate of perturbation at an early time, and the eigenvector corresponding to the unstable eigenvalue coincides with the most unstable part of the flow. We demonstrate that a large structure of rolling-up vortex cores appears in the vortex strip after a long-time evolution. Furthermore, the geophysical relevance of the model to jet streams of Jupiter, Saturn and Earth is examined.
Identification of cascade water tanks using a PWARX model
NASA Astrophysics Data System (ADS)
Mattsson, Per; Zachariah, Dave; Stoica, Petre
2018-06-01
In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
NASA Astrophysics Data System (ADS)
Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.
2018-05-01
In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.
Trajectory Generation by Piecewise Spline Interpolation
1976-04-01
The piecewise cubic on each interval has the form L(x) = a0 + a1*x + a2*x^2 + a3*x^3 (Equation (21)), with the coefficients obtained from Equation (20) as a0 = f_i (22) and a1 = f'_i (23), and with a2 and a3 expressed through 3(f_{i+1} - f_i) and the endpoint slopes f'_i, f'_{i+1} (24). Later passages define the rotation from the reference frame to the vehicle-fixed frame, with the sign of the rotation chosen according to the sign of (g_zv0 - A_zv0) (Equation (64)), and list nomenclature: the velocity-frame axis directions (velocity frame from the output frame); a0, a1, a2, a3, the coefficients of the piecewise cubic polynomials; and [B], a tridiagonal matrix.
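As a concrete illustration of the piecewise cubic form above, the following sketch (with made-up waypoint times, positions, and slopes, not data from the report) builds Hermite-type coefficients a0..a3 on each interval so that values and first derivatives match at the knots, and then evaluates the resulting trajectory.

    import numpy as np

    def hermite_coeffs(f0, f1, d0, d1, h):
        """Cubic coefficients on [0, h] matching endpoint values and derivatives."""
        a0 = f0
        a1 = d0
        a2 = (3.0 * (f1 - f0) / h - 2.0 * d0 - d1) / h
        a3 = (2.0 * (f0 - f1) / h + d0 + d1) / (h * h)
        return a0, a1, a2, a3

    def eval_piecewise_cubic(t_knots, f, dfdt, t_query):
        """Evaluate the piecewise cubic trajectory at the query times."""
        out = np.empty_like(t_query, dtype=float)
        for j, t in enumerate(t_query):
            i = np.clip(np.searchsorted(t_knots, t) - 1, 0, len(t_knots) - 2)
            h = t_knots[i + 1] - t_knots[i]
            a0, a1, a2, a3 = hermite_coeffs(f[i], f[i + 1], dfdt[i], dfdt[i + 1], h)
            x = t - t_knots[i]
            out[j] = a0 + a1 * x + a2 * x**2 + a3 * x**3
        return out

    t_knots = np.array([0.0, 1.0, 2.0, 3.0])      # waypoint times (assumed)
    pos = np.array([0.0, 1.0, 0.5, 2.0])          # waypoint positions (assumed)
    vel = np.array([0.0, 0.5, -0.5, 0.0])         # waypoint velocities (assumed)
    print(eval_piecewise_cubic(t_knots, pos, vel, np.linspace(0, 3, 7)))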
Advanced control concepts. [for shuttle ascent vehicles
NASA Technical Reports Server (NTRS)
Sharp, J. B.; Coppey, J. M.
1973-01-01
The problems of excess control devices and insufficient trim control capability on shuttle ascent vehicles were investigated. The trim problem is solved at all time points of interest using Lagrangian multipliers and a Simplex based iterative algorithm developed as a result of the study. This algorithm has the capability to solve any bounded linear problem with physically realizable constraints, and to minimize any piecewise differentiable cost function. Both solution methods also automatically distribute the command torques to the control devices. It is shown that trim requirements are unrealizable if only the orbiter engines and the aerodynamic surfaces are used.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
First Instances of Generalized Expo-Rational Finite Elements on Triangulations
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre
2011-12-01
In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is a set of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.
On estimating the effects of clock instability with flicker noise characteristics
NASA Technical Reports Server (NTRS)
Wu, S. C.
1981-01-01
A scheme for flicker noise generation is given. The second approach is that of successive segmentation: A clock fluctuation is represented by 2N piecewise linear segments and then converted into a summation of N+1 triangular pulse train functions. The statistics of the clock instability are then formulated in terms of two sample variances at N+1 specified averaging times. The summation converges so rapidly that a value of N greater than 6 is seldom necessary. An application to radio interferometric geodesy shows excellent agreement between the two approaches. Limitations to and the relative merits of the two approaches are discussed.
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multidimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation induced plasma involved are given.
Effect of long-term antibiotic use on weight in adolescents with acne
Contopoulos-Ioannidis, Despina G.; Ley, Catherine; Wang, Wei; Ma, Ting; Olson, Clifford; Shi, Xiaoli; Luft, Harold S.; Hastie, Trevor; Parsonnet, Julie
2016-01-01
Objectives Antibiotics increase weight in farm animals and may cause weight gain in humans. We used electronic health records from a large primary care organization to determine the effect of antibiotics on weight and BMI in healthy adolescents with acne. Methods We performed a retrospective cohort study of adolescents with acne prescribed ≥4 weeks of oral antibiotics with weight measurements within 18 months pre-antibiotics and 12 months post-antibiotics. We compared within-individual changes in weight-for-age Z-scores (WAZs) and BMI-for-age Z-scores (BMIZs). We used: (i) paired t-tests to analyse changes between the last pre-antibiotics versus the first post-antibiotic measurements; (ii) piecewise-constant-mixed models to capture changes between mean measurements pre- versus post-antibiotics; (iii) piecewise-linear-mixed models to capture changes in trajectory slopes pre- versus post-antibiotics; and (iv) χ2 tests to compare proportions of adolescents with ≥0.2 Z-scores WAZ or BMIZ increase or decrease. Results Our cohort included 1012 adolescents with WAZs; 542 also had BMIZs. WAZs decreased post-antibiotics in all analyses [change between last WAZ pre-antibiotics versus first WAZ post-antibiotics = −0.041 Z-scores (P < 0.001); change between mean WAZ pre- versus post-antibiotics = −0.050 Z-scores (P < 0.001); change in WAZ trajectory slopes pre- versus post-antibiotics = −0.025 Z-scores/6 months (P = 0.002)]. More adolescents had a WAZ decrease post-antibiotics ≥0.2 Z-scores than an increase (26% versus 18%; P < 0.001). Trends were similar, though not statistically significant, for BMIZ changes. Conclusions Contrary to original expectations, long-term antibiotic use in healthy adolescents with acne was not associated with weight gain. This finding, which was consistent across all analyses, does not support a weight-promoting effect of antibiotics in adolescents. PMID:26782773
2013-01-01
Background Designs and analyses of clinical trials with a time-to-event outcome almost invariably rely on the hazard ratio to estimate the treatment effect and implicitly, therefore, on the proportional hazards assumption. However, the results of some recent trials indicate that there is no guarantee that the assumption will hold. Here, we describe the use of the restricted mean survival time as a possible alternative tool in the design and analysis of these trials. Methods The restricted mean is a measure of average survival from time 0 to a specified time point, and may be estimated as the area under the survival curve up to that point. We consider the design of such trials according to a wide range of possible survival distributions in the control and research arm(s). The distributions are conveniently defined as piecewise exponential distributions and can be specified through piecewise constant hazards and time-fixed or time-dependent hazard ratios. Such designs can embody proportional or non-proportional hazards of the treatment effect. Results We demonstrate the use of restricted mean survival time and a test of the difference in restricted means as an alternative measure of treatment effect. We support the approach through the results of simulation studies and in real examples from several cancer trials. We illustrate the required sample size under proportional and non-proportional hazards, also the significance level and power of the proposed test. Values are compared with those from the standard approach which utilizes the logrank test. Conclusions We conclude that the hazard ratio cannot be recommended as a general measure of the treatment effect in a randomized controlled trial, nor is it always appropriate when designing a trial. Restricted mean survival time may provide a practical way forward and deserves greater attention. PMID:24314264
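For intuition, the restricted mean survival time under a piecewise exponential distribution can be computed in closed form as the area under the survival curve. The short sketch below does this for illustrative hazards and cut points; all numerical values are assumptions, not taken from the trials discussed above.

    import numpy as np

    def rmst_piecewise_exponential(cuts, hazards, t_star):
        """Area under the piecewise-exponential survival curve on [0, t_star].

        cuts    : interval boundaries [0, t_1, ..., t_K] (the last may exceed t_star)
        hazards : constant hazard on each of the K intervals
        """
        edges = np.minimum(np.asarray(cuts, dtype=float), t_star)
        rmst, surv_at_start = 0.0, 1.0
        for k, lam in enumerate(hazards):
            width = edges[k + 1] - edges[k]
            if width <= 0:
                continue
            # integral of S(start) * exp(-lam * u) for u in [0, width]
            rmst += surv_at_start * (1.0 - np.exp(-lam * width)) / lam
            surv_at_start *= np.exp(-lam * width)
        return rmst

    cuts = [0.0, 12.0, 24.0, 60.0]        # months (assumed)
    control = rmst_piecewise_exponential(cuts, [0.04, 0.03, 0.02], t_star=36.0)
    treated = rmst_piecewise_exponential(cuts, [0.04, 0.02, 0.015], t_star=36.0)  # non-proportional effect
    print(control, treated, treated - control)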
Timing of continuous motor imagery: the two-thirds power law originates in trajectory planning.
Karklinsky, Matan; Flash, Tamar
2015-04-01
The two-thirds power law, v = γκ^(-1/3), expresses a robust local relationship between the geometrical and temporal aspects of human movement, represented by curvature κ and speed v, with a piecewise constant γ. This law is equivalent to moving at a constant equi-affine speed and thus constitutes an important example of motor invariance. Whether this kinematic regularity reflects central planning or peripheral biomechanical effects has been strongly debated. Motor imagery, i.e., forming mental images of a motor action, allows unique access to the temporal structure of motor planning. Earlier studies have shown that imagined discrete movements obey Fitts's law and their durations are well correlated with those of actual movements. Hence, it is natural to examine whether the temporal properties of continuous imagined movements comply with the two-thirds power law. A novel experimental paradigm for recording sparse imagery data from a continuous cyclic tracing task was developed. Using the likelihood ratio test, we concluded that for most subjects the distributions of the marked positions describing the imagined trajectory were significantly better explained by the two-thirds power law than by a constant Euclidean speed or by two other power law models. With nonlinear regression, the β parameter values in a generalized power law, v = γκ^(-β), were inferred from the marked position records. This resulted in highly variable yet mostly positive β values. Our results imply that imagined trajectories do follow the two-thirds power law. Our findings therefore support the conclusion that the coupling between velocity and curvature originates in centrally represented motion planning. Copyright © 2015 the American Physiological Society.
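A minimal numerical check of the generalized power law fit is sketched below. The assumptions are a synthetic elliptical trajectory and simple finite-difference estimates of speed and curvature, not the authors' recording or regression pipeline; because an ellipse in its standard trigonometric parameterization is traced at constant equi-affine speed, the fitted exponent should come out near 1/3.

    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
    x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)        # ellipse (assumed test curve)

    dt = t[1] - t[0]
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)

    v = np.hypot(dx, dy)                            # speed
    kappa = np.abs(dx * ddy - dy * ddx) / v**3      # curvature

    # least-squares fit of log v = log gamma - beta * log kappa
    A = np.column_stack([np.ones_like(kappa), -np.log(kappa)])
    coef, *_ = np.linalg.lstsq(A, np.log(v), rcond=None)
    log_gamma, beta = coef
    print(f"beta ~ {beta:.3f} (two-thirds power law predicts 1/3)")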
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Linh; Borghi, Giovanni; Ferretti, Andrea; Marzari, Nicola
The determination of spectral properties of the DNA and RNA nucleobases from first principles can provide theoretical interpretation for experimental data, but requires complex electronic-structure formulations that fall outside the domain of applicability of common approaches such as density-functional theory. In this work, we show that Koopmans-compliant functionals, constructed to enforce piecewise linearity in energy functionals with respect to fractional occupation, i.e., with respect to charged excitations, can predict not only frontier ionization potentials and electron affinities of the nucleobases with accuracy comparable or superior to that of many-body perturbation theory and high-accuracy quantum chemistry methods, but also molecular photoemission spectra that are in excellent agreement with experimental ultraviolet photoemission spectroscopy data. The results highlight the role of Koopmans-compliant functionals as accurate and inexpensive quasiparticle approximations to the spectral potential, which transform DFT into a novel dynamical formalism where electronic properties, and not only total energies, can be correctly accounted for.
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-01-01
In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented by a decentralized modeling approach and distributed observer design in future research. PMID:28353664
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
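The quadratic form of graph Laplacian regularization makes the denoising step a linear solve. The sketch below uses a simplified 1-D setup with an assumed chain graph and weights, not the authors' patch-based construction: it builds L = D - W and solves (I + mu*L) x = y.

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.concatenate([np.full(50, 0.0), np.full(50, 1.0)])  # piecewise constant signal
    y = clean + 0.15 * rng.standard_normal(clean.size)

    n = y.size
    W = np.zeros((n, n))
    sigma = 0.2                                   # weight bandwidth (assumed)
    for i in range(n - 1):
        # edge weight from the (noisy) intensity difference of neighbors
        w = np.exp(-((y[i] - y[i + 1]) ** 2) / (2 * sigma**2))
        W[i, i + 1] = W[i + 1, i] = w

    L = np.diag(W.sum(axis=1)) - W                # combinatorial graph Laplacian
    mu = 3.0                                      # regularization strength (assumed)
    x = np.linalg.solve(np.eye(n) + mu * L, y)

    print("noisy MSE   :", np.mean((y - clean) ** 2))
    print("denoised MSE:", np.mean((x - clean) ** 2))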
NASA Technical Reports Server (NTRS)
Kvernadze, George; Hagstrom, Thomas; Shapiro, Henry
1997-01-01
A key step for some methods dealing with the reconstruction of a function with jump discontinuities is the accurate approximation of the jumps and their locations. Various methods have been suggested in the literature to obtain this valuable information. In the present paper, we develop an algorithm based on identities which determine the jumps of a 2(pi)-periodic bounded not-too-highly oscillating function by the partial sums of its differentiated Fourier series. The algorithm enables one to approximate the locations of discontinuities and the magnitudes of jumps of a bounded function. We study the accuracy of approximation and establish asymptotic expansions for the approximations of a 2(pi)-periodic piecewise smooth function with one discontinuity. By an appropriate linear combination, obtained via derivatives of different order, we significantly improve the accuracy. Next, we use Richardson's extrapolation method to enhance the accuracy even more. For a function with multiple discontinuities we establish simple formulae which "eliminate" all discontinuities of the function but one. Then we treat the function as if it had one singularity following the method described above.
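The following small experiment illustrates the kind of identity the paper builds on, using the classical observation that (pi/N) times the N-th partial sum of the differentiated Fourier series concentrates at jump locations and approaches the jump magnitudes f(x+) - f(x-). The test function, truncation order, and FFT-based coefficients are assumptions for illustration, not the authors' algorithm or its acceleration steps.

    import numpy as np

    M = 4096
    x = 2.0 * np.pi * np.arange(M) / M
    f = np.where((x > 1.0) & (x < 4.0), 2.0, 0.0)   # jumps of +2 at x=1 and -2 at x=4

    # trigonometric Fourier coefficients via the FFT
    c = np.fft.rfft(f) / M
    a = 2.0 * c.real            # a_n
    b = -2.0 * c.imag           # b_n

    N = 200                     # truncation order of the differentiated series
    xs = np.linspace(0, 2 * np.pi, 2001)
    n = np.arange(1, N + 1)[:, None]
    S_prime = np.sum(n * (-a[1:N + 1, None] * np.sin(n * xs)
                          + b[1:N + 1, None] * np.cos(n * xs)), axis=0)
    jump_indicator = (np.pi / N) * S_prime

    for x0 in (1.0, 4.0):
        est = jump_indicator[np.argmin(np.abs(xs - x0))]
        print(f"estimated jump near x = {x0}: {est:.3f}")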
Chua's Equation was Proved to Be Chaotic in Two Years, Lorenz Equation in Thirty Six Years
NASA Astrophysics Data System (ADS)
Muthuswamy, Bharathwaj
2013-01-01
Although there are probably more publications on Chua's circuit than on any other chaotic circuit, a tutorial with a historical emphasis is still lacking. Hence the goal of this chapter is to provide such a tutorial. This chapter will prove useful for a novice who is looking to understand the basics behind chaotic circuits without too many technical details. The chapter also includes a cookbook approach to a rigorous proof of chaos in piecewise-linear systems. The proof is a summary of the original piecewise-linear proof of chaos in Chua's circuit. The chapter concludes with a discussion of circuits derived from Chua's circuit.
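For readers who want to experiment, a minimal simulation of the dimensionless Chua system with its three-segment piecewise-linear characteristic is sketched below; the parameter values are the commonly quoted double-scroll set and are assumptions here, not taken from the chapter.

    import numpy as np
    from scipy.integrate import solve_ivp

    alpha, beta = 15.6, 28.0
    m0, m1 = -8.0 / 7.0, -5.0 / 7.0

    def h(x):
        # three-segment piecewise-linear characteristic of Chua's diode
        return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

    def chua(t, state):
        x, y, z = state
        return [alpha * (y - x - h(x)), x - y + z, -beta * y]

    sol = solve_ivp(chua, (0.0, 100.0), [0.7, 0.0, 0.0], max_step=0.01)
    print("final state:", sol.y[:, -1])   # the trajectory wanders on the double-scroll attractor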
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions, and their stability, of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, therefore enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur, leading to the identification of parameter ranges in which chaotic response occurs.
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boys, Craig A.; Robinson, Wayne; Miller, Brett
2016-05-13
Barotrauma injury can occur when fish are exposed to rapid decompression during downstream passage through river infrastructure. A piecewise regression approach was used to objectively quantify barotrauma injury thresholds in two physoclistous species (Murray cod Maccullochella peelii and silver perch Bidyanus bidyanus) following simulated infrastructure passage in barometric chambers. The probability of injuries such as swim bladder rupture; exophthalmia; and haemorrhage and emphysema in various organs increased as the ratio between the lowest exposure pressure and the acclimation pressure (ratio of pressure change RPCE/A) fell. The relationship was typically non-linear and piecewise regression was able to quantify thresholds in RPCE/A that once exceeded resulted in a substantial increase in barotrauma injury. Thresholds differed among injury types and between species but by applying a multi-species precautionary principle, the maintenance of exposure pressures at river infrastructure above 70% of acclimation pressure (RPCE/A of 0.7) should sufficiently protect downstream migrating juveniles of these two physoclistous species. These findings have important implications for determining the risk posed by current infrastructures and informing the design and operation of new ones.
Design of efficient stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Majumder, D. K.; Thornton, W. A.
1976-01-01
A method to produce efficient piecewise uniform stiffened shells of revolution is presented. The approach uses a first order differential equation formulation for the shell prebuckling and buckling analyses and the necessary conditions for an optimum design are derived by a variational approach. A variety of local yielding and buckling constraints and the general buckling constraint are included in the design process. The local constraints are treated by means of an interior penalty function and the general buckling load is treated by means of an exterior penalty function. This allows the general buckling constraint to be included in the design process only when it is violated. The self-adjoint nature of the prebuckling and buckling formulations is used to reduce the computational effort. Results for four conical shells and one spherical shell are given.
Generation of three-dimensional delaunay meshes from weakly structured and inconsistent data
NASA Astrophysics Data System (ADS)
Garanzha, V. A.; Kudryavtseva, L. N.
2012-03-01
A method is proposed for the generation of three-dimensional tetrahedral meshes from incomplete, weakly structured, and inconsistent data describing a geometric model. The method is based on the construction of a piecewise smooth scalar function defining the body so that its boundary is the zero isosurface of the function. Such implicit description of three-dimensional domains can be defined analytically or can be constructed from a cloud of points, a set of cross sections, or a "soup" of individual vertices, edges, and faces. By applying Boolean operations over domains, simple primitives can be combined with reconstruction results to produce complex geometric models without resorting to specialized software. Sharp edges and conical vertices on the domain boundary are reproduced automatically without using special algorithms.
Feynman-Kac formula for stochastic hybrid systems.
Bressloff, Paul C
2017-01-01
We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
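A direct Monte Carlo sketch of the occupation-time functional for a one-dimensional velocity jump process is given below, with illustrative rates and time horizon, and a plain simulation rather than the Feynman-Kac/Chapman-Kolmogorov machinery of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    v, k, T, n_paths = 1.0, 2.0, 10.0, 20000      # speed, switching rate, horizon (assumed)

    occupation = np.zeros(n_paths)
    for i in range(n_paths):
        t, x = 0.0, 0.0
        sigma = rng.choice([-1.0, 1.0])           # initial velocity direction
        while t < T:
            dwell = min(rng.exponential(1.0 / k), T - t)   # time to the next switch
            x_end = x + sigma * v * dwell
            lo, hi = min(x, x_end), max(x, x_end)
            # accumulate time spent in x > 0 during this constant-velocity segment
            if lo >= 0.0:
                occupation[i] += dwell
            elif hi > 0.0:
                occupation[i] += dwell * hi / (hi - lo)    # fraction of the segment with x > 0
            t += dwell
            x = x_end
            sigma = -sigma

    print("mean occupation time of x > 0:", occupation.mean())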
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Wei; Huang, Guo H.
2012-06-15
Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It's the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
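The data "explosion" and Poisson fit can be sketched in a few lines. The example below uses simulated data and fixed effects only, via statsmodels; the cluster-level frailty would enter as an additional random intercept, which is omitted here. Each follow-up time is split over the hazard pieces, the event indicator is the Poisson response, and the log of time at risk is the offset.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, cuts = 500, np.array([0.0, 1.0, 2.0, 4.0])      # 3 hazard pieces (assumed)
    x = rng.binomial(1, 0.5, n)                        # a binary covariate
    time = rng.exponential(2.0 * np.exp(0.5 * x))      # true log-hazard-ratio = -0.5
    event = (time < 4.0).astype(int)
    time = np.minimum(time, 4.0)                       # administrative censoring

    rows = []
    for ti, di, xi in zip(time, event, x):
        for k in range(len(cuts) - 1):
            if ti <= cuts[k]:
                break
            exposure = min(ti, cuts[k + 1]) - cuts[k]          # time at risk in piece k
            died_here = int(di == 1 and ti <= cuts[k + 1])     # event falls in piece k
            rows.append((died_here, exposure, xi, k))

    y, expo, xv, piece = map(np.array, zip(*rows))
    X = np.column_stack([np.eye(3)[piece], xv])                # piece dummies + covariate
    fit = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(expo)).fit()
    print("estimated log-hazard-ratio for x:", fit.params[-1]) # should be near -0.5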
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, H; Menon, G; Sloboda, R
The purpose of this study was to investigate the accuracy of radiochromic film calibration procedures used in external beam radiotherapy when applied to I-125 brachytherapy sources delivering higher doses, and to determine any necessary modifications to achieve similar accuracy in absolute dose measurements. GafChromic EBT3 film was used to measure radiation doses upwards of 35 Gy from 6 MV, 75 kVp and (∼28 keV) I-125 photon sources. A custom phantom was used for the I-125 irradiations to obtain a larger film area with nearly constant dose to reduce the effects of film heterogeneities on the optical density (OD) measurements. RGB transmission images were obtained with an Epson 10000XL flatbed scanner, and calibration curves relating OD and dose using a rational function were determined for each colour channel and at each energy using a non-linear least-squares minimization method. Differences found between the 6 MV calibration curve and those for the lower energy sources are large enough that 6 MV beams should not be used to calibrate film for low-energy sources. However, differences between the 75 kVp and I-125 calibration curves were quite small, indicating that 75 kVp is a good choice. Compared with I-125 irradiation, this gives the advantages of lower type B uncertainties and markedly reduced irradiation time. To obtain high accuracy calibration for the dose range up to 35 Gy, two-segment piece-wise fitting was required. This yielded absolute dose measurement accuracy above 1 Gy of ∼2% for 75 kVp and ∼5% for I-125 seed exposures.
Changes in Clavicle Length and Maturation in Americans: 1840-1980.
Langley, Natalie R; Cridlin, Sandra
2016-01-01
Secular changes refer to short-term biological changes ostensibly due to environmental factors. Two well-documented secular trends in many populations are earlier age of menarche and increasing stature. This study synthesizes data on maximum clavicle length and fusion of the medial epiphysis in 1840-1980 American birth cohorts to provide a comprehensive assessment of developmental and morphological change in the clavicle. Clavicles from the Hamann-Todd Human Osteological Collection (n = 354), McKern and Stewart Korean War males (n = 341), Forensic Anthropology Data Bank (n = 1,239), and the McCormick Clavicle Collection (n = 1,137) were used in the analysis. Transition analysis was used to evaluate fusion of the medial epiphysis (scored as unfused, fusing, or fused). Several statistical treatments were used to assess fluctuations in maximum clavicle length. First, Durbin-Watson tests were used to evaluate autocorrelation, and a local regression (LOESS) was used to identify visual shifts in the regression slope. Next, piecewise regression was used to fit linear regression models before and after the estimated breakpoints. Multiple starting parameters were tested in the range determined to contain the breakpoint, and the model with the smallest mean squared error was chosen as the best fit. The parameters from the best-fit models were then used to derive the piecewise models, which were compared with the initial simple linear regression models to determine which model provided the best fit for the secular change data. The epiphyseal union data indicate a decline in the age at onset of fusion since the early twentieth century. Fusion commences approximately four years earlier in mid- to late twentieth-century birth cohorts than in late nineteenth- and early twentieth-century birth cohorts. However, fusion is completed at roughly the same age across cohorts. The most significant decline in age at onset of epiphyseal union appears to have occurred since the mid-twentieth century. LOESS plots show a breakpoint in the clavicle length data around the mid-twentieth century in both sexes, and piecewise regression models indicate a significant decrease in clavicle length in the American population after 1940. The piecewise model provides a slightly better fit than the simple linear model. Since the linear model's standard error is not substantially different from that of the piecewise model, an argument could be made to select the less complex linear model. However, we chose the piecewise model to detect changes in clavicle length that are overfitted with a linear model. The decrease in maximum clavicle length is in line with a documented narrowing of the American skeletal form, as shown by analyses of cranial and facial breadth and bi-iliac breadth of the pelvis. Environmental influences on skeletal form include increases in body mass index, health improvements, improved socioeconomic status, and elimination of infectious diseases. Secular changes in bony dimensions and skeletal maturation stipulate that medical and forensic standards used to deduce information about growth, health, and biological traits must be derived from modern populations.
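The breakpoint-search strategy described above can be condensed to a few lines: fit a continuous two-segment model for each candidate breakpoint and keep the one with the smallest mean squared error. The sketch below uses simulated year/length data as an assumption, not the collections analysed in the study.

    import numpy as np

    rng = np.random.default_rng(3)
    year = rng.uniform(1840, 1980, 400)
    true_break = 1940.0
    length = 150.0 + 0.01 * (year - 1840) - 0.05 * np.clip(year - true_break, 0, None)
    length += rng.normal(0.0, 0.8, year.size)          # measurement noise

    def fit_two_segment(x, y, bp):
        # continuous piecewise-linear basis: intercept, x, and hinge (x - bp)_+
        X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0.0, None)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        mse = np.mean((y - X @ coef) ** 2)
        return coef, mse

    candidates = np.arange(1860.0, 1961.0, 1.0)
    errors = [fit_two_segment(year, length, bp)[1] for bp in candidates]
    best_bp = candidates[int(np.argmin(errors))]
    print("estimated breakpoint:", best_bp)            # should land near 1940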
NASA Technical Reports Server (NTRS)
Richmond, J. H.
1974-01-01
Piecewise-sinusoidal expansion functions and Galerkin's method are employed to formulate a solution for an arbitrary thin-wire configuration in a homogeneous conducting medium. The analysis is performed in the real or complex frequency domain. In antenna problems, the solution determines the current distribution, impedance, radiation efficiency, gain and far-field patterns. In scattering problems, the solution determines the absorption cross section, scattering cross section and the polarization scattering matrix. The electromagnetic theory is presented for thin wires and the forward-scattering theorem is developed for an arbitrary target in a homogeneous conducting medium.
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is the power economic dispatch (ED) for each unit. For fixed feasible unit states, an accurate solution of ED is the more important for enhancing the efficiency of the solution to SCUC. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method respectively and based on linear programming, are proposed for solving ED via piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
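To make the piecewise linearization concrete, the toy dispatch below (assumed two-unit quadratic cost data, not the paper's test system or its two proposed methods) splits each unit's output range into segments of constant incremental cost and solves the resulting linear program.

    import numpy as np
    from scipy.optimize import linprog

    # quadratic costs C(p) = a*p^2 + b*p + c for two units (assumed coefficients)
    units = [dict(a=0.004, b=8.0, pmin=50.0, pmax=300.0),
             dict(a=0.007, b=6.5, pmin=40.0, pmax=250.0)]
    demand, n_seg = 420.0, 4

    cost, bounds = [], []
    for u in units:
        edges = np.linspace(u["pmin"], u["pmax"], n_seg + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            slope = u["b"] + u["a"] * (lo + hi)     # incremental cost of the segment (chord slope)
            cost.append(slope)
            bounds.append((0.0, hi - lo))

    # equality constraint: minimum outputs plus all segment variables meet demand
    A_eq = [[1.0] * len(cost)]
    b_eq = [demand - sum(u["pmin"] for u in units)]
    res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    seg = res.x
    print("unit outputs:",
          [u["pmin"] + seg[i * n_seg:(i + 1) * n_seg].sum() for i, u in enumerate(units)])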
Self-sustained peristaltic waves: Explicit asymptotic solutions
NASA Astrophysics Data System (ADS)
Dudchenko, O. A.; Guria, G. Th.
2012-02-01
A simple nonlinear model for the coupled problem of fluid flow and contractile wall deformation is proposed to describe peristalsis. In the context of the model the ability of a transporting system to perform autonomous peristaltic pumping is interpreted as the ability to propagate sustained waves of wall deformation. Piecewise-linear approximations of nonlinear functions are used to analytically demonstrate the existence of traveling-wave solutions. Explicit formulas are derived which relate the speed of self-sustained peristaltic waves to the rheological properties of the transporting vessel and the transported fluid. The results may contribute to the development of diagnostic and therapeutic procedures for cases of peristaltic motility disorders.
Wave reflection in a reaction-diffusion system: breathing patterns and attenuation of the echo.
Tsyganov, M A; Ivanitsky, G R; Zemskov, E P
2014-05-01
Formation and interaction of the one-dimensional excitation waves in a reaction-diffusion system with the piecewise linear reaction functions of the Tonnelier-Gerstner type are studied. We show that there exists a parameter region where the established regime of wave propagation depends on initial conditions. Wave phenomena with a complex behavior are found: (i) the reflection of waves at a growing distance (the remote reflection) upon their collision with each other or with no-flux boundaries and (ii) the periodic transformation of waves with the jumping from one regime of wave propagation to another (the periodic trigger wave).
Gradient-based controllers for timed continuous Petri nets
NASA Astrophysics Data System (ADS)
Lefebvre, Dimitri; Leclercq, Edouard; Druaux, Fabrice; Thomas, Philippe
2015-07-01
This paper is about control design for timed continuous Petri nets that are described as piecewise affine systems. In this context, the marking vector is considered as the state space vector, weighted markings of place subsets are defined as the model outputs, and the model inputs correspond to multiplicative control actions that slow down the firing rate of some controllable transitions. Structural and functional sensitivity of the outputs with respect to the inputs are discussed in terms of Petri nets. Then, gradient-based controllers (GBC) are developed in order to adapt the control actions of the controllable transitions according to desired trajectories of the outputs.
Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.
Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E
2018-03-01
Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
Optimization and universality of Brownian search in a basic model of quenched heterogeneous media
NASA Astrophysics Data System (ADS)
Godec, Aljaž; Metzler, Ralf
2015-05-01
The kinetics of a variety of transport-controlled processes can be reduced to the problem of determining the mean time needed to arrive at a given location for the first time, the so-called mean first-passage time (MFPT) problem. The occurrence of occasional large jumps or intermittent patterns combining various types of motion are known to outperform the standard random walk with respect to the MFPT, by reducing oversampling of space. Here we show that a regular but spatially heterogeneous random walk can significantly and universally enhance the search in any spatial dimension. In a generic minimal model we consider a spherically symmetric system comprising two concentric regions with piecewise constant diffusivity. The MFPT is analyzed under the constraint of conserved average dynamics, that is, the spatially averaged diffusivity is kept constant. Our analytical calculations and extensive numerical simulations demonstrate the existence of an optimal heterogeneity minimizing the MFPT to the target. We prove that the MFPT for a random walk is completely dominated by what we term direct trajectories towards the target and reveal a remarkable universality of the spatially heterogeneous search with respect to target size and system dimensionality. In contrast to intermittent strategies, which are most profitable in low spatial dimensions, the spatially inhomogeneous search performs best in higher dimensions. Discussing our results alongside recent experiments on single-particle tracking in living cells, we argue that the observed spatial heterogeneity may be beneficial for cellular signaling processes.
A multidimensional anisotropic strength criterion based on Kelvin modes
NASA Astrophysics Data System (ADS)
Arramon, Yves Pierre
A new theory for the prediction of multiaxial strength of anisotropic elastic materials was proposed by Biegler and Mehrabadi (1993). This theory is based on the premise that the total elastic strain energy of an anisotropic material subjected to multiaxial stress can be decomposed into dilatational and deviatoric modes. A multidimensional strength criterion may thus be formulated by postulating that the failure would occur when the energy stored in one of these modes has reached a critical value. However, the logic employed by these authors to formulate a failure criterion based on this theory could not be extended to multiaxial stress. In this thesis, an alternate criterion is presented which redresses the biaxial restriction by reformulating the surfaces of constant modal energy as surfaces of constant eigenstress magnitude. The resulting failure envelope, in a multidimensional stress space, is piecewise smooth. Each facet of the envelope is expected to represent the locus of failure data by a particular Kelvin mode. It is further shown that the Kelvin mode theory alone provides an incomplete description of the failure of some materials, but that this weakness can be addressed by the introduction of a set of complementary modes. A revised theory which combines both Kelvin and complementary modes is thus proposed and applied to seven example materials: an isotropic concrete, tetragonal paperboard, two orthotropic softwoods, two orthotropic hardwoods and an orthotropic cortical bone. The resulting failure envelopes for these examples were plotted and, with the exception of concrete, shown to produce intuitively correct failure predictions.
NASA Astrophysics Data System (ADS)
Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.
2017-09-01
This paper introduces a novel global patch matching method that focuses on how to remove fronto-parallel bias and obtain continuous smooth surfaces, assuming that the scenes covered by stereos are piecewise continuous. Firstly, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function, which consists of a data term and a smoothness term, is built on the patches. The data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term is built by combining connectivity constraints and coplanarity constraints. Finally, the global energy function is obtained by combining the data term and the smoothness term. We rewrite the global energy function as a quadratic matrix function and use least-squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereo pairs of the Middlebury benchmark show that the proposed method can remove fronto-parallel bias effectively and produce continuous smooth surfaces.
Bilinear effect in complex systems
NASA Astrophysics Data System (ADS)
Lam, Lui; Bellavia, David C.; Han, Xiao-Pu; Alston Liu, Chih-Hui; Shu, Chang-Qing; Wei, Zhengjin; Zhou, Tao; Zhu, Jichen
2010-09-01
The distribution of the lifetime of Chinese dynasties (as well as that of the British Isles and Japan) in a linear Zipf plot is found to consist of two straight lines intersecting at a transition point. This two-section piecewise-linear distribution is different from the power law or the stretched exponent distribution, and is called the Bilinear Effect for short. With assumptions mimicking the organization of ancient Chinese regimes, a 3-layer network model is constructed. Numerical results of this model show the bilinear effect, providing a plausible explanation of the historical data. The bilinear effect in two other social systems is presented, indicating that such a piecewise-linear effect is widespread in social systems.
Limit cycles in piecewise-affine gene network models with multiple interaction loops
NASA Astrophysics Data System (ADS)
Farcot, Etienne; Gouzé, Jean-Luc
2010-01-01
In this article, we consider piecewise affine differential equations modelling gene networks. We work with arbitrary decay rates, and under a local hypothesis expressed as an alignment condition of successive focal points. The interaction graph of the system may be rather complex (multiple intricate loops of any sign, multiple thresholds, etc.). Our main result is an alternative theorem showing that if a sequence of regions is periodically visited by trajectories, then under our hypotheses, there exists either a unique stable periodic solution, or the origin attracts all trajectories in this sequence of regions. This result greatly extends our previous work on a single negative feedback loop. We give several examples and simulations illustrating different cases.
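A minimal simulation of such a piecewise-affine (Glass-type) network is sketched below for a two-gene mutual-repression loop; the production rates, decay rates, and thresholds are illustrative assumptions, and within each rectangular region the affine flow drives the state toward that region's focal point.

    import numpy as np

    k1, k2 = 2.0, 2.5          # production rates (assumed)
    g1, g2 = 1.0, 1.2          # decay rates (assumed)
    th1, th2 = 1.0, 1.0        # interaction thresholds (assumed)

    def step(x, theta):
        return 1.0 if x < theta else 0.0     # repression: production only below the threshold

    def rhs(x):
        x1, x2 = x
        return np.array([k1 * step(x2, th2) - g1 * x1,
                         k2 * step(x1, th1) - g2 * x2])

    # explicit Euler integration; inside each region the flow is affine and heads
    # toward that region's focal point (k_i * s_i / g_i)
    x, dt = np.array([0.2, 0.3]), 1e-3
    for _ in range(20000):
        x = x + dt * rhs(x)
    print("final state:", x)    # settles near one of the two stable steady states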
An updated Lagrangian discontinuous Galerkin hydrodynamic method for gas dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Tong; Shashkov, Mikhail Jurievich; Morgan, Nathaniel Ray
Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for gas dynamics. The new method evolves conserved unknowns in the current configuration, which obviates the Jacobi matrix that maps the element in a reference coordinate system or the initial coordinate system to the current configuration. The density, momentum, and total energy (ρ, ρu, E) are approximated with conservative higher-order Taylor expansions over the element and are limited toward a piecewise constant field near discontinuities using a limiter. Two new limiting methods are presented for enforcing the bounds on the primitive variables of density, velocity, and specific internal energy (ρ, u, e). The nodal velocity, and the corresponding forces, are calculated by solving an approximate Riemann problem at the element nodes. An explicit second-order method is used to temporally advance the solution. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. 1D Cartesian coordinates test problem results are presented to demonstrate the accuracy and convergence order of the new DG method with the new limiters.
Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy
2013-11-01
We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses and comparison with a Potts prior-based approach and our previous contribution on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, furthermore allowing control of the trade-off. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process up to an order of magnitude without compromising the quality of the result.
A Critical Study of Agglomerated Multigrid Methods for Diffusion
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2011-01-01
Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
A Critical Study of Agglomerated Multigrid Methods for Diffusion
NASA Technical Reports Server (NTRS)
Thomas, James L.; Nishikawa, Hiroaki; Diskin, Boris
2009-01-01
Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and highly stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Actual cycle results are verified using quantitative analysis methods in which parts of the cycle are replaced by their idealized counterparts.
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
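The piecewise-constant character of the estimator can be seen in a stripped-down sketch (a simplification with assumed data, omitting the streaming update, attribute-range estimation, and dual node profiles of RS-Forest): each fully randomized space tree partitions the domain into boxes, the density at a point is the leaf's sample fraction divided by its volume, and the forest averages these estimates.

    import numpy as np

    rng = np.random.default_rng(4)

    def build_tree(lo, hi, depth):
        if depth == 0:
            return ("leaf", lo, hi)
        dim = rng.integers(len(lo))
        cut = rng.uniform(lo[dim], hi[dim])            # random split inside the box
        left_hi, right_lo = hi.copy(), lo.copy()
        left_hi[dim], right_lo[dim] = cut, cut
        return ("node", dim, cut,
                build_tree(lo, left_hi, depth - 1),
                build_tree(right_lo, hi, depth - 1))

    def leaf_of(tree, x):
        while tree[0] == "node":
            _, dim, cut, left, right = tree
            tree = left if x[dim] < cut else right
        return tree

    def density(trees, data, x):
        est = []
        for tree in trees:
            _, lo, hi = leaf_of(tree, x)
            inside = np.all((data >= lo) & (data < hi), axis=1)
            est.append(inside.mean() / np.prod(hi - lo))   # piecewise constant on the leaf
        return float(np.mean(est))

    data = rng.beta(2.0, 5.0, size=(2000, 2))          # skewed 2-D sample (assumed)
    trees = [build_tree(np.zeros(2), np.ones(2), depth=6) for _ in range(25)]
    print("density near the mode:", density(trees, data, np.array([0.2, 0.2])))
    print("density in the tail  :", density(trees, data, np.array([0.9, 0.9])))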
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies on linear shear instabilities as well as different kinds of wave interactions often use simple velocity and/or density profiles (e.g. constant, piecewise) for obtaining good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model to obtain a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using a vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general, and has allowed us to simulate diverse problems that can be essentially reduced to the minimal system with interacting waves, e.g. spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g. wave breaking features like cusp formation and roll-ups) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN
Nguyen, Dang-Manh; Peters, Jörg
2017-01-01
Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute. PMID:29081643
Fast mix table construction for material discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S. R.
2013-07-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
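A minimal sketch of the mix-table idea (not the ADVANTG implementation): mixtures are deduplicated by hashing their rounded volume-fraction compositions, so each voxel costs roughly one dictionary lookup. The function name, tolerance, and data below are illustrative.

```python
def build_mix_table(voxel_fractions, tol=1e-6):
    """Deduplicate per-voxel material mixtures into a sparse mix table.

    voxel_fractions: one dict {material_id: volume_fraction} per voxel.
    Returns (mix_table, voxel_to_mix), where mix_table[i] is the i-th unique
    mixture and voxel_to_mix[v] is the mixture index referenced by voxel v.
    """
    mix_table = []      # unique mixtures, in order of first appearance
    lookup = {}         # hashable composition -> mixture index
    voxel_to_mix = []
    for fractions in voxel_fractions:
        # Round the fractions so nearly identical compositions share one key.
        key = tuple(sorted((m, round(f / tol) * tol)
                           for m, f in fractions.items() if f > 0.0))
        idx = lookup.get(key)
        if idx is None:
            idx = len(mix_table)
            lookup[key] = idx
            mix_table.append(dict(key))
        voxel_to_mix.append(idx)
    return mix_table, voxel_to_mix

# Three voxels, two of which homogenize to the same mixture.
voxels = [{1: 1.0}, {1: 0.25, 2: 0.75}, {2: 0.75, 1: 0.25}]
table, index = build_mix_table(voxels)
print(len(table), index)   # 2 unique mixtures, index == [0, 1, 1]
```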
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.
2015-01-01
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
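A stripped-down sketch of the piecewise constant density estimate behind this kind of method (it omits the paper's attribute-range estimation and dual node profiles; the structure and parameters are illustrative): each fully randomized tree partitions a bounded attribute space, and an instance is scored by the leaf count divided by the leaf volume, averaged over the trees.

```python
import random

class RSTree:
    """A fully randomized space tree over a bounded box [lo, hi]."""
    def __init__(self, lo, hi, depth=0, max_depth=6):
        self.lo, self.hi = list(lo), list(hi)
        self.count = 0
        self.left = self.right = None
        if depth < max_depth:
            self.dim = random.randrange(len(lo))
            self.cut = random.uniform(lo[self.dim], hi[self.dim])
            left_hi, right_lo = list(hi), list(lo)
            left_hi[self.dim] = right_lo[self.dim] = self.cut
            self.left = RSTree(lo, left_hi, depth + 1, max_depth)
            self.right = RSTree(right_lo, hi, depth + 1, max_depth)

    def insert(self, x):
        node = self
        while node is not None:
            node.count += 1
            if node.left is None:
                break
            node = node.left if x[node.dim] <= node.cut else node.right

    def density(self, x, n_total):
        node = self
        while node.left is not None:
            node = node.left if x[node.dim] <= node.cut else node.right
        volume = 1.0
        for a, b in zip(node.lo, node.hi):
            volume *= (b - a)
        return node.count / (n_total * volume)

random.seed(0)
forest = [RSTree([0.0, 0.0], [1.0, 1.0]) for _ in range(25)]
stream = [[random.gauss(0.5, 0.05), random.gauss(0.5, 0.05)] for _ in range(500)]
for x in stream:
    for tree in forest:
        tree.insert(x)

def score(x):   # lower density -> more anomalous
    return sum(t.density(x, len(stream)) for t in forest) / len(forest)

print(score([0.5, 0.5]) > score([0.95, 0.05]))   # an inlier should be denser than an outlier
```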
NASA Astrophysics Data System (ADS)
Bürger, Raimund; Kumar, Sarvesh; Ruiz-Baier, Ricardo
2015-10-01
The sedimentation-consolidation and flow processes of a mixture of small particles dispersed in a viscous fluid at low Reynolds numbers can be described by a nonlinear transport equation for the solids concentration coupled with the Stokes problem written in terms of the mixture flow velocity and the pressure field. Here both the viscosity and the forcing term depend on the local solids concentration. A semi-discrete discontinuous finite volume element (DFVE) scheme is proposed for this model. The numerical method is constructed on a baseline finite element family of linear discontinuous elements for the approximation of velocity components and concentration field, whereas the pressure is approximated by piecewise constant elements. The unique solvability of both the nonlinear continuous problem and the semi-discrete DFVE scheme is discussed, and optimal convergence estimates in several spatial norms are derived. Properties of the model and the predicted space accuracy of the proposed formulation are illustrated by detailed numerical examples, including flows under gravity with changing direction, a secondary settling tank in an axisymmetric setting, and batch sedimentation in a tilted cylindrical vessel.
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Harris, Charles E.
1990-01-01
A mathematical model based on the Euler-Bernoulli beam theory is proposed for predicting the effective Young's moduli of piecewise isotropic composite laminates with local ply curvatures in the main load-carrying layers. Strains in corrugated layers, in-phase layers, and out-of-phase layers are predicted for various geometries and material configurations by treating the matrix layers as elastic foundations with different spring constants. The effective Young's moduli measured from corrugated aluminum specimens and aluminum/epoxy specimens with in-phase and out-of-phase wavy patterns coincide very well with the model predictions. Moiré fringe analyses of an in-phase specimen and an out-of-phase specimen are also presented, confirming the main assumption of the model related to the elastic constraint due to the matrix layers. The present model is also compared with the experimental results and other models, including the microbuckling models, published in the literature. The results of the present study show that even a very small-scale local ply curvature produces a noticeable effect on the mechanical constitutive behavior of a laminated composite.
Mahmoudzadeh, Batoul; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars
2014-08-01
A model is developed to describe solute transport and retention in fractured rocks. It accounts for advection along the fracture, molecular diffusion from the fracture to the rock matrix composed of several geological layers, adsorption on the fracture surface, adsorption in the rock matrix layers, and radioactive decay-chains. The analytical solution, obtained for the Laplace-transformed concentration at the outlet of the flowing channel, can conveniently be transformed back to the time domain by the use of the de Hoog algorithm. This allows one to readily include it in a fracture network model or a channel network model to predict nuclide transport through channels in heterogeneous fractured media consisting of an arbitrary number of rock units with piecewise constant properties. More importantly, the simulations made in this study indicate that it is necessary to account for decay-chains, and for a rock matrix comprising at least two different geological layers if justified, in the safety and performance assessment of repositories for spent nuclear fuel. Copyright © 2014 Elsevier B.V. All rights reserved.
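As a rough illustration of the back-transformation step, here is a sketch using the de Hoog method as exposed by mpmath's invertlaplace (available in recent mpmath releases). The Laplace-domain function below is a generic advection plus matrix-diffusion type expression with made-up constants, not the paper's actual solution.

```python
from mpmath import mp, invertlaplace, exp, sqrt

mp.dps = 25   # working precision for the numerical inversion

a, b = 1.0, 0.5   # placeholder advection-delay and matrix-diffusion parameters
C_bar = lambda s: exp(-a * s - b * sqrt(s)) / s   # outlet concentration in Laplace space

for t in (0.5, 1.0, 2.0, 5.0):
    print(t, invertlaplace(C_bar, t, method='dehoog'))
```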
An updated Lagrangian discontinuous Galerkin hydrodynamic method for gas dynamics
Wu, Tong; Shashkov, Mikhail Jurievich; Morgan, Nathaniel Ray; ...
2018-04-09
Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for gas dynamics. The new method evolves conserved unknowns in the current configuration, which obviates the Jacobi matrix that maps the element in a reference coordinate system or the initial coordinate system to the current configuration. The density, momentum, and total energy (ρ, ρu, E) are approximated with conservative higher-order Taylor expansions over the element and are limited toward a piecewise constant field near discontinuities using a limiter. Two new limiting methods are presented for enforcing the bounds on the primitive variables of density, velocity, and specific internal energy (ρ, u, e). The nodal velocity, and the corresponding forces, are calculated by solving an approximate Riemann problem at the element nodes. An explicit second-order method is used to temporally advance the solution. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. Results for test problems in 1D Cartesian coordinates are presented to demonstrate the accuracy and convergence order of the new DG method with the new limiters.
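The limiting idea can be illustrated with a simple 1D minmod-style slope limiter (a generic sketch, not the paper's two new limiters for the primitive variables): near a discontinuity the reconstructed slope is driven to zero, i.e. back to a piecewise constant field.

```python
import numpy as np

def limited_slopes(u_bar, dx):
    """Limit per-cell slopes of a 1D linear (Taylor) reconstruction."""
    fwd = np.diff(u_bar, append=u_bar[-1]) / dx    # forward differences
    bwd = np.diff(u_bar, prepend=u_bar[0]) / dx    # backward differences
    ctr = 0.5 * (fwd + bwd)                        # unlimited centred slope
    # minmod: keep the smallest-magnitude slope, or zero if the signs disagree.
    same_sign = np.sign(fwd) == np.sign(bwd)
    mag = np.minimum.reduce([np.abs(fwd), np.abs(bwd), np.abs(ctr)])
    return np.where(same_sign, np.sign(ctr) * mag, 0.0)

u = np.concatenate([np.zeros(10), np.ones(10)])    # cell averages with one jump
print(limited_slopes(u, dx=1.0).max())             # 0.0: flattened to piecewise constant
```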
NASA Astrophysics Data System (ADS)
Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian
2015-12-01
Total variation (TV) regularization has proven to be a popular and effective model for image restoration because of its edge-preserving ability. However, since TV favors a piecewise constant solution, the results in flat regions of the image easily exhibit "staircase effects" and the amplitude of edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot adapt to the local spatial information of the image. In this paper, we propose a novel scatter-matrix eigenvalue-based TV (SMETV) regularization with a blind image restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detailed information. Moreover, it becomes more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to those of most methods in both visual image quality and quantitative measures.
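A hedged illustration of the staircase effect with a spatially constant regularization weight, using scikit-image's Chambolle TV denoiser as a stand-in solver (this is not the proposed SMETV method): a strong TV weight turns a smooth ramp into near-piecewise-constant plateaus.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 1.0, 128), (64, 1))       # smooth gradient image
noisy = ramp + 0.05 * rng.standard_normal(ramp.shape)

denoised = denoise_tv_chambolle(noisy, weight=0.2)         # strong, non-adaptive weight

# Count flat runs along one row: TV flattens the ramp into plateaus ("staircase").
row = denoised[32]
print(np.count_nonzero(np.abs(np.diff(row)) < 1e-4), "of", row.size - 1, "gradients are ~0")
```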
Electromagnetic analysis of arbitrarily shaped pinched carpets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupont, Guillaume; Guenneau, Sebastien; Enoch, Stefan
2010-09-15
We derive the expressions for the anisotropic heterogeneous tensors of permittivity and permeability associated with two-dimensional and three-dimensional carpets of an arbitrary shape. In the former case, we map a segment onto smooth curves, whereas in the latter case we map an arbitrary region of the plane onto smooth surfaces. Importantly, these carpets display no singularity of the permittivity and permeability tensor components. Moreover, a reduced set of parameters leads to nonmagnetic two-dimensional carpets in p polarization (i.e., for a magnetic field orthogonal to the plane containing the carpet). Such an arbitrarily shaped carpet is shown to work over a finite bandwidth when it is approximated by a checkerboard with 190 homogeneous cells of piecewise constant anisotropic permittivity. We finally perform some finite element computations in the full vector three-dimensional case for a plane wave in normal incidence and a Gaussian beam in oblique incidence. The latter requires perfectly matched layers set in a rotated coordinate axis, which exemplifies the role played by geometric transforms in computational electromagnetism.
SU-F-18C-14: Hessian-Based Norm Penalty for Weighted Least-Square CBCT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, T; Sun, N; Tan, S
Purpose: To develop a Hessian-based norm penalty for cone-beam CT (CBCT) reconstruction that has a similar ability to suppress noise as the total variation (TV) penalty while avoiding the staircase effect and better preserving low-contrast objects. Methods: We extended the TV penalty to a Hessian-based norm penalty based on the Frobenius norm of the Hessian matrix of an image for CBCT reconstruction. The objective function was constructed using the penalized weighted least-square (PWLS) principle. An effective algorithm was developed to minimize the objective function using a majorization-minimization (MM) approach. We evaluated and compared the proposed penalty with the TV penalty on a CatPhan 600 phantom and an anthropomorphic head phantom, each acquired at a low-dose protocol (10mA/10ms) and a high-dose protocol (80mA/12ms). For both penalties, the contrast-to-noise ratio (CNR) in four low-contrast regions-of-interest (ROIs) and the full-width-at-half-maximum (FWHM) of two point-like objects in reconstructed images were calculated and compared. Results: In the experiment with the CatPhan 600 phantom, the Hessian-based norm penalty has slightly higher CNRs and approximately equivalent FWHM values compared with the TV penalty. In the experiment with the anthropomorphic head phantom at the low-dose protocol, the TV penalty result has several artificial piecewise constant areas, known as the staircase effect, while with the Hessian-based norm penalty the image appears smoother and more similar to the FDK result obtained with the high-dose protocol. Conclusion: The proposed Hessian-based norm penalty has a similar performance in suppressing noise to the TV penalty, but has a potential advantage in suppressing the staircase effect and preserving low-contrast objects. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086.
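A small numpy sketch (finite differences, illustrative only) of the two penalty terms being compared: the TV penalty integrates the gradient magnitude, while the Hessian penalty integrates the Frobenius norm of the second derivatives, so a linear ramp is penalized by TV but costs the Hessian penalty essentially nothing.

```python
import numpy as np

def tv_penalty(img):
    gy, gx = np.gradient(img)
    return np.sum(np.sqrt(gx**2 + gy**2))

def hessian_frobenius_penalty(img):
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return np.sum(np.sqrt(gxx**2 + gyy**2 + gxy**2 + gyx**2))

# A linear ramp has constant gradient but zero curvature, so it is penalized by
# TV yet is essentially "free" for the Hessian penalty; this is one reason the
# latter avoids staircase artifacts in slowly varying regions.
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(tv_penalty(ramp), hessian_frobenius_penalty(ramp))
```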
Viscosity stratification and the aspect ratio of convection rolls
NASA Astrophysics Data System (ADS)
Morris, S. J. S.
2005-11-01
To clarify a mechanism by which the earth's low-viscosity layer may increase the wavelength of mantle convection cells, we analyse the clockwise isothermal cellular motion driven by a uniform shear stress of magnitude τ applied at each end of a rectangle of height 2D and length L. The viscosity μ is a given piecewise-constant function of depth; within a low-viscosity channel of thickness d located at the top of the layer, μ = mμ1; elsewhere, within the 'core', μ = μ1. We show that in the double limit d/D → 0, m → 0, this two-layer flow is equivalent to one in a single layer of viscosity μ1 with a new boundary condition at its top representing the interaction of the channel and core flows. Let x = x*/L, y = y*/D and ψ = μ1ψ*/(τD²). Then the stream function ψ for the core motion satisfies the boundary-value problem ψ_yyyy + 2α²ψ_xxyy + α⁴ψ_xxxx = 0; at |x| = 1, ψ = 0 and α²ψ_xx = -1; at y = 0, ψ = 0 = ψ_yy; at y = 1, ψ_yy - α²ψ_xx = 0 and ψ_yyy + 3α²ψ_yxx = 3εψ. Here α = D/L and ε = mD³/d³. We find that for ε → 0, the motion has two horizontal scales, namely D and L1 = ε^(-1/2)D. If the rectangle length L ~ L1, fluid sinks at one end and rises at the other; those end flows occur on the scale D, and are connected by a long-wave flow on the scale L1. The cellular motion is closed within the low-viscosity layer. We have extended this method to treat convection rolls in a fluid of infinite Prandtl number. Our predicted heat flows agree well with those found in numerical simulations by Lenardic, Richards & Busse et al. (2005) (J. Geophys. Res., to appear).
Chaotic dynamics and diffusion in a piecewise linear equation
NASA Astrophysics Data System (ADS)
Shahrear, Pabel; Glass, Leon; Edwards, Rod
2015-03-01
Genetic interactions are often modeled by logical networks in which time is discrete and all gene activity states update simultaneously. However, there is no synchronizing clock in organisms. An alternative model assumes that the logical network is preserved and plays a key role in driving the dynamics in piecewise nonlinear differential equations. We examine dynamics in a particular 4-dimensional equation of this class. In the equation, two of the variables form a negative feedback loop that drives a second negative feedback loop. By modifying the original equations by eliminating exponential decay, we generate a modified system that is amenable to detailed analysis. In the modified system, we can determine in detail the Poincaré (return) map on a cross section to the flow. By analyzing the eigenvalues of the map for the different trajectories, we are able to show that except for a set of measure 0, the flow must necessarily have an eigenvalue greater than 1 and hence there is sensitive dependence on initial conditions. Further, there is an irregular oscillation whose amplitude is described by a diffusive process that is well-modeled by the Irwin-Hall distribution. There is a large class of other piecewise-linear networks that might be analyzed using similar methods. The analysis gives insight into possible origins of chaotic dynamics in periodically forced dynamical systems.
Apparent multifractality of self-similar Lévy processes
NASA Astrophysics Data System (ADS)
Zamparo, Marco
2017-07-01
Scaling properties of time series are usually studied in terms of the scaling laws of empirical moments, which are the time average estimates of moments of the dynamic variable. Nonlinearities in the scaling function of empirical moments are generally regarded as a sign of multifractality in the data. We show that, except for the Brownian motion, this method fails to disclose the correct monofractal nature of self-similar Lévy processes. We prove that for this class of processes it produces apparent multifractality characterised by a piecewise-linear scaling function with two different regimes, which match at the stability index of the considered process. This result is motivated by previous numerical evidence. It is obtained by introducing an appropriate stochastic normalisation which is able to cure empirical moments, without hiding their dependence on time, when moments they aim at estimating do not exist.
Hardware Neural Network for a Visual Inspection System
NASA Astrophysics Data System (ADS)
Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji
The visual inspection of defects in products is heavily dependent on human experience and instinct. In this situation, it is difficult to reduce the production costs and to shorten the inspection time and hence the total process time. Consequently people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network, which is expected to provide high-speed operation for automatic inspection of products. Since neural networks can learn, this is a suitable method for self-adjustment of criteria for classification. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise linear function instead of a conventional activation function in order to save hardware resources. Consequently, our proposed hardware neural network achieved 6GCPS and 2GCUPS, which in our test sample proved to be sufficiently fast.
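A hedged sketch of the kind of piecewise linear activation that saves hardware resources (the exact breakpoints and slopes used by the authors are not given here): a "hard sigmoid" needs only a shift, a clip, and one multiply, which maps naturally onto fixed-point logic.

```python
import numpy as np

def pwl_sigmoid(x):
    """Three-segment piecewise-linear surrogate for the logistic sigmoid:
    0 below x = -2, slope 1/4 on [-2, 2], and 1 above x = 2."""
    return np.clip(0.5 + 0.25 * x, 0.0, 1.0)

x = np.linspace(-6.0, 6.0, 25)
exact = 1.0 / (1.0 + np.exp(-x))
print(np.max(np.abs(pwl_sigmoid(x) - exact)))   # worst-case approximation error
```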
Segmentation of discrete vector fields.
Li, Hongyu; Chen, Wenbin; Shen, I-Fan
2006-01-01
In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
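A minimal sketch of the two-piece exponential idea with made-up data: when the hazard is assumed constant within each experience interval, the maximum-likelihood hazard in a piece is simply the number of injuries divided by the person-time accrued in that piece.

```python
import numpy as np

def piecewise_exponential_hazards(times, events, breakpoints):
    """MLE hazards for a piecewise-constant (piecewise exponential) model.

    times: follow-up (experience) at injury or censoring; events: 1 = injury;
    breakpoints: interior cut points defining the experience intervals.
    """
    edges = np.concatenate(([0.0], breakpoints, [np.inf]))
    hazards = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        exposure = np.sum(np.clip(times, lo, hi) - lo)          # person-time in this piece
        d = np.sum((times > lo) & (times <= hi) & (events == 1))  # injuries in this piece
        hazards.append(d / exposure if exposure > 0 else np.nan)
    return np.array(hazards)

# Hypothetical data: injuries cluster in the first year after job start.
t = np.array([0.2, 0.6, 0.9, 1.4, 2.5, 3.0, 3.8, 4.1])
e = np.array([1,   1,   0,   1,   0,   0,   1,   0])
print(piecewise_exponential_hazards(t, e, breakpoints=[1.0]))
```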
NASA Astrophysics Data System (ADS)
Bremer, James
2018-05-01
We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^(-μ) and Q_ν^(-μ) of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.
Focusing of concentric piecewise vector Bessel-Gaussian beam
NASA Astrophysics Data System (ADS)
Li, Jinsong; Fang, Ying; Zhou, Shenghua; Ye, Youxiang
2010-12-01
The focusing properties of a concentric piecewise vector Bessel-Gaussian beam are investigated in this paper. The beam consists of three portions: the center circular portion and the outer annular portion are radially polarized, while the inner annular portion has a generalized polarization with a tunable polarization angle. Numerical simulations show that the evolution of the focal pattern is altered considerably by different Bessel parameters in the Bessel term of the vector Bessel-Gaussian beam. The polarization angle also affects the focal pattern remarkably. Some interesting focal patterns may appear, such as a two-peak dark hollow focus, a ring focus, a spherical shell focus, a cylindrical shell focus, and a multi-ring-peak focus, and a transverse focal switch occurs with increasing polarization angle of the inner annular portion, which may be used in optical manipulation.
A Dynamical Analysis of a Piecewise Smooth Pest Control SI Model
NASA Astrophysics Data System (ADS)
Liu, Bing; Liu, Wanbo; Tao, Fennmei; Kang, Baolin; Cong, Jiguang
In this paper, we propose a piecewise smooth SI pest control system to model the process of spraying pesticides and releasing infectious pests. We assume that the pest population consists of susceptible pests and infectious pests, and that the disease spreads horizontally between pests. We take the susceptible pest as the control index on whether to implement chemical control and biological control strategies. Based on the theory of Filippov system, the sliding-mode domain and conditions for the existence of real equilibria, virtual equilibria, pseudo-equilibrium and boundary equilibria are given. Further, we show the global stability of real equilibria (or boundary equilibria) and pseudo-equilibrium. Our results can provide theoretical guidance for the problem of pest control.
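A rough sketch of the switching structure such Filippov models have (the equations and parameter values below are illustrative, not the paper's model): pesticide spraying and releases of infectious pests are switched on only while the susceptible pest density exceeds a threshold.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the paper).
r, K, beta, d = 1.0, 10.0, 0.3, 0.4      # growth, capacity, transmission, removal
ST = 4.0                                  # susceptible-pest threshold for control
q_spray, tau = 0.4, 0.8                   # extra kill rate and release rate when control is on

def rhs(t, y):
    S, I = y
    control = S > ST                      # Filippov switching on the susceptible pests
    dS = r * S * (1 - (S + I) / K) - beta * S * I - (q_spray * S if control else 0.0)
    dI = beta * S * I - d * I + (tau if control else 0.0)
    return [dS, dI]

sol = solve_ivp(rhs, (0.0, 60.0), [8.0, 0.2], max_step=0.01)
print(sol.y[0, -1], sol.y[1, -1])         # susceptible and infectious pest levels at the end
```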
OpenMEEG: opensource software for quasistatic bioelectromagnetics.
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2010-09-06
Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run either with a constant number of mesh nodes or with a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified into three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with the isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others. As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
Dust motions in quasi-statically charged binary asteroid systems
NASA Astrophysics Data System (ADS)
Maruskin, Jared M.; Bellerose, Julie; Wong, Macken; Mitchell, Lara; Richardson, David; Mathews, Douglas; Nguyen, Tri; Ganeshalingam, Usha; Ma, Gina
2013-03-01
In this paper, we discuss dust motion and investigate possible mass transfer of charged particles in a binary asteroid system, in which the asteroids are electrically charged due to solar radiation. The surface potential of the asteroids is assumed to be a piecewise function, with positive potential on the sunlit half and negative potential on the shadow half. We derive the nonautonomous equations of motion for charged particles and an analytic representation for their lofting conditions. Particle trajectories and temporary relative equilibria are examined in relation to their moving forbidden regions, a concept we define and discuss. Finally, we use a Monte Carlo simulation for a case study on mass transfer and loss rates between the asteroids.
NASA Astrophysics Data System (ADS)
Li, Chuan-Yao; Huang, Hai-Jun; Tang, Tie-Qiao
2017-03-01
This paper investigates the traffic flow dynamics under the social optimum (SO) principle in a single-entry traffic corridor with staggered shifts from the analytical and numerical perspectives. The LWR (Lighthill-Whitham and Richards) model and the Greenshields velocity-density function are utilized to describe the dynamic properties of traffic flow. The closed-form SO solution is analytically derived, and some numerical examples are used to further verify the analytical solution. The optimum proportion of the numbers of commuters with different desired arrival times is further discussed, where the analytical and numerical results both indicate that the cumulative outflow curve under the SO principle is piecewise smooth.
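For reference, a small sketch of the Greenshields relations assumed in the LWR description (the free-flow speed and jam density below are illustrative): speed falls linearly with density, so the flow-density curve is a parabola whose capacity occurs at half the jam density.

```python
import numpy as np

v_free, rho_jam = 60.0, 120.0             # free-flow speed (km/h) and jam density (veh/km)

def greenshields_speed(rho):
    return v_free * (1.0 - rho / rho_jam)

def flow(rho):
    return rho * greenshields_speed(rho)   # q = rho * v(rho)

rho = np.linspace(0.0, rho_jam, 7)
print(np.column_stack([rho, flow(rho)]))
# Capacity is reached at rho_jam / 2 with q_max = v_free * rho_jam / 4:
print(flow(rho_jam / 2), v_free * rho_jam / 4)
```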
SU-F-T-335: Piecewise Uniform Dose Prescription and Optimization Based On PET/CT Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, G; Liu, J
Purpose: In intensity modulated radiation therapy (IMRT), the tumor target volume is given a uniform dose prescription, which does not consider heterogeneous characteristics of the tumor such as hypoxia, clonogen density, radiosensitivity, and tumor proliferation rate. Our goal is to develop a nonuniform target dose prescription method that spares organs at risk (OARs) better and does not decrease the tumor control probability (TCP). Methods: We propose a piecewise uniform dose prescription (PUDP) based on PET/CT images of the tumor. First, we delineate the biological target volume (BTV) and sub-biological target volumes (sub-BTVs) with our Hierarchical Mumford-Shah Vector Model applied to PET/CT images of the tumor. Then, in order to spare OARs better, we minimize the BTV mean dose while constraining the TCP to a constant, which yields a general formula for determining an optimal dose prescription based on a linear-quadratic (LQ) model. However, this dose prescription is highly heterogeneous and very difficult to deliver by IMRT. We therefore propose to use the equivalent uniform dose (EUD) in each sub-BTV as its final dose prescription, which makes a PUDP for the BTV. Results: We evaluated the IMRT planning of a patient with nasopharyngeal carcinoma using the PUDP and the UDP, respectively. The results show that the highest and mean doses inside the brain stem are 48.425 Gy and 19.151 Gy, respectively, when the PUDP is used for IMRT planning, while they are 52.975 Gy and 20.0776 Gy, respectively, when the UDP is used. Both of the resulting TCPs (0.9245, 0.9674) are higher than the theoretical TCP (0.8739) when 70 Gy is delivered to the BTV. Conclusion: Compared with the UDP, the PUDP can spare the OARs better, while the resulting TCP with the PUDP is not significantly lower than with the UDP. This work was supported in part by the National Natural Science Foundation of China under grant no. 61271382 and by the foundation for construction of a scientific project platform for the cancer hospital of Hunan province.
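As a hedged illustration of the equivalent-uniform-dose step, here is the commonly used power-law (generalized) EUD formula with made-up voxel doses; the paper derives its sub-BTV prescriptions from an LQ-based TCP constraint rather than from this particular form.

```python
import numpy as np

def generalized_eud(doses, volumes, a):
    """Power-law generalized EUD: (sum_i v_i * D_i**a) ** (1/a), with v_i normalized."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))

# Hypothetical heterogeneous dose inside one sub-BTV (Gy) with equal voxel volumes.
doses = [66.0, 70.0, 74.0, 78.0]
volumes = [1, 1, 1, 1]
print(generalized_eud(doses, volumes, a=-10))   # a < 0 weights toward the coldest voxels (tumor-like)
```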
Effect of long-term antibiotic use on weight in adolescents with acne.
Contopoulos-Ioannidis, Despina G; Ley, Catherine; Wang, Wei; Ma, Ting; Olson, Clifford; Shi, Xiaoli; Luft, Harold S; Hastie, Trevor; Parsonnet, Julie
2016-04-01
Antibiotics increase weight in farm animals and may cause weight gain in humans. We used electronic health records from a large primary care organization to determine the effect of antibiotics on weight and BMI in healthy adolescents with acne. We performed a retrospective cohort study of adolescents with acne prescribed ≥4 weeks of oral antibiotics with weight measurements within 18 months pre-antibiotics and 12 months post-antibiotics. We compared within-individual changes in weight-for-age Z-scores (WAZs) and BMI-for-age Z-scores (BMIZs). We used: (i) paired t-tests to analyse changes between the last pre-antibiotics versus the first post-antibiotic measurements; (ii) piecewise-constant mixed models to capture changes between mean measurements pre- versus post-antibiotics; (iii) piecewise-linear mixed models to capture changes in trajectory slopes pre- versus post-antibiotics; and (iv) χ² tests to compare proportions of adolescents with a WAZ or BMIZ increase or decrease of ≥0.2 Z-scores. Our cohort included 1012 adolescents with WAZs; 542 also had BMIZs. WAZs decreased post-antibiotics in all analyses [change between last WAZ pre-antibiotics versus first WAZ post-antibiotics = -0.041 Z-scores (P < 0.001); change between mean WAZ pre- versus post-antibiotics = -0.050 Z-scores (P < 0.001); change in WAZ trajectory slopes pre- versus post-antibiotics = -0.025 Z-scores/6 months (P = 0.002)]. More adolescents had a post-antibiotics WAZ decrease of ≥0.2 Z-scores than an increase (26% versus 18%; P < 0.001). Trends were similar, though not statistically significant, for BMIZ changes. Contrary to original expectations, long-term antibiotic use in healthy adolescents with acne was not associated with weight gain. This finding, which was consistent across all analyses, does not support a weight-promoting effect of antibiotics in adolescents. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Dimers in Piecewise Temperleyan Domains
NASA Astrophysics Data System (ADS)
Russkikh, Marianna
2018-03-01
We study the large-scale behavior of the height function in the dimer model on the square lattice. Richard Kenyon has shown that the fluctuations of the height function on Temperleyan discretizations of a planar domain converge in the scaling limit (as the mesh size tends to zero) to the Gaussian Free Field with Dirichlet boundary conditions. We extend Kenyon's result to a more general class of discretizations. Moreover, we introduce a new factorization of the coupling function of the double-dimer model into two discrete holomorphic functions, which are similar to discrete fermions defined in Smirnov (Proceedings of the international congress of mathematicians (ICM), Madrid, Spain, 2006; Ann Math (2) 172:1435-1467, 2010). For Temperleyan discretizations with appropriate boundary modifications, the results of Kenyon imply that the expectation of the double-dimer height function converges to a harmonic function in the scaling limit. We use the above factorization to extend this result to the class of all polygonal discretizations, that are not necessarily Temperleyan. Furthermore, we show that, quite surprisingly, the expectation of the double-dimer height function in the Temperleyan case is exactly discrete harmonic (for an appropriate choice of Laplacian) even before taking the scaling limit.
Discontinuous dual-primal mixed finite elements for elliptic problems
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Micheletti, Stefano; Sacco, Riccardo
2000-01-01
We propose a novel discontinuous mixed finite element formulation for the solution of second-order elliptic problems. Fully discontinuous piecewise polynomial finite element spaces are used for the trial and test functions. The discontinuous nature of the test functions at the element interfaces allows us to introduce new boundary unknowns that, on the one hand, enforce the weak continuity of the trial functions and, on the other, avoid the need to define a priori algorithmic fluxes as in standard discontinuous Galerkin methods. Static condensation is performed at the element level, leading to a solution procedure based on the sole interface unknowns. The resulting family of discontinuous dual-primal mixed finite element methods is presented in the one- and two-dimensional cases. In the one-dimensional case, we show the equivalence of the method with implicit Runge-Kutta schemes of the collocation type exhibiting optimal behavior. Numerical experiments in one and two dimensions demonstrate the order of accuracy of the new method, confirming the results of the analysis.
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2010-01-01
The Refined Zigzag Theory (RZT) for homogeneous, laminated composite, and sandwich plates is presented from a multi-scale formalism starting with the in-plane displacement field expressed as a superposition of coarse and fine contributions. The coarse kinematic field is that of first-order shear-deformation theory, whereas the fine kinematic field has a piecewise-linear zigzag distribution through the thickness. The condition of limiting homogeneity of transverse-shear properties is proposed and yields four distinct sets of zigzag functions. By examining elastostatic solutions for highly heterogeneous sandwich plates, the best-performing zigzag functions are identified. The RZT predictive capabilities to model homogeneous and highly heterogeneous sandwich plates are critically assessed, demonstrating its superior efficiency, accuracy, and wide range of applicability. The present theory, which is derived from the virtual work principle, is well-suited for developing computationally efficient C0-continuous finite elements, and is thus appropriate for the analysis and design of high-performance load-bearing aerospace structures.
Visibility graphs and symbolic dynamics
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Just, Wolfram
2018-07-01
Visibility algorithms are a family of geometric and ordering criteria by which a real-valued time series of N data is mapped into a graph of N nodes. This graph has been shown to often inherit in its topology nontrivial properties of the series structure, and can thus be seen as a combinatorial representation of a dynamical system. Here we explore in some detail the relation between visibility graphs and symbolic dynamics. To do that, we consider the degree sequence of horizontal visibility graphs generated by the one-parameter logistic map, for a range of values of the parameter for which the map shows chaotic behaviour. Numerically, we observe that in the chaotic region the block entropies of these sequences systematically converge to the Lyapunov exponent of the time series. Hence, Pesin's identity suggests that these block entropies are converging to the Kolmogorov-Sinai entropy of the physical measure, which ultimately suggests that the algorithm is implicitly and adaptively constructing phase space partitions which might have the generating property. To give analytical insight, we explore the relation k(x), x ∈ [0, 1], that, for a given datum with value x, assigns in graph space a node with degree k. In the case of the out-degree sequence, such relation is indeed a piece-wise constant function. By making use of explicit methods and tools from symbolic dynamics we are able to analytically show that the algorithm indeed performs an effective partition of the phase space and that such partition is naturally expressed as a countable union of subintervals, where the endpoints of each subinterval are related to the fixed point structure of the iterates of the map and the subinterval enumeration is associated with particular ordering structures that we called motifs.
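A compact sketch of the construction studied here (an illustrative implementation, not the authors' code): generate a chaotic logistic-map orbit, compute the horizontal-visibility out-degree of each datum, and inspect k as a function of the datum value x.

```python
import numpy as np

def logistic_orbit(mu, x0, n, burn=1000):
    x = x0
    for _ in range(burn):
        x = mu * x * (1.0 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        orbit[i] = x
    return orbit

def hvg_out_degrees(series):
    """Horizontal-visibility out-degree: x_i sees x_j (j > i) iff every value
    strictly between them is smaller than min(x_i, x_j)."""
    n = len(series)
    k = np.zeros(n, dtype=int)
    for i in range(n):
        between_max = -np.inf
        for j in range(i + 1, n):
            if between_max < series[i] and between_max < series[j]:
                k[i] += 1
            between_max = max(between_max, series[j])
            if series[j] >= series[i]:   # nothing further right can be visible
                break
    return k

x = logistic_orbit(mu=4.0, x0=0.4, n=2000)
k = hvg_out_degrees(x)
order = np.argsort(x)
# Plotted against x, the out-degree k forms a step (piecewise constant) pattern;
# here we only print a coarse sample of the sorted (x, k) pairs.
print(np.column_stack([x[order], k[order]])[::250])
```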
Second order Method for Solving 3D Elasticity Equations with Complex Interfaces
Wang, Bao; Xia, Kelin; Wei, Guo-Wei
2015-01-01
Elastic materials are ubiquitous in nature and indispensable components in man-made devices and equipment. When a device or equipment involves composite or multiple elastic materials, elasticity interface problems come into play. The solution of three-dimensional (3D) elasticity interface problems is significantly more difficult than that of elliptic counterparts due to the coupled vector components and cross derivatives in the governing elasticity equation. This work introduces the matched interface and boundary (MIB) method for solving 3D elasticity interface problems. The proposed MIB elasticity interface scheme utilizes fictitious values on irregular grid points near the material interface to replace function values in the discretization so that the elasticity equation can be discretized using standard finite difference schemes as if there were no material interface. The interface jump conditions are rigorously enforced on the intersecting points between the interface and the mesh lines. Such an enforcement determines the fictitious values. A number of new techniques have been developed to construct efficient MIB elasticity interface schemes for dealing with the cross derivatives in the coupled governing equations. The proposed method is extensively validated over both weak and strong discontinuity of the solution, both piecewise constant and position-dependent material parameters, both smooth and nonsmooth interface geometries, and both small and large contrasts in Poisson's ratio and shear modulus across the interface. Numerical experiments indicate that the present MIB method is of second-order convergence in both L∞ and L2 error norms for handling arbitrarily complex interfaces, including biomolecular surfaces. To our best knowledge, this is the first elasticity interface method that is able to deliver second-order convergence for the molecular surfaces of proteins. PMID:25914422
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a fourth-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Quantum temporal probabilities in tunneling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastopoulos, Charis, E-mail: anastop@physics.upatras.gr; Savvidou, Ntina, E-mail: ksavvidou@physics.upatras.gr
We study the temporal aspects of quantum tunneling as manifested in time-of-arrival experiments in which the detected particle tunnels through a potential barrier. In particular, we present a general method for constructing temporal probabilities in tunneling systems that (i) defines 'classical' time observables for quantum systems and (ii) applies to relativistic particles interacting through quantum fields. We show that the relevant probabilities are defined in terms of specific correlation functions of the quantum field associated with tunneling particles. We construct a probability distribution with respect to the time of particle detection that contains all information about the temporal aspects of the tunneling process. In specific cases, this probability distribution leads to the definition of a delay time that, for parity-symmetric potentials, reduces to the phase time of Bohm and Wigner. We apply our results to piecewise constant potentials, by deriving the appropriate junction conditions on the points of discontinuity. For the double square potential, in particular, we demonstrate the existence of (at least) two physically relevant time parameters, the delay time and a decay rate that describes the escape of particles trapped in the inter-barrier region. Finally, we propose a resolution to the paradox of apparent superluminal velocities for tunneling particles. We demonstrate that the idea of faster-than-light speeds in tunneling follows from an inadmissible use of classical reasoning in the description of quantum systems. Highlights: • We present a general methodology for deriving temporal probabilities in tunneling systems. • The treatment applies to relativistic particles interacting through quantum fields. • We derive a new expression for tunneling time. • We identify new time parameters relevant to tunneling. • We propose a resolution of the superluminality paradox in tunneling.
INTEGRAL/SPI data segmentation to retrieve source intensity variations
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P. R.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-07-01
Context. The INTEGRAL/SPI X/γ-ray spectrometer (20 keV-8 MeV) is an instrument for which recovering source intensity variations is not straightforward and can constitute a difficulty for data analysis. In most cases, determining the source intensity changes between exposures is largely based on a priori information. Aims: We propose techniques that help to overcome the difficulty related to source intensity variations and make this step more rational. In addition, the constructed "synthetic" light curves should permit us to obtain a sky model that describes the data better and optimizes the source signal-to-noise ratios. Methods: For this purpose, the time intensity variation of each source was modeled as a combination of piecewise segments of time during which a given source exhibits a constant intensity. To optimize the signal-to-noise ratios, the number of segments was minimized. We present a first method that takes advantage of previous time series that can be obtained from another instrument on-board the INTEGRAL observatory. A data segmentation algorithm was then used to synthesize the time series into segments. The second method no longer needs external light curves, but solely SPI raw data. For this, we developed a specific algorithm that involves the SPI transfer function. Results: The time segmentation algorithms that were developed solve a difficulty inherent to the SPI instrument, which is the intensity variations of sources between exposures, and allow us to obtain more information about the sources' behavior. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, and Switzerland), Czech Republic and Poland with participation of Russia and the USA.
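A sketch of the piecewise-constant segmentation step on a synthetic light curve, using the third-party ruptures change-point library as a stand-in (PELT with an l2 cost; the penalty plays the role of keeping the number of segments small). The paper's own algorithms instead work on an auxiliary light curve or directly on SPI raw data through the transfer function.

```python
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(1)

# Synthetic source intensity: constant within each of three "states", plus noise.
true_levels = np.repeat([5.0, 8.0, 3.0], [40, 30, 50])
counts = true_levels + rng.normal(0.0, 0.7, size=true_levels.size)

algo = rpt.Pelt(model="l2", min_size=5).fit(counts)
change_points = algo.predict(pen=10.0)     # larger penalty -> fewer segments
print(change_points)                       # expected near [40, 70, 120]

# Piecewise-constant "synthetic" light curve: the mean within each segment.
segments = np.split(counts, change_points[:-1])
print([round(float(s.mean()), 2) for s in segments])
```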
Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.
Raposo, Carolina; Antunes, Michel; P Barreto, Joao
2017-08-09
The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.
Lindsey, J C; Ryan, L M
1994-01-01
The three-state illness-death model provides a useful way to characterize data from a rodent tumorigenicity experiment. Most parametrizations proposed recently in the literature assume discrete time for the death process and either discrete or continuous time for the tumor onset process. We compare these approaches with a third alternative that uses a piecewise continuous model on the hazards for tumor onset and death. All three models assume proportional hazards to characterize tumor lethality and the effect of dose on tumor onset and death rate. All of the models can easily be fitted using an Expectation Maximization (EM) algorithm. The piecewise continuous model is particularly appealing in this context because the complete data likelihood corresponds to a standard piecewise exponential model with tumor presence as a time-varying covariate. It can be shown analytically that differences between the parameter estimates given by each model are explained by varying assumptions about when tumor onsets, deaths, and sacrifices occur within intervals. The mixed-time model is seen to be an extension of the grouped data proportional hazards model [Mutat. Res. 24:267-278 (1981)]. We argue that the continuous-time model is preferable to the discrete- and mixed-time models because it gives reasonable estimates with relatively few intervals while still making full use of the available information. Data from the ED01 experiment illustrate the results. PMID:8187731