Piecewise polynomial representations of genomic tracks.
Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz
2012-01-01
Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
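The piecewise constant segmentation step mentioned in this abstract can be illustrated with a classic optimal-partitioning dynamic program (a generic sketch of the technique, not the authors' locsmoc software; the quadratic cost and per-changepoint penalty are illustrative choices):

```python
import numpy as np

def segment_piecewise_constant(y, penalty):
    """Optimal partitioning: minimize the sum of per-segment squared
    errors plus `penalty` per changepoint, via O(n^2) dynamic
    programming with prefix sums for O(1) segment costs."""
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(np.asarray(y, float))])
    s2 = np.concatenate([[0.0], np.cumsum(np.asarray(y, float) ** 2)])

    def sse(i, j):  # cost of fitting one constant to y[i:j]
        tot, tot2 = s1[j] - s1[i], s2[j] - s2[i]
        return tot2 - tot * tot / (j - i)

    best = np.full(n + 1, np.inf)
    best[0] = -penalty            # first segment incurs no penalty
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + penalty + sse(i, j)
            if c < best[j]:
                best[j], prev[j] = c, i
    bounds, j = [], n             # backtrack right-open segment ends
    while j > 0:
        bounds.append(j)
        j = prev[j]
    return sorted(bounds)
```

On a noiseless two-level signal this recovers the single changepoint exactly; real copy-number data would call for a penalty tuned to the noise level.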
2008-06-01
Geometry Interpolation The function space, VpH, consists of discontinuous piecewise polynomials. This work used a polynomial basis for VpH such ... between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and multi-dimensional setting. Before continuing with the ... inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number of M∞ = 0.95. The ...

Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from the compressed LDR as a prediction of the original HDR, a step referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd order polynomial has better mapping accuracy than a single high-order piece or a 2-piecewise linear function, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as a sum of basic terms, which can be pre-calculated into look-up tables. Since solving the matrix reduces to looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to the traditional exhaustive search in 2-piecewise 2nd order polynomial inverse tone mapping with a continuity constraint.
Weak-noise limit of a piecewise-smooth stochastic differential equation.
Chen, Yaming; Baule, Adrian; Touchette, Hugo; Just, Wolfram
2013-11-01
We investigate the validity and accuracy of weak-noise (saddle-point or instanton) approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example a piecewise-constant SDE, which serves as a simple model of Brownian motion with solid friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided the singularity of the path integral associated with the nonsmooth SDE is treated with some heuristics. We also show that, as in the case of smooth SDEs, the deterministic paths of the noiseless system correctly describe the behavior of the nonsmooth SDE in the low-noise limit. Finally, we consider a smooth regularization of the piecewise-constant SDE and study to what extent this regularization can rectify some of the problems encountered when dealing with discontinuous drifts and singularities in SDEs.
Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
Wu, Ailong; Liu, Ling; Huang, Tingwen; Zeng, Zhigang
2017-01-01
Neurodynamic systems are an emerging research field. Understanding the essential motivational representations of neural activity is an important question in cognitive system research. This paper investigates Mittag-Leffler stability of a class of fractional-order neural networks in the presence of generalized piecewise constant arguments. To identify neural types of computational principles in mathematical and computational analysis, the existence and uniqueness of the solution of the neurodynamic system is the first prerequisite. We prove that existence and uniqueness of the solution of the network hold when certain conditions are satisfied. In addition, a self-active neurodynamic system demands stable internal dynamical states (equilibria). The main emphasis is then on several sufficient conditions that guarantee a unique equilibrium point. Furthermore, to provide deeper explanations of the neurodynamic process, Mittag-Leffler stability is studied in detail. The established results are based on the theories of fractional differential equations and differential equations with generalized piecewise constant arguments. The derived criteria improve and extend existing related results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Beretta, Elena; Micheletti, Stefano; Perotto, Simona; Santacesaria, Matteo
2018-01-01
In this paper, we develop a shape optimization-based algorithm for the electrical impedance tomography (EIT) problem of determining a piecewise constant conductivity on a polygonal partition from boundary measurements. The key tool is to use a distributed shape derivative of a suitable cost functional with respect to movements of the partition. Numerical simulations showing the robustness and accuracy of the method are presented for simulated test cases in two dimensions.
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
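The constant-time, constant-memory goal described in this abstract can be illustrated with a swing-filter style streaming approximator (a generic sketch sharing the resource goal, not the authors' algorithm; only a segment anchor and two slope bounds are kept per sample):

```python
class SwingPLA:
    """Streaming piecewise-linear approximation with O(1) work and
    O(1) state per sample, guaranteeing a maximum error `eps` per
    segment at every absorbed sample."""

    def __init__(self, eps):
        self.eps = eps
        self.segments = []        # (t0, y0, slope, t_end)
        self.anchor = None        # (t0, y0): exact start of segment
        self.lo = self.hi = None  # feasible slope interval
        self.last = None          # most recent absorbed sample

    def push(self, t, y):
        if self.anchor is None:
            self._restart(t, y)
            return
        t0, y0 = self.anchor
        lo = (y - self.eps - y0) / (t - t0)
        hi = (y + self.eps - y0) / (t - t0)
        if max(lo, self.lo) <= min(hi, self.hi):   # still representable
            self.lo, self.hi = max(lo, self.lo), min(hi, self.hi)
            self.last = (t, y)
        else:                                      # close, restart here
            self._emit()
            self._restart(t, y)

    def _restart(self, t, y):
        self.anchor, self.last = (t, y), (t, y)
        self.lo, self.hi = float('-inf'), float('inf')

    def _emit(self):
        t0, y0 = self.anchor
        slope = (self.lo + self.hi) / 2 if self.lo > float('-inf') else 0.0
        self.segments.append((t0, y0, slope, self.last[0]))

    def finish(self):
        if self.anchor is not None:
            self._emit()
        return self.segments
```

Any slope inside the final feasible interval keeps every absorbed sample within eps of the emitted line, which is the per-segment error guarantee the abstract refers to.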
Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients
Kolev, Tzanio V.; Xu, Jinchao; Zhu, Yunrong
2015-08-23
In this study, we extend some of the multilevel convergence results obtained by Xu and Zhu to the case of second order linear reaction-diffusion equations. Specifically, we consider multilevel preconditioners for solving the linear systems arising from the linear finite element approximation of the problem, where both diffusion and reaction coefficients are piecewise-constant functions. We discuss in detail the influence of the discontinuous reaction and diffusion coefficients on the performance of the classical BPX and multigrid V-cycle preconditioners.
Mixed Legendre moments and discrete scattering cross sections for anisotropy representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calloo, A.; Vidal, J. F.; Le Tellier, R.
2012-07-01
This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections which are expanded on Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is developed using piecewise-constant functions, which better model the multigroup transfer cross section and prevent the occurrence of any negative value for it. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method against which to compare the conventional Legendre expansion, and to determine its pertinence when applied to reactor physics calculations. (authors)
NASA Astrophysics Data System (ADS)
Bauer, Werner; Behrens, Jörn
2017-04-01
We present a locally conservative, low-order finite element (FE) discretization of the covariant 1D linear shallow-water equations written in split form (cf. [1]). The introduction of additional differential forms (DF) that build pairs with the original ones permits a splitting of these equations into topological momentum and continuity equations and metric-dependent closure equations that apply the Hodge-star. Our novel discretization framework conserves this geometrical structure, in particular it provides proper FE spaces for all DFs such that the differential operators (here gradient and divergence) hold in strong form. The discrete topological equations simply follow by trivial projections onto piecewise constant FE spaces without the need to integrate by parts. The discrete Hodge-star operators, representing the discretized metric equations, are realized by nontrivial Galerkin projections (GP). Here they follow by projections onto either a piecewise constant (GP0) or a piecewise linear (GP1) space. Our framework thus provides essentially three different schemes with significantly different behavior. The split scheme using GP1 twice is unstable and shares the same discrete dispersion relation and similar second-order convergence rates as the conventional P1-P1 FE scheme that approximates both velocity and height variables by piecewise linear spaces. The split scheme that applies both GP1 and GP0 is stable and shares the dispersion relation of the conventional P1-P0 FE scheme that approximates the velocity by a piecewise linear and the height by a piecewise constant space, with corresponding second- and first-order convergence rates. Because it exhibits second-order convergence rates for both velocity and height fields, the split GP1-GP0 scheme may nevertheless be regarded as a stable version of the conventional P1-P1 FE scheme. For the split scheme applying GP0 twice, we are not aware of a corresponding conventional formulation to compare with.
Though exhibiting larger absolute error values, it shows similar convergence rates to the other split schemes, but does not provide a satisfactory approximation of the dispersion relation, as short waves are propagated much too fast. Despite this, the discovery of this new scheme illustrates the potential of our discretization framework as a toolbox for finding and studying new FE schemes based on new combinations of FE spaces. [1] Bauer, W. [2016], A new hierarchically-structured n-dimensional covariant form of rotating equations of geophysical fluid dynamics, GEM - International Journal on Geomathematics, 7(1), 31-101.
NASA Astrophysics Data System (ADS)
Zhang, Zhengfang; Chen, Weifeng
2018-05-01
Maximization of the smallest eigenfrequency of the linearized elasticity system with area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to present two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to PCLS function takes nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of original material region till the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is 1) piecewise constant 2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model-transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with comparison with the corresponding constant and piecewise constant delay. Numerical simulations are performed to illustrate the theoretical results of this paper.
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
Interface with weakly singular points always scatters
NASA Astrophysics Data System (ADS)
Li, Long; Hu, Guanghui; Yang, Jiansheng
2018-07-01
Assume that a bounded scatterer is embedded into an infinite homogeneous isotropic background medium in two dimensions. The refractive index function is supposed to be piecewise constant. If the scattering interface contains a weakly singular point, we prove that the scattered field cannot vanish identically. This implies the absence of non-scattering energies for piecewise analytic interfaces with one singular point. Local uniqueness is obtained for shape identification problems in inverse medium scattering with a single far-field pattern.
Supplemental Analysis on Compressed Sensing Based Interior Tomography
Yu, Hengyong; Yang, Jiansheng; Jiang, Ming; Wang, Ge
2010-01-01
Recently, in the compressed sensing framework we proved that an interior ROI can be exactly reconstructed via total variation minimization if the ROI is piecewise constant. In the proofs, we implicitly utilized the property that if an artifact image assumes a constant value within the ROI, then this constant must be zero. Here we prove this property in the space of square integrable functions. PMID:19717891
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
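The first-order filter function mentioned in this abstract can be computed in closed form for piecewise-constant control, segment by segment (an illustrative sketch for free evolution with instantaneous pi pulses; normalization and sign conventions vary in the literature, and the paper's higher-order construction is not reproduced here):

```python
import cmath

def dephasing_filter(omega, pulse_times, T):
    """First-order dephasing filter function for a piecewise-constant
    toggling frame: y(t) flips between +1 and -1 at each pulse, and
    F(w) = | integral_0^T y(t) exp(i w t) dt |^2 (unnormalized form).
    Each segment's integral is evaluated analytically."""
    edges = [0.0] + sorted(pulse_times) + [T]
    total, sign = 0j, 1.0
    for a, b in zip(edges[:-1], edges[1:]):
        if omega == 0:
            total += sign * (b - a)
        else:
            total += sign * (cmath.exp(1j * omega * b)
                             - cmath.exp(1j * omega * a)) / (1j * omega)
        sign = -sign
    return abs(total) ** 2
```

For free evolution this reduces to the familiar sinc-squared lobe, and a single echo pulse at T/2 strongly suppresses the low-frequency response, which is the qualitative behavior dynamically corrected gates exploit.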
NASA Astrophysics Data System (ADS)
Orozco Cortés, Luis Fernando; Fernández García, Nicolás
2014-05-01
A method to obtain the general solution of any piecewise constant potential is presented; this is achieved by analyzing the transfer matrices at each cutoff point. The resonance phenomenon, together with the supersymmetric quantum mechanics technique, allows us to construct a wide family of complex potentials which can be used as theoretical models for optical systems. The method is applied to the particular case in which the potential function has six cutoff points.
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors passing through checkpoints at venues hosting mass events. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request arrival rate on time is described by a function chosen so that its properties resemble the real rates at which visitors arrive at the stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated the dependence of the queue length and of the visitors' waiting time for service (time in queue) on time for different laws. The time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match were calculated as well. We found how the macroscopic quantitative characteristics of the NQS depend on the number of averaging sections of the input rate.
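A minimal version of such a simulation can be sketched as a single-server M(t)/M/1 queue with a piecewise-constant arrival rate (an illustrative sketch; the paper's stadium model with multiple checkpoints and its specific rate law are not reproduced). Because the rate is constant within each segment, arrivals can be generated exactly with exponential gaps per segment:

```python
import random

def simulate_checkpoint_queue(rates, seg_len, service_mean, seed=0):
    """FIFO single-server queue fed by a piecewise-constant arrival
    rate: within segment i of length seg_len, arrivals form a
    homogeneous Poisson process with rate rates[i]. Returns arrival
    times and per-visitor waiting times (time in queue)."""
    rng = random.Random(seed)
    arrivals = []
    for i, lam in enumerate(rates):
        if lam <= 0:
            continue
        t = i * seg_len + rng.expovariate(lam)   # exponential gaps
        while t < (i + 1) * seg_len:
            arrivals.append(t)
            t += rng.expovariate(lam)
    free_at, waits = 0.0, []
    for t in arrivals:                           # serve in arrival order
        start = max(t, free_at)
        waits.append(start - t)
        free_at = start + rng.expovariate(1.0 / service_mean)
    return arrivals, waits
```

Sweeping the number of rate segments (the averaging sections) while keeping the same underlying rate curve shows how coarser piecewise-constant approximations distort the predicted queue length and waiting times.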
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
NASA Astrophysics Data System (ADS)
Nakae, T.; Ryu, T.; Matsuzaki, K.; Rosbi, S.; Sueoka, A.; Takikawa, Y.; Ooi, Y.
2016-09-01
In the torque converter, the damper of the lock-up clutch is used to effectively absorb the torsional vibration. The damper is designed using a piecewise-linear spring with three stiffness stages. However, a nonlinear vibration, referred to as a subharmonic vibration of order 1/2, occurred around the switching point in the piecewise-linear restoring torque characteristics because of the nonlinearity. In the present study, we analyze vibration reduction for subharmonic vibration. The model used herein includes the torque converter, the gear train, and the differential gear. The damper is modeled by a nonlinear rotational spring of the piecewise-linear spring. We focus on the optimum design of the spring characteristics of the damper in order to suppress the subharmonic vibration. A piecewise-linear spring with five stiffness stages is proposed, and the effect of the distance between switching points on the subharmonic vibration is investigated. The results of our analysis indicate that the subharmonic vibration can be suppressed by designing a damper with five stiffness stages to have a small spring constant ratio between the neighboring springs. The distances between switching points must be designed to be large enough that the amplitude of the main frequency component of the systems does not reach the neighboring switching point.
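The multi-stage piecewise-linear restoring torque discussed in this abstract can be constructed so that it stays continuous across switching points (an illustrative sketch; the stiffness values and switching angles below are hypothetical, not the paper's damper parameters):

```python
import numpy as np

def make_pwl_spring(stiffness, switch_points):
    """Continuous piecewise-linear restoring torque with
    len(stiffness) stages: stage k applies between switching angles
    switch_points[k-1] and switch_points[k], symmetric about 0.
    Continuity is enforced by accumulating torque at each switch."""
    assert len(stiffness) == len(switch_points) + 1
    knots_t, theta_prev, torque_prev = [0.0], 0.0, 0.0
    for k, theta in zip(stiffness, switch_points):
        torque_prev += k * (theta - theta_prev)  # torque at switch point
        knots_t.append(torque_prev)
        theta_prev = theta

    def torque(theta):
        s = np.sign(theta)
        a = abs(theta)
        i = np.searchsorted(switch_points, a)    # locate active stage
        base_theta = 0.0 if i == 0 else switch_points[i - 1]
        return s * (knots_t[i] + stiffness[i] * (a - base_theta))

    return torque
```

Choosing neighboring stiffnesses with small ratios, as the abstract recommends, keeps the slope change at each switching point gentle while the torque itself remains continuous.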
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
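The continuous piecewise-linear reduction of the open circuit potential curve can be sketched as a least-squares fit over a hat-function basis with fixed knots (an illustrative sketch; the paper additionally optimizes the knot placement, which is not reproduced here):

```python
import numpy as np

def fit_pwl(x, y, knots):
    """Least-squares fit of a continuous piecewise-linear function
    with fixed knots. Each column of the design matrix is a hat
    (linear interpolation) basis function centered at one knot."""
    knots = np.asarray(knots, float)
    A = np.zeros((len(x), len(knots)))
    for j in range(len(knots)):
        e = np.zeros(len(knots)); e[j] = 1.0
        A[:, j] = np.interp(x, knots, e)   # hat function at knots[j]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda q: np.interp(q, knots, coef)
```

Evaluating the fitted curve is a single `np.interp` call per query, which is the kind of cheap lookup that replaces the nonlinear OCV evaluation in a real-time model.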
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images, even those for which TV was designed, particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. In conclusion, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
NASA Astrophysics Data System (ADS)
Adrian, S. B.; Andriulli, F. P.; Eibert, T. F.
2017-02-01
A new hierarchical basis preconditioner for the electric field integral equation (EFIE) operator is introduced. In contrast to existing hierarchical basis preconditioners, it works on arbitrary meshes and preconditions both the vector and the scalar potential within the EFIE operator. This is obtained by taking into account that the vector and the scalar potential discretized with loop-star basis functions are related to the hypersingular and the single layer operator (i.e., the well known integral operators from acoustics). For the single layer operator discretized with piecewise constant functions, a hierarchical preconditioner can easily be constructed. Thus the strategy we propose in this work for preconditioning the EFIE is the transformation of the scalar and the vector potential into operators equivalent to the single layer operator and to its inverse. More specifically, when the scalar potential is discretized with star functions as source and testing functions, the resulting matrix is a single layer operator discretized with piecewise constant functions and multiplied left and right with two additional graph Laplacian matrices. By inverting these graph Laplacian matrices, the discretized single layer operator is obtained, which can be preconditioned with the hierarchical basis. Dually, when the vector potential is discretized with loop functions, the resulting matrix can be interpreted as a hypersingular operator discretized with piecewise linear functions. By leveraging on a scalar Calderón identity, we can interpret this operator as spectrally equivalent to the inverse single layer operator. Then we use a linear-in-complexity, closed-form inverse of the dual hierarchical basis to precondition the hypersingular operator. The numerical results show the effectiveness of the proposed preconditioner and the practical impact of theoretical developments in real case scenarios.
NASA Astrophysics Data System (ADS)
Liang, Feng; Wang, Dechang
In this paper, we suppose that a planar piecewise Hamiltonian system, with a straight line of separation, has a piecewise generalized homoclinic loop passing through a Saddle-Fold point, and assume that there exists a family of piecewise smooth periodic orbits near the loop. By studying the asymptotic expansion of the first order Melnikov function corresponding to the period annulus, we obtain the formulas of the first six coefficients in the expansion, based on which, we provide a lower bound for the maximal number of limit cycles bifurcated from the period annulus. As applications, two concrete systems are considered. Especially, the first one reveals that a quadratic piecewise Hamiltonian system can have five limit cycles near a generalized homoclinic loop under a quadratic piecewise smooth perturbation. Compared with the smooth case [Horozov & Iliev, 1994; Han et al., 1999], three more limit cycles are found.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2015-01-01
Variable-Domain Displacement Transfer Functions were formulated for shape predictions of complex wing structures, for which surface strain-sensing stations must be properly distributed to avoid jointed junctures, and must be increased in the high strain gradient region. Each embedded beam (depth-wise cross section of structure along a surface strain-sensing line) was discretized into small variable domains. Thus, the surface strain distribution can be described with a piecewise linear or a piecewise nonlinear function. Through discretization, the embedded beam curvature equation can be integrated piecewise to obtain the Variable-Domain Displacement Transfer Functions (for each embedded beam), which are expressed in terms of geometrical parameters of the embedded beam and the surface strains along the strain-sensing line. By inputting the surface strain data into the Displacement Transfer Functions, slopes and deflections along each embedded beam can be calculated for mapping out overall structural deformed shapes. A long tapered cantilever tubular beam was chosen for shape prediction analysis. The input surface strains were analytically generated from finite-element analysis. The shape prediction accuracies of the Variable-Domain Displacement Transfer Functions were then determined in light of the finite-element-generated slopes and deflections, and were found to be comparable to the accuracies of the constant-domain Displacement Transfer Functions.
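The twice-integrated curvature scheme described above can be sketched numerically. The beam relation used here (curvature equals surface strain divided by section half-depth) and all dimensions are illustrative assumptions, with trapezoidal (piecewise-linear) integration between strain-sensing stations:

```python
import numpy as np

# Sketch (assumed beam relations, hypothetical dimensions): with surface
# strain eps and section half-depth c, curvature is eps/c; integrating twice
# along the sensing line gives slope and deflection, using trapezoidal
# (piecewise-linear) integration between strain-sensing stations.
def deflection_from_strain(x, eps, c):
    kappa = np.asarray(eps) / c
    slope = np.concatenate(
        ([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
    defl = np.concatenate(
        ([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
    return slope, defl

x = np.linspace(0.0, 2.0, 201)          # sensing stations along the beam
c = 0.05                                 # half-depth
eps = 1e-3 * np.ones_like(x)             # uniform strain -> constant curvature
slope, defl = deflection_from_strain(x, eps, c)
```

For constant curvature kappa = eps/c = 0.02, the tip values should match the closed forms kappa*x and kappa*x^2/2 exactly, since trapezoidal integration is exact for constant and linear integrands.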
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 - x²)^(μ - 1/2) for any constant μ ≥ 0, of an L¹ function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
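A minimal numerical illustration of the phenomenon being overcome: the Fourier partial sums of sign(x) overshoot near the jump by a fixed amount no matter how many terms are kept, which is what the Gegenbauer reprojection removes. The test function and term counts are arbitrary choices, not from the paper:

```python
import numpy as np

# Sketch: Fourier partial sums of sign(x) on [-pi, pi] exhibit the Gibbs
# phenomenon; the peak overshoot stays near the Wilbraham-Gibbs value
# (~1.179) no matter how many terms are retained.
def fourier_partial_sum(x, n_terms):
    # sign(x) has the Fourier series (4/pi) * sum over odd k of sin(k x)/k
    s = np.zeros_like(x, dtype=float)
    for k in range(1, n_terms + 1, 2):
        s += (4.0 / np.pi) * np.sin(k * x) / k
    return s

x = np.linspace(-np.pi, np.pi, 4001)
overshoots = [fourier_partial_sum(x, n).max() for n in (16, 64, 256)]
```

The overshoot does not decay with the number of terms; only its location moves closer to the jump.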
First and second order derivatives for optimizing parallel RF excitation waveforms.
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as a first-order Taylor expansion from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations. Copyright © 2015 Elsevier Inc. All rights reserved.
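The "concatenation of rotations" picture can be sketched directly. Sign and axis conventions for Bloch precession are not fixed by the abstract, so the generic axis-angle (Rodrigues) form below is an assumption; the paper itself works with quaternions:

```python
import numpy as np

# Minimal sketch (assumed conventions): with piecewise-constant fields, the
# relaxation-free Bloch equations rotate the magnetization about the local
# effective field, so a pulse is a product of axis-angle rotations.
def rotate(v, axis, angle):
    # Rodrigues' rotation formula
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    v = np.asarray(v, float)
    return (v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1.0 - np.cos(angle)))

def apply_pulse(m0, segments):
    # segments: list of (axis, angle) pairs, one per constant-field interval
    m = np.asarray(m0, float)
    for axis, angle in segments:
        m = rotate(m, axis, angle)
    return m

# Two 45-degree rotations about x act like one 90-degree rotation.
m = apply_pulse([0.0, 0.0, 1.0], [([1, 0, 0], np.pi / 4)] * 2)
```

Because each segment is a rotation, the magnitude of the magnetization is preserved exactly, which is what makes the finite-dimensional formulation well posed.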
First and second order derivatives for optimizing parallel RF excitation waveforms
NASA Astrophysics Data System (ADS)
Majewski, Kurt; Ritter, Dieter
2015-09-01
For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as a first-order Taylor expansion from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations.
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-28
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
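The core trick, inverting a piecewise-linear log-survival function to draw a dwell time, can be sketched as follows. The grid and the hazard values are hypothetical stand-ins (not the Hodgkin-Huxley rates), with np.interp performing the piecewise-linear inversion of H(t):

```python
import numpy as np

# Sketch (hypothetical hazard): the integrated hazard H(t) is tabulated on a
# grid and treated as piecewise linear in between, as in the approximation
# described above; here the rate is 2 on [0, 1] and 0.5 on (1, 10].
rng = np.random.default_rng(0)

t_grid = np.linspace(0.0, 10.0, 1001)
H_grid = np.where(t_grid <= 1.0, 2.0 * t_grid, 2.0 + 0.5 * (t_grid - 1.0))

def sample_dwell_time(t_grid, H_grid, rng):
    xi = rng.exponential()                 # random threshold, -log(U)
    if xi >= H_grid[-1]:
        return np.inf                      # no jump before the end of the grid
    return np.interp(xi, H_grid, t_grid)   # invert the piecewise-linear H

times = np.array([sample_dwell_time(t_grid, H_grid, rng) for _ in range(40000)])
```

With these rates the probability of a jump before t = 1 is 1 - exp(-2), which the empirical fraction should reproduce.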
Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir
2017-01-01
In order to bridge microscopic molecular motion with macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization reduces to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566
Controllability of semi-infinite rod heating by a point source
NASA Astrophysics Data System (ADS)
Khurshudyan, A.
2018-04-01
The possibility of controlling the heating of a semi-infinite thin rod by a point source concentrated at an inner point of the rod is studied. Quadratic and piecewise constant solutions of the problem are derived, and the possibilities of solving the corresponding optimal control problems are indicated. Determination of the parameters of the piecewise constant solution is reduced to a problem of nonlinear programming. Numerical examples are considered.
Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2004-01-01
We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds but at increased computational cost.
Limit cycles in planar piecewise linear differential systems with nonregular separation line
NASA Astrophysics Data System (ADS)
Cardin, Pedro Toniol; Torregrosa, Joan
2016-12-01
In this paper we deal with planar piecewise linear differential systems defined in two zones. We consider the case when the two linear zones are angular sectors of angles α and 2π − α, respectively, for α ∈ (0, π). We study the problem of determining lower bounds for the number of isolated periodic orbits in such systems using Melnikov functions. These limit cycles appear when studying higher order piecewise linear perturbations of a linear center. It is proved that the maximum number of limit cycles that can appear up to a sixth order perturbation is five. Moreover, for these values of α, we prove the existence of systems with four limit cycles up to fifth order and, for α = π/2, we provide an explicit example with five up to sixth order. In general, the nonregular separation line increases the number of periodic orbits in comparison with the case where the two zones are separated by a straight line.
Unified halo-independent formalism from convex hulls for direct dark matter searches
NASA Astrophysics Data System (ADS)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2017-12-01
Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is (N − 1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃⁰_BF(v_min) (which is an integral of the speed distribution) with at most (N − 1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function.
Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃_BF(v_min, t) function (and a time-averaged η̃⁰_BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina
2017-12-01
We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ ℝⁿ when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
On the Convergence Analysis of the Optimized Gradient Method
Kim, Donghwan; Fessler, Jeffrey A.
2016-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
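The assembly idea can be sketched in one dimension: hypothetical first-order local models of sin(x), blended with normalized Gaussian radial-basis weights. The width parameter and sampling states are arbitrary choices, not values from the paper:

```python
import numpy as np

# Sketch (1-D toy system): blend first-order Taylor models, expanded about
# sampled states, using normalized Gaussian radial-basis weights, in the
# spirit of the piecewise-linear assembly described above.
def local_models(f, df, centers):
    # (center, value, slope) for each sampling state
    return [(c, f(c), df(c)) for c in centers]

def blended_eval(x, models, width=0.3):
    w = np.array([np.exp(-((x - c) ** 2) / (2.0 * width ** 2))
                  for c, _, _ in models])
    w /= w.sum()                                   # normalized RBF weights
    vals = np.array([fc + dfc * (x - c) for c, fc, dfc in models])
    return float(w @ vals)

models = local_models(np.sin, np.cos, np.linspace(0.0, np.pi, 7))
xq = np.linspace(0.0, np.pi, 50)
err = max(abs(blended_eval(x, models) - np.sin(x)) for x in xq)
```

With seven sampling states the blended surrogate tracks sin(x) to within a few percent, while each individual tangent-line model is only locally valid.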
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis
Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel
2013-01-01
This article proposes a virtual sensor for piecewise linear systems based on observability analysis that is in function of a commutation law related with the system's outpu. This virtual sensor is also known as a state estimator. Besides, it presents a detector of active mode when the commutation sequences of each linear subsystem are arbitrary and unknown. For the previous, this article proposes a set of virtual estimators that discern the commutation paths of the system and allow estimating their output. In this work a methodology in order to test the observability for piecewise linear systems with discrete time is proposed. An academic example is presented to show the obtained results. PMID:23447007
Filter-based multiscale entropy analysis of complex physiological time series.
Xu, Yuesheng; Zhao, Liang
2013-08-01
Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
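The reinterpretation of MSE coarse-graining as a piecewise-constant (boxcar) filter plus downsampling can be sketched as follows; the white-noise input is a hypothetical test signal whose variance should fall as 1/scale:

```python
import numpy as np

# Sketch: the MSE coarse-graining step is a boxcar (piecewise-constant)
# filter followed by downsampling; swapping in other filters is the
# generalization (FME) described above.
def coarse_grain(x, scale):
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(1)
x = rng.standard_normal(60000)
v = [coarse_grain(x, s).var() for s in (1, 2, 4, 8)]
```

For white noise the averaged series has variance 1/scale, which is why MSE normally rescales the tolerance parameter r at each scale before computing entropy.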
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
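A one-variable toy problem (our own illustration, not from the article) shows the two properties described: the minimizer of the exact-penalty objective follows a piecewise-linear path in the penalty constant and reaches the constrained solution at a finite value:

```python
import numpy as np

# Sketch (toy QP): minimize (x-2)^2/2 subject to x <= 1 via the exact
# penalty F(x) = (x-2)^2/2 + rho*max(0, x-1). The minimizer follows the
# piecewise-linear path x(rho) = max(1, 2 - rho) and hits the constrained
# solution at the finite value rho = 1.
def penalty_min(rho):
    grid = np.linspace(-1.0, 3.0, 40001)
    F = 0.5 * (grid - 2.0) ** 2 + rho * np.maximum(0.0, grid - 1.0)
    return grid[np.argmin(F)]

path = {rho: penalty_min(rho) for rho in (0.0, 0.5, 1.0, 2.0)}
```

Past rho = 1 the path "sticks" at the kink x = 1, exactly the slide-along-a-constraint behavior the article describes for quadratic programs.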
Path Following in the Exact Penalty Method of Convex Programming
Zhou, Hua; Lange, Kenneth
2015-01-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044
Bardhan, Jaydeep P; Jungwirth, Pavel; Makowski, Lee
2012-09-28
Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular "linear response" model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution).
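The proposed response form (a constant offset plus charge-sign-dependent slopes) can be written down and fitted directly; all numbers below are synthetic placeholders, not values from the paper:

```python
import numpy as np

# Sketch (synthetic numbers): an "affine + piecewise-linear" reaction
# potential, with hypothetical offset and slopes, and a least-squares
# recovery of its parameters from sampled (charge, potential) pairs.
q = np.linspace(-1.0, 1.0, 21)
phi = -0.45 + np.where(q < 0, -1.2, -0.9) * q    # offset + two slopes

# design matrix: intercept, negative-branch slope, positive-branch slope
X = np.column_stack([np.ones_like(q),
                     np.minimum(q, 0.0),
                     np.maximum(q, 0.0)])
params, *_ = np.linalg.lstsq(X, phi, rcond=None)
```

Because the min/max basis reproduces the piecewise-linear form exactly, the fit recovers the offset and both slopes to machine precision.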
Bardhan, Jaydeep P.; Jungwirth, Pavel; Makowski, Lee
2012-01-01
Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular “linear response” model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution). PMID:23020318
NASA Astrophysics Data System (ADS)
Hadida, Jonathan; Desrosiers, Christian; Duong, Luc
2011-03-01
The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image guided surgery. Even though very robust and precise methods have been developed to help achieve a reliable segmentation (level sets, active contours, etc.), it remains very time-consuming both in terms of manual interactions and in terms of computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, a hierarchical clustering is achieved to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up structural knowledge about the image. Several interactive features for segmentation are presented, for instance association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
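A minimal sketch of the first step, histogram clustering to a piecewise-constant mask, using local histogram maxima as cluster centers on synthetic two-mode data; the binning and test intensities are assumptions, not the study's actual procedure:

```python
import numpy as np

# Sketch (hypothetical intensities): bin the intensity histogram, keep its
# local maxima as cluster centers, and assign every sample to the nearest
# center, yielding a piecewise-constant mask of the image intensities.
def histogram_cluster(img, bins=64):
    hist, edges = np.histogram(img, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks = [i for i in range(1, bins - 1)
             if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
    modes = centers[peaks]
    labels = np.argmin(np.abs(img[..., None] - modes), axis=-1)
    return modes[labels]                  # piecewise-constant result

rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(0.2, 0.02, 5000),
                      rng.normal(0.8, 0.02, 5000)])
mask = histogram_cluster(img)
```

On this two-mode test signal every sample from the low-intensity component maps to a low mode and vice versa, i.e., the mask separates the two "tissues" cleanly.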
Comparison between PVI2D and Abreu–Johnson’s Model for Petroleum Vapor Intrusion Assessment
Yao, Yijun; Wang, Yue; Verginelli, Iason; Suuberg, Eric M.; Ye, Jianfeng
2018-01-01
Recently, we have developed a two-dimensional analytical petroleum vapor intrusion model, PVI2D (petroleum vapor intrusion, two-dimensional), which can help users to easily visualize soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, reaction rate constant, soil characteristics, and building features. In this study, we made a full comparison of the results returned by PVI2D and those obtained using Abreu and Johnson’s three-dimensional numerical model (AJM). These comparisons, examined as a function of the source strength, source depth, and reaction rate constant, show that PVI2D can provide similar soil gas concentration profiles and source-to-indoor air attenuation factors (within one order of magnitude difference) as those by the AJM. The differences between the two models can be ascribed to some simplifying assumptions used in PVI2D and to some numerical limitations of the AJM in simulating strictly piecewise aerobic biodegradation and no-flux boundary conditions. Overall, the obtained results show that for cases involving homogenous source and soil, PVI2D can represent a valid alternative to more rigorous three-dimensional numerical models. PMID:29398981
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another in a two-design-variable problem with a known theoretical response function. Next, the methods are tested in a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
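A one-dimensional Moving Least Squares sketch (linear basis, Gaussian weight) illustrates the mechanism: a small weighted least-squares problem is solved at every query point. The weight scale h is a hypothetical tuning parameter, and the paper's MLS may differ in basis and weight choices:

```python
import numpy as np

# Sketch: MLS with basis {1, (x_i - x)} and a Gaussian weight centered at
# the query point x; coef[0] is the local fit evaluated at x itself.
def mls_eval(xq, xs, ys, h=0.2):
    out = []
    for x in np.atleast_1d(xq):
        sw = np.exp(-0.5 * ((xs - x) / h) ** 2)   # sqrt of the Gaussian weight
        A = np.column_stack([np.ones_like(xs), xs - x]) * sw[:, None]
        coef, *_ = np.linalg.lstsq(A, ys * sw, rcond=None)
        out.append(coef[0])
    return np.array(out)

xs = np.linspace(0.0, 1.0, 21)        # design sites
ys = xs ** 2                          # known response
xq = np.linspace(0.1, 0.9, 17)        # query points
err = np.max(np.abs(mls_eval(xq, xs, ys) - xq ** 2))
```

With a linear basis the error on a quadratic response is O(h^2); richer bases (as used for C1/C2 continuity) would drive it lower.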
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of nonhomogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
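Generating waiting times for a nonhomogeneous Poisson process with a piecewise constant rate, as the procedure above requires, reduces to inverting a piecewise linear cumulative hazard. A sketch of that inversion (function name and interface are illustrative, not the authors' code); drawing the target hazard from Exp(1) turns it into a sampler.

```python
import math

def nhpp_waiting_time(breaks, rates, target_hazard):
    """Invert the cumulative hazard of a piecewise-constant rate.

    breaks: right endpoints of the rate pieces (the last piece extends
    to infinity); rates: the constant rate on each piece. Returns the
    time t at which the integrated rate first reaches target_hazard.
    Drawing target_hazard ~ Exp(1) makes this the inversion-method
    waiting-time sampler for a nonhomogeneous Poisson process.
    """
    t, accumulated = 0.0, 0.0
    for end, rate in zip(list(breaks) + [math.inf], rates):
        seg = end - t
        if accumulated + rate * seg >= target_hazard:
            return t + (target_hazard - accumulated) / rate
        accumulated += rate * seg
        t = end
    raise ValueError("rate vanishes before target hazard is reached")

# Rate 2 on [0, 1), rate 1 afterwards; hazard 3 is reached at t = 2.
print(nhpp_waiting_time([1.0], [2.0, 1.0], 3.0))  # 2.0
```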
Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)
NASA Astrophysics Data System (ADS)
Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya
2017-08-01
Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
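The abstract's point that the vector norm inside the BV seminorm matters can be illustrated with the discrete total variation of simple images: the anisotropic (l1) and isotropic (l2) variants agree on axis-aligned jumps but differ as soon as the gradient has a diagonal direction. A sketch (forward differences with replicated edges; not the paper's primal-dual algorithm):

```python
import numpy as np

def discrete_tv(u, norm="iso"):
    """Discrete TV seminorm of a 2-D array using forward differences.
    norm="iso": Euclidean (l2) norm of the gradient vector;
    norm="aniso": l1 norm -- the choice the paper shows matters for
    piecewise constant approximations. (Illustrative sketch.)"""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # replicate the last column
    dy = np.diff(u, axis=0, append=u[-1:, :])  # replicate the last row
    if norm == "iso":
        return np.sqrt(dx ** 2 + dy ** 2).sum()
    return (np.abs(dx) + np.abs(dy)).sum()

vertical = np.kron([[0, 1]], np.ones((4, 2)))          # axis-aligned jump
ramp = np.add.outer(np.arange(4.0), np.arange(4.0))    # diagonal gradient
print(discrete_tv(vertical, "iso"), discrete_tv(vertical, "aniso"))  # equal
print(discrete_tv(ramp, "iso"), discrete_tv(ramp, "aniso"))          # iso < aniso
```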
On piecewise interpolation techniques for estimating solar radiation missing values in Kedah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
2014-12-04
This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing solar radiation values in Kedah. An hourly solar radiation dataset was collected at the Alor Setar Meteorology Station of the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the starts and ends of the intervals. We compare the performance of the proposed method with existing methods using the Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD) computed on simulated missing-value datasets. The results show that our method outperforms the previous methods.
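The construction above, a cubic on each interval pinned down by endpoint values and prescribed endpoint first derivatives, can be sketched in the cubic Hermite basis (the Ball and Bézier bases the authors use are equivalent reparametrizations of the same cubic):

```python
def hermite_segment(t, y0, y1, d0, d1):
    """Cubic Hermite polynomial on [0, 1] with endpoint values y0, y1
    and endpoint derivatives d0, d1 (already scaled to the unit interval)."""
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y0 + h10 * d0 + h01 * y1 + h11 * d1

def piecewise_interpolate(xs, ys, ds, x):
    """Evaluate the piecewise cubic through knots xs with values ys and
    prescribed first derivatives ds at the point x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            h = xs[i + 1] - xs[i]
            t = (x - xs[i]) / h
            return hermite_segment(t, ys[i], ys[i + 1], h * ds[i], h * ds[i + 1])
    raise ValueError("x outside the knot range")

# Sanity check: a cubic is reproduced exactly when its true derivatives
# are prescribed at the knots.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [v ** 3 for v in xs]
ds = [3 * v ** 2 for v in xs]
print(piecewise_interpolate(xs, ys, ds, 1.5))  # 3.375 = 1.5**3
```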
Fitting the constitution type Ia supernova data with the redshift-binned parametrization method
NASA Astrophysics Data System (ADS)
Huang, Qing-Guo; Li, Miao; Li, Xiao-Dong; Wang, Shuang
2009-10-01
In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ, where the equation of state (EOS) w and the energy density ρΛ of DE rapidly decrease with increasing redshift z. Inspired by this clue, we separate the redshifts into different bins and discuss models with a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ also rapidly decrease with increasing z, consistent with the result of the CPL model. Moreover, a step function model in which ρΛ rapidly decreases at redshift z ≈ 0.331 presents a significant improvement (Δχ² = -4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of the DE density of this model, and find that it deviates from the cosmological constant Λ at 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or more interestingly from the nature of DE itself. In addition, for models with the same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
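The redshift-binned idea above, with a constant fitted per bin, has a closed form: for a chi-square objective the optimal constant in each bin is the inverse-variance weighted mean, and adding bins can only lower the chi-square. A toy sketch with made-up numbers (not the Constitution SNIa analysis):

```python
import numpy as np

def chi2(model, data, sigma):
    return float(np.sum(((data - model) / sigma) ** 2))

def binned_constant_fit(z, data, sigma, edges):
    """Best-fit piecewise-constant model over redshift bins: within each
    bin the chi^2-optimal constant is the inverse-variance weighted mean."""
    model = np.empty_like(data)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (z >= lo) & (z < hi)
        w = 1.0 / sigma[mask] ** 2
        model[mask] = np.sum(w * data[mask]) / np.sum(w)
    return model

z = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.8])
rho = np.array([1.0, 1.1, 0.9, 0.5, 0.4, 0.6])  # toy "DE density" values
sig = np.full(6, 0.1)
one_bin = binned_constant_fit(z, rho, sig, [0.0, 1.0])
two_bin = binned_constant_fit(z, rho, sig, [0.0, 0.5, 1.0])
# Negative Delta chi^2 means the two-bin step model fits better.
print(chi2(two_bin, rho, sig) - chi2(one_bin, rho, sig))
```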
Time-temperature effect in adhesively bonded joints
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1981-01-01
The viscoelastic analysis of an adhesively bonded lap joint was reconsidered. The adherends are approximated as Reissner plates and the adhesive is linearly viscoelastic, modeled with hereditary integrals. A system of linear integro-differential equations for the shear and tensile stresses in the adhesive is obtained. The equations have constant coefficients and are solved using Laplace transforms. It is shown that if the temperature variation in time can be approximated by a piecewise constant function, then the method of Laplace transforms can still be used to solve the problem. A numerical example is given for a single lap joint under various loading conditions.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
In the formulations of earlier Displacement Transfer Functions for structure shape predictions, the surface strain distributions along a strain-sensing line were represented with piecewise linear functions. To improve the shape-prediction accuracies, Improved Displacement Transfer Functions were formulated using piecewise nonlinear strain representations. Through discretization of an embedded beam (the depth-wise cross section of a structure along a strain-sensing line) into multiple small domains, piecewise nonlinear functions were used to describe the surface strain distributions along the discretized embedded beam. Such a piecewise approach enabled piecewise integration of the embedded beam curvature equations to yield slope and deflection equations in recursive form. The resulting Improved Displacement Transfer Functions, written in summation form, were expressed in terms of beam geometrical parameters and surface strains along the strain-sensing line. By feeding the surface strains into the Improved Displacement Transfer Functions, structural deflections could be calculated at multiple points for mapping out the overall structural deformed shapes for visual display. The shape-prediction accuracies of the Improved Displacement Transfer Functions were then examined against finite-element-calculated deflections for different tapered cantilever tubular beams. It was found that by using the piecewise nonlinear strain representations, the shape-prediction accuracies could be greatly improved, especially for highly tapered cantilever tubular beams.
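The core step above, recovering slopes and deflections by piecewise integration of the curvature inferred from surface strains, can be sketched numerically with trapezoidal integration (a simplified stand-in for the paper's closed-form Displacement Transfer Functions; the symbol `c` for the neutral-axis-to-surface distance follows common beam notation, not necessarily the paper's):

```python
import numpy as np

def deflections_from_strain(x, strain, c):
    """Cantilever slopes and deflections from surface strains.

    Curvature kappa = strain / c is integrated twice along the sensing
    line (trapezoidal rule), with zero slope and deflection at the root.
    """
    kappa = np.asarray(strain) / c
    slope = np.concatenate([[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))])
    defl = np.concatenate([[0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])
    return slope, defl

# Uniform curvature kappa0: the analytic tip deflection is kappa0 * L**2 / 2.
L, c, kappa0 = 2.0, 0.05, 0.01
x = np.linspace(0.0, L, 21)
strain = np.full_like(x, kappa0 * c)
slope, defl = deflections_from_strain(x, strain, c)
print(defl[-1], kappa0 * L ** 2 / 2)
```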
A boundary-value problem for a first-order hyperbolic system in a two-dimensional domain
NASA Astrophysics Data System (ADS)
Zhura, N. A.; Soldatov, A. P.
2017-06-01
We consider a strictly hyperbolic first-order system of three equations with constant coefficients in a bounded piecewise-smooth domain. The boundary of the domain is assumed to consist of six smooth non-characteristic arcs. A boundary-value problem in this domain is posed by alternately prescribing one or two linear combinations of the components of the solution on these arcs. We show that this problem has a unique solution under certain additional conditions on the coefficients of these combinations, the boundary of the domain and the behaviour of the solution near the characteristics passing through the corner points of the domain.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
2017-01-01
Schemes III (piecewise linear) and V (piecewise parabolic) of Van Leer are shown to yield identical solutions provided the initial conditions are chosen in an appropriate manner. This result is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The result also shows a key connection between the approaches of discontinuous and continuous representations.
Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation
NASA Astrophysics Data System (ADS)
Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin
2018-04-01
Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
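The piecewise constant Mumford-Shah (Potts) model that the learned features are matched to has, in one dimension, an exact dynamic-programming minimizer. A small sketch of that 1-D segmentation model (illustrative only; the paper works on 2-D feature images):

```python
import numpy as np

def potts_segmentation(y, gamma):
    """Exact minimizer of the 1-D Potts functional
    sum (y - u)^2 + gamma * (#jumps of u), by dynamic programming over
    the position of the last segment boundary."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def seg_cost(l, r):                               # squared deviation of y[l:r] from its mean
        return s2[r] - s2[l] - (s1[r] - s1[l]) ** 2 / (r - l)

    best = np.empty(n + 1)
    prev = np.zeros(n + 1, dtype=int)
    best[0] = -gamma                                  # the first segment incurs no jump penalty
    for r in range(1, n + 1):
        costs = [best[l] + gamma + seg_cost(l, r) for l in range(r)]
        prev[r] = int(np.argmin(costs))
        best[r] = costs[prev[r]]

    u, r = np.empty(n), n                             # backtrack; each segment takes its mean
    while r > 0:
        l = prev[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u

print(potts_segmentation([0.1, -0.1, 0.0, 5.1, 4.9, 5.0], gamma=1.0))
```

The jump penalty `gamma` trades data fidelity against the number of segments, which is exactly the role it plays in the segmentation model the features are trained to match.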
NASA Astrophysics Data System (ADS)
Hilbert, Stefan; Dunkel, Jörn
2006-07-01
We calculate exactly both the microcanonical and canonical thermodynamic functions (TDFs) for a one-dimensional model system with piecewise constant Lennard-Jones type pair interactions. In the case of an isolated N-particle system, the microcanonical TDFs exhibit (N-1) singular (nonanalytic) microscopic phase transitions of the formal order N/2, separating N energetically different evaporation (dissociation) states. In a suitably designed evaporation experiment, these types of phase transitions should manifest themselves in the form of pressure and temperature oscillations, indicating cooling by evaporation. In the presence of a heat bath (thermostat), such oscillations are absent, but the canonical heat capacity shows a characteristic peak, indicating the temperature-induced dissociation of the one-dimensional chain. The distribution of complex zeros of the canonical partition function may be used to identify different degrees of dissociation in the canonical ensemble.
Verginelli, Iason; Yao, Yijun; Suuberg, Eric M.
2017-01-01
In this study we present a petroleum vapor intrusion tool implemented in Microsoft® Excel® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet. PMID:28163564
A Variational Nodal Approach to 2D/1D Pin Resolved Neutron Transport for Pressurized Water Reactors
Zhang, Tengfei; Lewis, E. E.; Smith, M. A.; ...
2017-04-18
A two-dimensional/one-dimensional (2D/1D) variational nodal approach is presented for pressurized water reactor core calculations without fuel-moderator homogenization. A 2D/1D approximation to the within-group neutron transport equation is derived and converted to an even-parity form. The corresponding nodal functional is presented and discretized to obtain response matrix equations. Within the nodes, finite elements in the x-y plane and orthogonal functions in z are used to approximate the spatial flux distribution. On the radial interfaces, orthogonal polynomials are employed; on the axial interfaces, piecewise constants corresponding to the finite elements eliminate the interface homogenization that has been a challenge for method of characteristics (MOC)-based 2D/1D approximations. The angular discretization utilizes an even-parity integral method within the nodes, and low-order spherical harmonics (PN) on the axial interfaces. The x-y surfaces are treated with high-order PN combined with quasi-reflected interface conditions. Furthermore, the method is applied to the C5G7 benchmark problems and compared to Monte Carlo reference calculations.
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
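The simplicial piecewise-affine functions underlying these virtual sensors interpolate vertex values linearly on each simplex of a partitioned domain. A minimal sketch of the elementary building block, a unit square split into two triangles (illustrative only; the paper's sensors live in higher-dimensional input spaces):

```python
def pwa_unit_square(p, f00, f10, f01, f11):
    """Evaluate a simplicial piecewise-affine function on the unit square
    split by the diagonal x + y = 1, given the four vertex values."""
    x, y = p
    if x + y <= 1.0:  # triangle with vertices (0,0), (1,0), (0,1)
        return f00 + x * (f10 - f00) + y * (f01 - f00)
    # triangle with vertices (1,1), (1,0), (0,1)
    return f11 + (1.0 - x) * (f01 - f11) + (1.0 - y) * (f10 - f11)

# An affine function g(x, y) = 1 + 2x + 3y is reproduced exactly on both triangles.
g = lambda x, y: 1.0 + 2.0 * x + 3.0 * y
vals = (g(0, 0), g(1, 0), g(0, 1), g(1, 1))
print(pwa_unit_square((0.2, 0.3), *vals), g(0.2, 0.3))
print(pwa_unit_square((0.8, 0.7), *vals), g(0.8, 0.7))
```

The "curse of dimensionality" the paper mitigates is visible even here: a full simplicial grid needs a number of vertex values exponential in the input dimension.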
A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto
2016-05-01
This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases than a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system to which model predictive control was applied without an a priori guarantee of stability.
MODELING FUNCTIONALLY GRADED INTERPHASE REGIONS IN CARBON NANOTUBE REINFORCED COMPOSITES
NASA Technical Reports Server (NTRS)
Seidel, G. D.; Lagoudas, D. C.; Frankland, S. J. V.; Gates, T. S.
2006-01-01
A combination of micromechanics methods and molecular dynamics simulations are used to obtain the effective properties of the carbon nanotube reinforced composites with functionally graded interphase regions. The multilayer composite cylinders method accounts for the effects of non-perfect load transfer in carbon nanotube reinforced polymer matrix composites using a piecewise functionally graded interphase. The functional form of the properties in the interphase region, as well as the interphase thickness, is derived from molecular dynamics simulations of carbon nanotubes in a polymer matrix. Results indicate that the functional form of the interphase can have a significant effect on all the effective elastic constants except for the effective axial modulus for which no noticeable effects are evident.
WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS
MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN
2013-01-01
Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibility in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high-order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h1.75) to O(h2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935
Lyapunov vector function method in the motion stabilisation problem for nonholonomic mobile robot
NASA Astrophysics Data System (ADS)
Andreev, Aleksandr; Peregudova, Olga
2017-07-01
In this paper we propose a sampled-data control law for the stabilisation of nonstationary motion of a nonholonomic mobile robot. We assume that the robot moves on a horizontal surface without slipping. The dynamical model of the mobile robot is considered. The robot has one free front wheel and two rear wheels which are controlled by two independent electric motors. We assume that the controls are piecewise constant signals. The controller design relies on a backstepping procedure using the Lyapunov vector-function method. Theoretical considerations are verified by numerical simulation.
Robust and Quantized Wiener Filters for p-Point Spectral Classes.
1980-01-01
AFOSR-TR-80-0425. School of Electrical Engineering, Philadelphia, PA 19104. Abstract (fragmentary; the report documentation form was garbled in extraction): In Section III, we show that a piecewise constant filter also possesses ... determining the optimum piecewise constant filter ... using a band model for the PSDs. Poor [3, 4] then considered ...
NASA Technical Reports Server (NTRS)
Childs, A. G.
1971-01-01
A discrete steepest ascent method which allows controls that are not piecewise constant (for example, it allows all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak, with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then, for some undetermined reason, loses its exponential convergence rate. A comparison was also made with the algorithm of Canon, Cullum, and Polak using piecewise constant controls. This algorithm is very competitive with the continuous algorithm.
Hanni, Matti; Lantto, Perttu; Ilias, Miroslav; Jensen, Hans Jorgen Aagaard; Vaara, Juha
2007-10-28
Relativistic effects on the ¹²⁹Xe nuclear magnetic resonance shielding and ¹³¹Xe nuclear quadrupole coupling (NQC) tensors are examined in the weakly bound Xe₂ system at different levels of theory, including the relativistic four-component Dirac-Hartree-Fock (DHF) method. The intermolecular interaction-induced binary chemical shift δ, the anisotropy of the shielding tensor Δσ, and the NQC constant along the internuclear axis χ∥ are calculated as a function of the internuclear distance. DHF shielding calculations are carried out using gauge-including atomic orbitals. For comparison, the full leading-order one-electron Breit-Pauli perturbation theory (BPPT) is applied using a common gauge origin. Electron correlation effects are studied at the nonrelativistic (NR) coupled-cluster singles and doubles with perturbational triples [CCSD(T)] level of theory. The fully relativistic second-order Møller-Plesset many-body perturbation (DMP2) theory is used to examine the cross coupling between correlation and relativity on NQC. The same is investigated for δ and Δσ by BPPT with a density functional theory model. A semiquantitative agreement between the BPPT and DHF binary property curves is obtained for δ and Δσ in Xe₂. For these properties, the currently most complete theoretical description is obtained by a piecewise approximation where the uncorrelated relativistic DHF results obtained close to the basis-set limit are corrected, on the one hand, for NR correlation effects and, on the other hand, for the BPPT-based cross coupling of relativity and correlation. For χ∥, the fully relativistic DMP2 results obtain a correction for NR correlation effects beyond MP2. The computed temperature dependence of the second virial coefficient of the ¹²⁹Xe nuclear shielding is compared to experiment in Xe gas. Our best results, obtained with the piecewise approximation for the binary chemical shift combined with the previously published state-of-the-art theoretical potential energy curve for Xe₂, are in excellent agreement with the experiment for the first time.
SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM
A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2017-01-01
In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection: the first three are piecewise linear, and the last two piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells and can be considered a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the ongoing "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number k of degrees of freedom per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2k-1, which is higher than the expected order k.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
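The key numerical device described above, switching to a series approximation of a divided difference in a neighbourhood of its removable singularity, can be sketched for the scalar first-order divided difference of the exponential (the paper's formulas are for matrix exponentials in the X-IVAS scheme; the threshold `tol` here is an illustrative choice):

```python
import math

def exp_divided_difference(a, b, tol=1e-3):
    """First-order divided difference of exp: exp[a, b] = (e^b - e^a) / (b - a).

    Near the removable singularity a = b the direct formula suffers
    catastrophic cancellation, so the identity
    exp[a, b] = e^{(a+b)/2} * sinh(h)/h with h = (b-a)/2 is used,
    with sinh(h)/h expanded in series.
    """
    h = 0.5 * (b - a)
    if abs(h) > tol:
        return (math.exp(b) - math.exp(a)) / (b - a)
    sinhc = 1.0 + h * h / 6.0 + h ** 4 / 120.0  # series for sinh(h)/h
    return math.exp(0.5 * (a + b)) * sinhc

print(exp_divided_difference(1.0, 1.0))  # exactly e at the coincident node
print(exp_divided_difference(0.0, 1.0))  # e - 1
```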
SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM. (R827028)
A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher-order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable o...
Identification of Piecewise Linear Uniform Motion Blur
NASA Astrophysics Data System (ADS)
Patanukhom, Karn; Nishihara, Akinori
A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models which consist of more than one linear motion component. The proposed scheme includes three modules that are a motion direction estimator, a motion length estimator and a motion combination selector. In order to identify the motion directions, the proposed scheme is based on a trial restoration by using directional forward ramp motion blurs along different directions and an analysis of directional information via frequency domain by using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed for estimation of the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of non-flat component of trial restored results. Experimental examples of simulated and real world blurred images are given to demonstrate a promising performance of the proposed scheme.
Ke, Jing; Dou, Hanfei; Zhang, Ximin; Uhagaze, Dushimabararezi Serge; Ding, Xiali; Dong, Yuming
2016-12-01
As a mono-sodium salt form of alendronic acid, alendronate sodium presents multi-level ionization for the dissociation of its four hydroxyl groups. The dissociation constants of alendronate sodium were determined in this work by studying the piecewise linear relationship between the volume of titrant and the pH value, based on an acid-base potentiometric titration reaction. The distribution curves of alendronate sodium were drawn according to the determined pKa values. There were 4 dissociation constants (pKa1 = 2.43, pKa2 = 7.55, pKa3 = 10.80 and pKa4 = 11.99, respectively) of alendronate sodium, and 12 species, of which 4 could be ignored, existing in different pH environments.
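Given the reported pKa values, the pH-dependent distribution curves follow from standard alpha-fraction algebra for a tetraprotic acid. A generic sketch (illustrative function names, not the authors' computation):

```python
# Distribution (alpha) fractions of the five protonation states of a
# tetraprotic acid as a function of pH, using the pKa values quoted in the
# abstract: 2.43, 7.55, 10.80, 11.99.

PKAS = [2.43, 7.55, 10.80, 11.99]

def alpha_fractions(ph, pkas=PKAS):
    """Return [alpha_0, ..., alpha_4], where alpha_j is the fraction of the
    species with j protons removed."""
    h = 10.0 ** (-ph)
    kas = [10.0 ** (-pk) for pk in pkas]
    n = len(kas)
    terms = []
    for j in range(n + 1):
        t = h ** (n - j)          # remaining proton factor
        for ka in kas[:j]:        # product Ka_1 ... Ka_j
            t *= ka
        terms.append(t)
    total = sum(terms)
    return [t / total for t in terms]
```

At a pH equal to a pKa the two neighboring species are equally abundant, and between pKa2 and pKa3 the doubly deprotonated form dominates.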
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehtikangas, O., E-mail: Ossi.Lehtikangas@uef.fi; Tarvainen, T.; Department of Computer Science, University College London, Gower Street, London WC1E 6BT
2015-02-01
The radiative transport equation can be used as a light transport model in a medium with scattering particles, such as biological tissues. In the radiative transport equation, the refractive index is assumed to be constant within the medium. However, in biomedical media, changes in the refractive index can occur between different tissue types. In this work, light propagation in a medium with piecewise constant refractive index is considered. Light propagation in each sub-domain with a constant refractive index is modeled using the radiative transport equation, and the equations are coupled using boundary conditions describing Fresnel reflection and refraction phenomena on the interfaces between the sub-domains. The resulting coupled system of radiative transport equations is numerically solved using a finite element method. The approach is tested with simulations. The results show that this coupled system describes light propagation accurately through comparison with the Monte Carlo method. It is also shown that neglecting the internal changes of the refractive index can lead to erroneous boundary measurements of scattered light.
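The interface coupling rests on the classical Fresnel relations. A minimal sketch of the unpolarized power reflectance entering such boundary conditions, with total internal reflection handled explicitly (illustrative only, not the authors' finite element code):

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized power reflectance at a planar interface n1 -> n2.

    theta_i: angle of incidence in radians, measured from the normal.
    Returns 1.0 beyond the critical angle (total internal reflection).
    """
    sin_t = n1 * math.sin(theta_i) / n2      # Snell's law
    if abs(sin_t) >= 1.0:
        return 1.0                            # total internal reflection
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)
    return 0.5 * (r_s * r_s + r_p * r_p)      # average of s and p polarization
```

At normal incidence this reduces to ((n1 - n2)/(n1 + n2))^2, the familiar few-percent reflection at a tissue boundary.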
Computation of the anharmonic orbits in two piecewise monotonic maps with a single discontinuity
NASA Astrophysics Data System (ADS)
Li, Yurong; Du, Zhengdong
2017-02-01
In this paper, the bifurcation values for two typical piecewise monotonic maps with a single discontinuity are computed. The variation of the parameter of those maps leads to a sequence of border-collision and period-doubling bifurcations, generating a sequence of anharmonic orbits on the boundary of chaos. The border-collision and period-doubling bifurcation values are computed by the word-lifting technique and the Maple fsolve function or the Newton-Raphson method, respectively. The scaling factors which measure the convergent rates of the bifurcation values and the width of the stable periodic windows, respectively, are investigated. We found that these scaling factors depend on the parameters of the maps, implying that they are not universal. Moreover, if one side of the maps is linear, our numerical results suggest that those quantities converge increasingly. In particular, for the linear-quadratic case, they converge to one of the Feigenbaum constants, δ_F = 4.66920160…
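The ratio-of-gaps construction behind δ_F can be reproduced in a few lines on the smooth logistic map (not the discontinuous maps studied in the paper). Superstable parameters are bracketed by extrapolation and refined by bisection; the 4.7 divisor and the bracket widths below are ad hoc choices:

```python
# Estimate the Feigenbaum constant from superstable parameters R_n of the
# logistic map x -> r x (1 - x), where the critical point x = 1/2 lies on a
# cycle of period 2^(n-1). Gap ratios (R_n - R_(n-1))/(R_(n+1) - R_n)
# approach delta_F = 4.66920160...

def g(r, period):
    """Iterate the map `period` times from x = 1/2; zero at superstable r."""
    x = 0.5
    for _ in range(period):
        x = r * x * (1.0 - x)
    return x - 0.5

def bisect(f, lo, hi, iters=80):
    flo, fhi = f(lo), f(hi)
    assert flo * fhi < 0.0, "bracket must straddle the root"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if flo * fm <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fm
    return 0.5 * (lo + hi)

def superstable_params(levels):
    rs = [2.0, 1.0 + 5.0 ** 0.5]           # exact R_1 and R_2
    for n in range(3, levels + 1):
        gap = (rs[-1] - rs[-2]) / 4.7       # extrapolated next gap (ad hoc)
        rs.append(bisect(lambda r: g(r, 2 ** (n - 1)),
                         rs[-1] + 0.95 * gap, rs[-1] + 1.05 * gap))
    return rs

rs = superstable_params(6)
deltas = [(rs[i] - rs[i - 1]) / (rs[i + 1] - rs[i]) for i in range(1, len(rs) - 1)]
```

The successive estimates 4.709, 4.681, … close in on δ_F to three digits already at the sixth superstable parameter.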
A simple finite element method for the Stokes equations
Mu, Lin; Ye, Xiu
2017-03-21
The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in the primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.
Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots
ERIC Educational Resources Information Center
Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.
2013-01-01
Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
Limit cycles via higher order perturbations for some piecewise differential systems
NASA Astrophysics Data System (ADS)
Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan
2018-05-01
A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x′, y′) = (−y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn-1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbations in ε and showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.
Least Squares Approximation By G1 Piecewise Parametric Cubes
1993-12-01
Parametric piecewise cubic polynomials are used throughout... piecewise parametric cubic polynomial to a sequence of ordered points in the plane. Cubic Bézier curves are used as a basis. The parameterization, the
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergence is numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.
Spike solutions in Gierer–Meinhardt model with a time dependent anomaly exponent
NASA Astrophysics Data System (ADS)
Nec, Yana
2018-01-01
Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r²⟩ ∼ t^γ(t) might at times be approximated by a piecewise constant function, or it can be a continuous function. Variable order differential equations are an emerging mathematical tool with a strong potential to model these systems. However, variable order differential equations are not tractable by the classic theory of differential equations. This contribution illustrates how a classic method can be adapted to gain insight into a system of this type. Herein a variable order Gierer-Meinhardt model is posed, a generic reaction-diffusion system of chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).
Statistical methods for investigating quiescence and other temporal seismicity patterns
Matthews, M.V.; Reasenberg, P.A.
1988-01-01
We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
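The quiescence model itself, a Poisson process whose piecewise constant intensity drops during the quiet interval, is straightforward to simulate; simulations of this kind also underlie approximations of null distributions. A minimal sketch (illustrative names; the Brownian-bridge statistic is not implemented):

```python
import random

# Simulate a Poisson process with a piecewise constant intensity in time,
# the seismicity-with-quiescence model of the abstract. Each constant-rate
# segment is sampled with exponential inter-event times; by memorylessness,
# restarting the exponential clock at each segment boundary is exact.

def simulate_pw_poisson(breakpoints, rates, rng=random):
    """breakpoints: segment edges [t0, t1, ..., tk]; rates: k intensities."""
    events = []
    for (a, b), lam in zip(zip(breakpoints, breakpoints[1:]), rates):
        if lam <= 0.0:
            continue
        t = a + rng.expovariate(lam)
        while t < b:
            events.append(t)
            t += rng.expovariate(lam)
    return events
```

A quiescent period is then simply a middle segment with a much smaller rate than its neighbors.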
Solutions of some problems in applied mathematics using MACSYMA
NASA Technical Reports Server (NTRS)
Punjabi, Alkesh; Lam, Maria
1987-01-01
Various Symbolic Manipulation Programs (SMP) were tested to check the functioning of their commands and their suitability under various operating systems. Support systems for SMP were found to be relatively better than the one for MACSYMA. The graphics facilities for MACSYMA do not work as expected under the UNIX operating system. Not all commands for MACSYMA function as described in the manuals. Shape representation is a central issue in computer graphics and computer-aided design. Aside from appearance, there are other application-dependent, desirable properties like continuity to a certain order, symmetry, axis-independence, and variation-diminishing properties. Several shape representations are studied, which include the Osculatory Method, a Piecewise Cubic Polynomial Method using two different slope estimates, the Piecewise Cubic Hermite Form, a method by Harry McLaughlin, and a Piecewise Bézier Method. They are applied to collected physical and chemical data. Relative merits and demerits of these methods are examined. The kinematics of a single-link, non-dissipative robot arm is studied using MACSYMA. The Lagrangian is set up and Lagrange's equations are derived. From there, Hamilton's equations of motion are obtained. The equations suggest that bifurcation of solutions can occur, depending upon the value of a single parameter. Using the characteristic function W, the Hamilton-Jacobi equation is derived. It is shown that the H-J equation can be solved in closed form. Analytical solutions to the H-J equation are obtained.
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.
NASA Astrophysics Data System (ADS)
Wang, Qingzhi; Tan, Guanzheng; He, Yong; Wu, Min
2017-10-01
This paper considers the stability analysis of piecewise non-linear systems and applies it to the intermittent synchronisation of chaotic systems. First, based on piecewise Lyapunov function methods, more general and less conservative stability criteria for piecewise non-linear systems in the periodic and aperiodic cases are presented. Next, intermittent synchronisation conditions for chaotic systems are derived which extend existing results. Finally, Chua's circuit is taken as an example to verify the validity of our methods.
H∞ control problem of linear periodic piecewise time-delay systems
NASA Astrophysics Data System (ADS)
Xie, Xiaochen; Lam, James; Li, Panshuo
2018-04-01
This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.
Effective Methods for Solving Band SLEs after Parabolic Nonlinear PDEs
NASA Astrophysics Data System (ADS)
Veneva, Milena; Ayriyan, Alexander
2018-04-01
A class of models of heat transfer processes in a multilayer domain is considered. The governing equation is a nonlinear heat-transfer equation with different temperature-dependent densities and thermal coefficients in each layer. Homogeneous Neumann boundary conditions and ideal contact ones are applied. A finite difference scheme on a special uneven mesh with a second-order approximation in the case of a piecewise constant spatial step is built. This discretization leads to a pentadiagonal system of linear equations (SLEs) with a matrix which is neither diagonally dominant nor positive definite. Two different methods for solving such an SLE are developed: diagonal dominantization and symbolic algorithms.
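For context, the elimination that such algorithms refine can be sketched as a pentadiagonal forward sweep without pivoting; it is precisely this sweep that can break down when the matrix is neither diagonally dominant nor positive definite, which motivates the paper's diagonal dominantization and symbolic approaches. The band layout and names below are illustrative assumptions:

```python
def penta_solve(e, c, d, a, b, y):
    """Solve a pentadiagonal system (n >= 3) by Gaussian elimination without
    pivoting. Bands are length-n lists padded with zeros: row i holds
    e[i] (col i-2), c[i] (col i-1), d[i] (diag), a[i] (col i+1), b[i] (col i+2).
    NOTE: without dominance or definiteness this sweep can hit a zero pivot,
    the very failure mode the abstract's algorithms are designed to handle."""
    n = len(d)
    c, d, a, b, y = c[:], d[:], a[:], b[:], y[:]
    for i in range(1, n):
        m = c[i] / d[i - 1]               # eliminate column i-1 from row i
        d[i] -= m * a[i - 1]
        a[i] -= m * b[i - 1]
        y[i] -= m * y[i - 1]
        if i + 1 < n:
            m2 = e[i + 1] / d[i - 1]      # eliminate column i-1 from row i+1
            c[i + 1] -= m2 * a[i - 1]
            d[i + 1] -= m2 * b[i - 1]
            y[i + 1] -= m2 * y[i - 1]
    x = [0.0] * n                          # back substitution (2 superdiagonals)
    x[n - 1] = y[n - 1] / d[n - 1]
    x[n - 2] = (y[n - 2] - a[n - 2] * x[n - 1]) / d[n - 2]
    for i in range(n - 3, -1, -1):
        x[i] = (y[i] - a[i] * x[i + 1] - b[i] * x[i + 2]) / d[i]
    return x
```

The sweep touches only the five bands, so the cost is O(n) rather than the O(n³) of dense elimination.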
Mass-corrections for the conservative coupling of flow and transport on collocated meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich
2016-01-15
Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions, to obtain local or even strong mass conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.
A tutorial on the piecewise regression approach applied to bedload transport data
Sandra E. Ryan; Laurie S. Porth
2007-01-01
This tutorial demonstrates the application of piecewise regression to bedload data to define a shift in phase of transport so that the reader may perform similar analyses on available data. The use of piecewise regression analysis implicitly recognizes different functions fit to bedload data over varying ranges of flow. The transition from primarily low rates of sand...
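The underlying computation, fitting a continuous two-segment linear model with an unknown breakpoint by least squares, can be sketched generically with a grid search over candidate breakpoints (illustrative names; not the tutorial's code):

```python
# Continuous two-segment ("broken-stick") piecewise linear regression with a
# grid search over the breakpoint, in the spirit of locating a shift in the
# phase of bedload transport. Model: y = b0 + b1*x + b2*max(0, x - psi).

def solve3(A, rhs):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            for j in range(k, 4):
                M[i][j] -= f * M[k][j]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_breakpoint(xs, ys, candidates):
    """Return (sse, breakpoint, [b0, b1, b2]) minimizing the sum of squares."""
    best = None
    for psi in candidates:
        cols = [[1.0, x, max(0.0, x - psi)] for x in xs]
        # normal equations (C^T C) b = C^T y for the fixed breakpoint psi
        A = [[sum(c[i] * c[j] for c in cols) for j in range(3)] for i in range(3)]
        rhs = [sum(c[i] * y for c, y in zip(cols, ys)) for i in range(3)]
        bc = solve3(A, rhs)
        sse = sum((y - (bc[0] + bc[1] * x + bc[2] * max(0.0, x - psi))) ** 2
                  for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, psi, bc)
    return best
```

The hinge term max(0, x - psi) enforces continuity at the breakpoint, so b2 is the change in slope between the two transport phases.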
Modifications of the PCPT method for HJB equations
NASA Astrophysics Data System (ADS)
Kossaczký, I.; Ehrhardt, M.; Günther, M.
2016-10-01
In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification is called the piecewise predicted policy timestepping (PPPT) method and, if properly used, it may be significantly faster. We quickly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method, and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite one. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint (TVIC) method, which enhances the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, the TV minimization is used as a strong regularizer to create a sharp-edged image converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions present the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object at the same time. Comparison between three different patterns of object illumination arrangement shows very small impact of the projection acquisition geometry on the image quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samet Y. Kadioglu
2011-12-01
We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge-averaged quantities, which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al in [17]. However, [17] is problematic when it is applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that the fourth order of accuracy in both space and time has been achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.
NASA Astrophysics Data System (ADS)
Bo, Zhang; Li, Jin-Ling; Wang, Guan-Gli
2002-01-01
We checked the dependence of the estimation of parameters on the choice of piecewise interval in the continuous piecewise linear modeling of the residual clock and atmosphere effects by analysis of 27 VLBI experiments involving Shanghai station (Seshan 25m). The following are tentatively shown: (1) Different choices of the piecewise interval lead to differences in the estimation of station coordinates and in the weighted root mean squares (wrms) of the delay residuals, which can be of the order of centimeters or dozens of picoseconds, respectively. So the choice of piecewise interval should not be arbitrary. (2) The piecewise interval should not be too long, otherwise the short-term variations in the residual clock and atmospheric effects cannot be properly modeled. On the other hand, in order to maintain enough degrees of freedom in the parameter estimation, the interval cannot be too short, otherwise the normal equation may become nearly or fully singular and the noise cannot be well constrained. Therefore the choice of the interval should be within some reasonable range. (3) Since the conditions of clock and atmosphere differ from experiment to experiment and from station to station, the reasonable range of the piecewise interval should be tested and chosen separately for each experiment as well as for each station by real data analysis. This is really arduous work in routine data analysis. (4) Generally speaking, with the default interval for the clock as 60 min, the reasonable range of the piecewise interval for residual atmospheric effect modeling is between 10 min and 40 min, while with the default interval for the atmosphere as 20 min, that for residual clock behavior is between 20 min and 100 min.
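The continuous piecewise linear model with a prescribed interval is a linear spline with equally spaced knots, fitted by least squares. A generic hat-function-basis sketch (illustrative names; not the VLBI analysis software):

```python
# Least-squares fit of a continuous piecewise linear function (linear spline)
# with fixed knots -- the kind of model used for residual clock/atmosphere
# behaviour, where the knot spacing is the "piecewise interval" under study.

def hat(t, knots, j):
    """Hat (linear B-spline) basis function attached to knot j, at time t."""
    k = len(knots)
    if j > 0 and knots[j - 1] <= t < knots[j]:
        return (t - knots[j - 1]) / (knots[j] - knots[j - 1])
    if j < k - 1 and knots[j] <= t < knots[j + 1]:
        return (knots[j + 1] - t) / (knots[j + 1] - knots[j])
    if j == k - 1 and t == knots[-1]:
        return 1.0
    return 0.0

def gauss_solve(A, rhs):
    """Dense Gaussian elimination with partial pivoting (small systems)."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for kcol in range(n):
        p = max(range(kcol, n), key=lambda i: abs(M[i][kcol]))
        M[kcol], M[p] = M[p], M[kcol]
        for i in range(kcol + 1, n):
            f = M[i][kcol] / M[kcol][kcol]
            for j in range(kcol, n + 1):
                M[i][j] -= f * M[kcol][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_linear_spline(ts, ys, knots):
    """Spline values at the knots minimizing the sum of squared residuals."""
    k = len(knots)
    B = [[hat(t, knots, j) for j in range(k)] for t in ts]
    A = [[sum(row[p] * row[q] for row in B) for q in range(k)] for p in range(k)]
    rhs = [sum(row[p] * y for row, y in zip(B, ys)) for p in range(k)]
    return gauss_solve(A, rhs)

def eval_spline(t, knots, coefs):
    return sum(cf * hat(t, knots, j) for j, cf in enumerate(coefs))
```

The trade-off discussed in the abstract appears here directly: more knots (a shorter interval) track short-term variations but leave fewer data points per coefficient, degrading the conditioning of the normal matrix.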
Fault detection for piecewise affine systems with application to ship propulsion systems.
Yang, Ying; Linlin, Li; Ding, Steven X; Qiu, Jianbin; Peng, Kaixiang
2017-09-09
In this paper, the design approach of non-synchronized diagnostic observer-based fault detection (FD) systems is investigated for piecewise affine processes via continuous piecewise Lyapunov functions. Considering that the dynamics of piecewise affine systems in different regions can be considerably different, weighting matrices are used to weight the residual of each region, so as to optimize the fault detectability. A numerical example and a case study on a ship propulsion system are presented to demonstrate the effectiveness of the proposed results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Control of mechanical systems by the mixed "time and expenditure" criterion
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
The optimal controlled motion of a mechanical system, determined by a linear system of ODEs with constant coefficients and piecewise constant control components, is considered. The number of control switching points and the heights of the control steps are taken as preset. The optimized functional is a combination of the classical time criterion and an "expenditure criterion" equal to the total area of all steps of all control components. In the absence of control, the solution of the system is equal to the sum of components (frequency components) corresponding to the different eigenvalues of the matrix of the ODE system. Admissible controls are those that turn to zero (at a time moment that is not predetermined) the previously chosen frequency components of the solution. An algorithm for finding the control switching points, based on the necessary minimum conditions for the mixed criterion, is proposed.
NASA Astrophysics Data System (ADS)
Vjačeslavov, N. S.
1980-02-01
In this paper estimates are found for L_p R_n(f), the least deviation in the L_p-metric, 0 < p ≤ ∞, of a piecewise analytic function f from the rational functions of degree at most n. It is shown that these estimates are sharp in a well-defined sense. Bibliography: 12 titles.
Cubic Zig-Zag Enrichment of the Classical Kirchhoff Kinematics for Laminated and Sandwich Plates
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2012-01-01
A detailed analysis and examples are presented that show how to enrich the kinematics of classical Kirchhoff plate theory by appending them with a set of continuous piecewise-cubic functions. This analysis is used to obtain functions that contain the effects of laminate heterogeneity and asymmetry on the variations of the inplane displacements and transverse shearing stresses, for use with a {3, 0} plate theory in which these distributions are specified a priori. The functions used for the enrichment are based on the improved zig-zag plate theory presented recently by Tessler, Di Sciuva, and Gherlone. With the approach presented herein, the inplane displacements are represented by a set of continuous piecewise-cubic functions, and the transverse shearing stresses and strains are represented by a set of piecewise-quadratic functions that are discontinuous at the ply interfaces.
NASA Technical Reports Server (NTRS)
Maliassov, Serguei
1996-01-01
In this paper an algebraic substructuring preconditioner is considered for nonconforming finite element approximations of second order elliptic problems in 3D domains with a piecewise constant diffusion coefficient. Using a substructuring idea and a block Gauss elimination, part of the unknowns is eliminated and the Schur complement obtained is preconditioned by a spectrally equivalent very sparse matrix. In the case of quasiuniform tetrahedral mesh an appropriate algebraic multigrid solver can be used to solve the problem with this matrix. Explicit estimates of condition numbers and implementation algorithms are established for the constructed preconditioner. It is shown that the condition number of the preconditioned matrix does not depend on either the mesh step size or the jump of the coefficient. Finally, numerical experiments are presented to illustrate the theory being developed.
Piecewise Geometric Estimation of a Survival Function.
1985-04-01
Langberg (1982). One of the by-products of the estimation process is an estimate of the failure rate function: here, another issue is raised. It is evident...envisaged as the infinite product probability space that may be constructed in the usual way from the sequence of probability spaces corresponding to the...received 6-MP (a mercaptopurine used in the treatment of leukemia). The ordered remission times in weeks are: 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16
Weinmann, Andreas; Storath, Martin
2015-01-01
Signals with discontinuities appear in many problems in the applied sciences ranging from mechanics, electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and discretized versions which are known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
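For direct (not indirect) noisy measurements, the 1D Potts functional referenced above can be minimized exactly by a classical O(n²) dynamic program; the sketch below implements that baseline (the paper's contribution, an iterative strategy for the indirect-measurement case, is not reproduced here).

```python
def potts_segmentation(y, gamma):
    """Exact 1D Potts (piecewise constant Mumford-Shah) estimate by dynamic
    programming: minimize sum of squared errors + gamma * (number of jumps)."""
    n = len(y)
    s1 = [0.0] * (n + 1)   # prefix sums of y
    s2 = [0.0] * (n + 1)   # prefix sums of y^2
    for i, v in enumerate(y):
        s1[i + 1] = s1[i] + v
        s2[i + 1] = s2[i] + v * v

    def dev(l, r):
        """Squared error of the best constant fit on y[l:r]."""
        return s2[r] - s2[l] - (s1[r] - s1[l]) ** 2 / (r - l)

    F = [0.0] * (n + 1)    # F[r]: optimal cost for the prefix y[0:r]
    prev = [0] * (n + 1)
    F[0] = -gamma          # cancels the jump penalty of the first segment
    for r in range(1, n + 1):
        best = None
        for l in range(r):
            c = F[l] + gamma + dev(l, r)
            if best is None or c < best:
                best, prev[r] = c, l
        F[r] = best

    out = [0.0] * n        # backtrack the optimal partition into segment means
    r = n
    while r > 0:
        l = prev[r]
        mean = (s1[r] - s1[l]) / (r - l)
        for i in range(l, r):
            out[i] = mean
        r = l
    return out
```

On a clean two-level signal with a moderate jump penalty, the recursion recovers the piecewise constant signal exactly.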
A partially penalty immersed Crouzeix-Raviart finite element method for interface problems.
An, Na; Yu, Xijun; Chen, Huanzhen; Huang, Chaobao; Liu, Zhongyan
2017-01-01
The elliptic equations with discontinuous coefficients are often used to describe problems of multiple materials or fluids with different densities, conductivities, or diffusivities. In this paper we develop a partially penalty immersed finite element (PIFE) method on triangular grids for anisotropic flow models, in which the diffusion coefficient is a piecewise positive-definite matrix. The standard linear Crouzeix-Raviart type finite element space is used on non-interface elements and the piecewise linear Crouzeix-Raviart type immersed finite element (IFE) space is constructed on interface elements. The piecewise linear functions satisfying the interface jump conditions are uniquely determined by the integral averages on the edges as degrees of freedom. The PIFE scheme is given based on the symmetric, nonsymmetric or incomplete interior penalty discontinuous Galerkin formulation. The solvability of the method is proved and the optimal error estimates in the energy norm are obtained. Numerical experiments are presented to confirm our theoretical analysis and show that the newly developed PIFE method has optimal-order convergence in the [Formula: see text] norm as well. In addition, numerical examples also indicate that this method is valid for both the isotropic and the anisotropic elliptic interface problems.
Hamiltonian flows with random-walk behaviour originating from zero-sum games and fictitious play
NASA Astrophysics Data System (ADS)
van Strien, Sebastian
2011-06-01
In this paper we introduce Hamiltonian dynamics, inspired by zero-sum games (best response and fictitious play dynamics). The Hamiltonian functions we consider are continuous and piecewise affine (and of a very simple form). It follows that the corresponding Hamiltonian vector fields are discontinuous and multi-valued. Differential equations with discontinuities along a hyperplane are often called 'Filippov systems', and there is a large literature on such systems, see for example (di Bernardo et al 2008 Piecewise-Smooth Dynamical Systems: Theory and Applications (Applied Mathematical Sciences vol 163) (London: Springer); Kunze 2000 Non-Smooth Dynamical Systems (Lecture Notes in Mathematics vol 1744) (Berlin: Springer); Leine and Nijmeijer 2004 Dynamics and Bifurcations of Non-smooth Mechanical Systems (Lecture Notes in Applied and Computational Mechanics vol 18) (Berlin: Springer)). The special feature of the systems we consider here is that they have discontinuities along a large number of intersecting hyperplanes. Nevertheless, somewhat surprisingly, the flow corresponding to such a vector field exists, is unique and continuous. We believe that these vector fields deserve attention, because it turns out that the resulting dynamics are rather different from those found in more classically defined Hamiltonian dynamics. The vector field is extremely simple: outside codimension-one hyperplanes it is piecewise constant, and so the flow φ_t is piecewise a translation (without stationary points). Even so, the dynamics can be rather rich and complicated, as a detailed study of specific examples shows (see for example theorems 7.1 and 7.2 and also (Ostrovski and van Strien 2011 Regul. Chaotic Dyn. 16 129-54)). In the last two sections of the paper we give some applications to game theory, and finish with posing a version of the Palis conjecture in the context of the class of non-smooth systems studied in this paper. To Jacob Palis on his 70th birthday.
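The game-theoretic origin of these piecewise constant vector fields can be illustrated with discrete fictitious play in matching pennies, a zero-sum game whose best-response structure is exactly piecewise constant. The sketch below is a generic textbook illustration (the tie-breaking rule is an arbitrary choice); empirical frequencies converge, slowly, toward the mixed equilibrium (1/2, 1/2).

```python
def fictitious_play(rounds):
    """Discrete fictitious play in matching pennies (player 1 matches,
    player 2 mismatches). Each round both players best-respond to the
    opponent's empirical action counts; ties are broken arbitrarily.
    Returns the empirical frequencies of action 0 ('heads') for each player."""
    c1 = [0, 0]   # player 1's action counts (heads, tails)
    c2 = [0, 0]   # player 2's action counts
    for _ in range(rounds):
        a1 = 0 if c2[0] >= c2[1] else 1      # matcher copies the majority
        a2 = 1 if c1[0] >= c1[1] else 0      # mismatcher avoids the majority
        c1[a1] += 1
        c2[a2] += 1
    return c1[0] / rounds, c2[0] / rounds
```

The play itself cycles through ever-longer runs of each action, mirroring the translation-flow picture in the abstract, while the time averages settle near one half.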
Step-by-step integration for fractional operators
NASA Astrophysics Data System (ADS)
Colinas-Armijo, Natalia; Di Paola, Mario
2018-06-01
In this paper, an approach based on the definition of the Riemann-Liouville fractional operators is proposed in order to provide a different discretisation technique as an alternative to the Grünwald-Letnikov operators. The proposed Riemann-Liouville discretisation consists of performing step-by-step integration based upon the discretisation of the function f(t). It has been shown that, as f(t) is discretised as a stepwise or piecewise function, the Riemann-Liouville fractional integral and derivative are governed by operators very similar to the Grünwald-Letnikov operators. In order to show the accuracy and capabilities of the proposed Riemann-Liouville discretisation technique and the Grünwald-Letnikov discrete operators, both techniques have been applied to: unit step functions, exponential functions and sample functions of white noise.
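As a concrete reference point for the comparison above, the classical Grünwald-Letnikov operator (the baseline the paper compares against) can be sketched in a few lines; the binomial weights (-1)^k C(alpha, k) follow the standard recurrence. For f ≡ 1 (a unit step at the origin) the exact half-derivative is t^(-1/2)/Γ(1/2).

```python
import math

def gl_fractional_derivative(f, alpha, t, h):
    """Grünwald-Letnikov fractional derivative of order alpha at time t
    (lower terminal 0, step h)."""
    n = int(round(t / h))
    w = 1.0                      # w_0 = 1
    acc = w * f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k   # recurrence for (-1)^k C(alpha, k)
        acc += w * f(t - k * h)
    return acc / h ** alpha
```

With alpha = 0.5 and a small step, the result for the unit step at t = 1 agrees with 1/Γ(1/2) = 1/√π ≈ 0.564 to roughly first order in h.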
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than direct application of variance-reduction techniques to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, before the stochastic problem, both to provide a starting solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Use of autocorrelation scanning in DNA copy number analysis.
Zhang, Liangcai; Zhang, Li
2013-11-01
Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. Autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets. lzhangli@mdanderson.org. Supplementary data are available at Bioinformatics online.
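The core diagnostic described above reduces to the sample autocorrelation of probe-level residuals at small lags. A minimal generic sketch (not the authors' released tool) follows: independent measurement errors give values near zero, while persistently large values at small lags flag the correlated-error problem the paper describes.

```python
def lag_autocorrelation(x, lag):
    """Sample autocorrelation of a probe-level signal at the given lag,
    normalized by the total variance."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var
```

Scanning this statistic along the genome (a moving window of probes) would yield the "autocorrelation scanning profile" of the abstract; a block-structured signal produces a clearly positive lag-1 value.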
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1994-01-01
A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two-dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
Observational constraint on dynamical evolution of dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Yungui; Cai, Rong-Gen; Chen, Yun
2010-01-01
We use the Constitution supernova, the baryon acoustic oscillation, the cosmic microwave background, and the Hubble parameter data to analyze the evolution property of dark energy. We obtain different results when we fit different baryon acoustic oscillation data combined with the Constitution supernova data to the Chevallier-Polarski-Linder model. We find that the difference stems from the different values of Ω_m0. We also fit the observational data to the model-independent piecewise constant parametrization. Four redshift bins with boundaries at z = 0.22, 0.53, 0.85 and 1.8 were chosen for the piecewise constant parametrization of the equation of state parameter w(z) of dark energy. We find no significant evidence for evolving w(z). With the addition of the Hubble parameter data, the constraint on the equation of state parameter at high redshift is improved by 70%. The marginalization of the nuisance parameter connected to the supernova distance modulus is discussed.
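For a piecewise constant w(z), the dark energy density entering the Friedmann equation has a closed form: each redshift bin contributes a factor ((1 + z_hi)/(1 + z_lo))^(3(1+w)). The sketch below (function name illustrative, using the bin edges quoted in the abstract) shows this standard bookkeeping; the w = -1 case reduces to a constant, as it must.

```python
def de_density_ratio(z, edges, w_bins):
    """rho_DE(z) / rho_DE(0) for a piecewise constant equation of state.
    w_bins[i] applies on the redshift bin [edges[i], edges[i+1])."""
    ratio = 1.0
    for i, w in enumerate(w_bins):
        lo, hi = edges[i], edges[i + 1]
        if z <= lo:
            break
        top = min(z, hi)          # integrate only up to z within this bin
        ratio *= ((1.0 + top) / (1.0 + lo)) ** (3.0 * (1.0 + w))
    return ratio
```

A cosmological-constant binning (w = -1 in every bin) gives a ratio of exactly 1, while w = 0 reproduces matter-like (1+z)^3 scaling.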
NASA Astrophysics Data System (ADS)
Anderson, Daniel M.; McLaughlin, Richard M.; Miller, Cass T.
2018-02-01
We examine a mathematical model of one-dimensional draining of a fluid through a periodically-layered porous medium. A porous medium, initially saturated with a fluid of a high density is assumed to drain out the bottom of the porous medium with a second lighter fluid replacing the draining fluid. We assume that the draining layer is sufficiently dense that the dynamics of the lighter fluid can be neglected with respect to the dynamics of the heavier draining fluid and that the height of the draining fluid, represented as a free boundary in the model, evolves in time. In this context, we neglect interfacial tension effects at the boundary between the two fluids. We show that this problem admits an exact solution. Our primary objective is to develop a homogenization theory in which we find not only leading-order, or effective, trends but also capture higher-order corrections to these effective draining rates. The approximate solution obtained by this homogenization theory is compared to the exact solution for two cases: (1) the permeability of the porous medium varies smoothly but rapidly and (2) the permeability varies as a piecewise constant function representing discrete layers of alternating high/low permeability. In both cases we are able to show that the corrections in the homogenization theory accurately predict the position of the free boundary moving through the porous medium.
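For case (2), the leading-order (effective) behavior of one-dimensional Darcy flow perpendicular to discrete layers is governed by the thickness-weighted harmonic mean of the layer permeabilities, a standard series-resistance result; the higher-order corrections derived in the paper are not reproduced in this sketch.

```python
def effective_vertical_permeability(layers):
    """Leading-order effective permeability for Darcy flow perpendicular to
    the layers: thickness-weighted harmonic mean.
    layers is a list of (thickness, permeability) pairs."""
    total = sum(t for t, _ in layers)
    return total / sum(t / k for t, k in layers)
```

Note how the harmonic mean is dominated by the low-permeability layers: two equal layers with permeabilities 1 and 4 give an effective value of 1.6, well below the arithmetic mean of 2.5.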
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
NASA Astrophysics Data System (ADS)
Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.
2017-11-01
We consider the methodology of numerical schemes development for two-dimensional vortex method. We describe two different approaches to deriving integral equation for unknown vortex sheet intensity. We simulate the velocity of the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume intensity distributions of free and attached vortex sheets and attached source sheet to be approximated with piecewise constant or piecewise linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have a different computational cost. The study shows that a Galerkin-type approach to solving boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to raise significantly the accuracy of vortex sheet intensity computation and improve the quality of velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written down in the invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. 
The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
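The scheduling step described above, interpolating trim vectors and state-space matrices between operating points, can be sketched generically as follows (a hypothetical one-dimensional schedule with flattened tables, not the authors' multidimensional implementation):

```python
def interp_scheduled(x, points, tables):
    """Linearly interpolate a scheduled table (a trim vector or a flattened
    state-space matrix) between operating points. points must be ascending;
    values outside the schedule are clamped to the nearest table."""
    if x <= points[0]:
        return list(tables[0])
    if x >= points[-1]:
        return list(tables[-1])
    for i in range(len(points) - 1):
        if points[i] <= x <= points[i + 1]:
            f = (x - points[i]) / (points[i + 1] - points[i])
            return [(1.0 - f) * a + f * b
                    for a, b in zip(tables[i], tables[i + 1])]
```

In a piecewise linear Kalman filter, the same interpolation weights are shared by all scheduled quantities at a given operating point, which is the source of the computational savings mentioned in the abstract.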
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
NASA Astrophysics Data System (ADS)
Majewski, Kurt
2018-03-01
Exact solutions of the Bloch equations with T1- and T2-relaxation terms for piecewise constant magnetic fields are numerically challenging. We therefore investigate an approximation for the achieved magnetization in which rotations and relaxations are split into separate operations. We develop an estimate for its accuracy and explicit first and second order derivatives with respect to the complex excitation radio frequency voltages. In practice, the deviation between an exact solution of the Bloch equations and this rotation-relaxation splitting approximation seems negligible. Its computation times are similar to those of exact solutions without relaxation terms. We apply the developed theory to numerically optimize radio frequency excitation waveforms with T1- and T2-relaxation in several examples.
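One step of such a rotation-relaxation splitting can be sketched as follows (a simplified single-axis illustration, not the paper's full formulation with RF voltages): rotate the transverse magnetization, then apply T2 decay and T1 recovery as separate exponential factors.

```python
import math

def bloch_step(m, dt, omega, t1, t2):
    """One operator-splitting step for the Bloch equations (normalized
    equilibrium mz = 1): rotate (mx, my) about z by omega*dt, then apply
    T2 decay to the transverse part and T1 recovery to the longitudinal part."""
    mx, my, mz = m
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    rx, ry = c * mx - s * my, s * mx + c * my   # rotation about z
    e2, e1 = math.exp(-dt / t2), math.exp(-dt / t1)
    return (rx * e2, ry * e2, 1.0 + (mz - 1.0) * e1)
```

For a constant field the splitting is exact for the relaxation part; the splitting error appears only when rotation and relaxation act simultaneously within a step.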
Sim, K S; Yeap, Z X; Tso, C P
2016-11-01
An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted noise-free zero-offset point from a corrupted image. Three existing methods, the nearest neighborhood, first order interpolation and original PCHIP, are used to compare with the performance of the proposed ATPCHIP method, with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
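The underlying idea common to all four methods is that white noise contributes to the autocorrelation only at zero offset, so the noise-free R(0) can be recovered by extrapolating from nonzero lags. The sketch below implements the simple "first order interpolation" baseline mentioned in the abstract (PCHIP/ATPCHIP replace the linear extrapolation with a shape-preserving cubic fit, not reproduced here).

```python
def snr_first_order(R):
    """Estimate SNR from an image's autocorrelation sequence R[0], R[1], ...
    The noise-free zero-offset value is extrapolated linearly from lags 1
    and 2; the excess at lag 0 is attributed to white noise."""
    r0_hat = 2.0 * R[1] - R[2]     # linear extrapolation back to lag 0
    noise_power = R[0] - r0_hat
    return r0_hat / noise_power
```

For instance, an autocorrelation sequence that is flat at 1.0 away from a zero-offset spike of 2.0 corresponds to equal signal and noise power, i.e. SNR = 1.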
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta condition is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
Exponentially accurate approximations to piece-wise smooth periodic functions
NASA Technical Reports Server (NTRS)
Greer, James; Banerjee, Saheb
1995-01-01
A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.
Limit Cycle Bifurcations by Perturbing a Piecewise Hamiltonian System with a Double Homoclinic Loop
NASA Astrophysics Data System (ADS)
Xiong, Yanqin
2016-06-01
This paper is concerned with the bifurcation problem of limit cycles by perturbing a piecewise Hamiltonian system with a double homoclinic loop. First, the derivative of the first Melnikov function is provided. Then, we use it, together with the analytic method, to derive the asymptotic expansion of the first Melnikov function near the loop. Meanwhile, we present the first coefficients in the expansion, which can be applied to study the limit cycle bifurcation near the loop. We give sufficient conditions for this system to have 14 limit cycles in the neighborhood of the loop. As an application, a piecewise polynomial Liénard system is investigated, finding six limit cycles with the help of the obtained method.
NASA Astrophysics Data System (ADS)
Gonzales, Matthew Alejandro
The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging, as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process and require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh, in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before run-time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly.
The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, from changes in the probability density functions, and from changes in the density of the materials. The focus of this work is the Doppler temperature feedback that results from Doppler broadening of cross sections and from changes in the probability density function within the scattering kernel. The method is compared against published results for Mosteller's numerical benchmark, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering, and shows accurate evaluations of the Doppler temperature coefficient. An infinite medium benchmark for neutron free gas elastic scattering with large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free gas scattering model in MCNP6. The analytic energy spectrum converges rapidly to the MCNP6 results with increasing target size, showing absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate a piecewise constant in energy absorption cross section to produce temperature feedback. The results reinforce the constraints under which heavy gas theory may be applied, requiring a large target mass to accommodate increasing cross section structure.
The energy dependent piecewise constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient, confirming the accuracy of the adjoint-weighted method. Results show that adjoint weighting with cross section derivatives obtains the correct Doppler temperature coefficient within statistics while reducing computer runtimes by a factor of 50.
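The polynomial-in-temperature storage scheme described above can be sketched in a few lines. This is only an illustration of the idea, not the MCNP6 implementation: the temperature mesh and cross-section values below are invented, and NumPy's `polyfit`/`polyder` stand in for the code's functional expansion.

```python
import numpy as np

# Hypothetical cross-section samples at one energy point on a temperature mesh (K, barns).
temps = np.array([300.0, 600.0, 900.0, 1200.0])
xs = np.array([10.2, 9.1, 8.5, 8.1])

# Fit a cubic in temperature once, before run-time; store only the coefficients.
coeffs = np.polyfit(temps, xs, deg=3)
dcoeffs = np.polyder(coeffs)  # derivative coefficients are also available on-the-fly

def sigma(T):
    """Cross section at temperature T, reconstructed from stored coefficients."""
    return np.polyval(coeffs, T)

def dsigma_dT(T):
    """Temperature derivative of the cross section, no extra data needed."""
    return np.polyval(dcoeffs, T)

print(sigma(600.0))      # reproduces the mesh value
print(dsigma_dT(750.0))  # smooth derivative between mesh points
```

Because the fit is stored as coefficients, evaluating a cross section or its temperature derivative costs only a polynomial evaluation, which is what makes the feedback derivatives cheap at run-time.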
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
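The piecewise constant scheme mentioned above can be illustrated generically: replace a function by a step function on N equal subintervals and observe the sup-norm error shrink as N grows. This is a sketch of the approximation idea only, not the authors' state-equation discretization; the function names and test function are made up for illustration.

```python
import numpy as np

def piecewise_constant_approx(f, a, b, n):
    """Approximate f on [a, b] by its midpoint value on each of n equal subintervals."""
    edges = np.linspace(a, b, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    values = f(mids)
    def fn(x):
        # Locate the subinterval containing each x and return its constant value.
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n - 1)
        return values[idx]
    return fn

f = np.sin
approx = piecewise_constant_approx(f, 0.0, np.pi, 64)
x = np.linspace(0.0, np.pi, 1000)
err = np.max(np.abs(f(x) - approx(x)))
print(err)  # decreases like O(1/n) for smooth f
```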
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of the bias field and segmentation of images with intensity inhomogeneity. We model the intensities of inhomogeneous objects as Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, combining the bias field, the membership function of the object region, and the constant approximating the true signal of its corresponding object. The energy functional is then extended to the whole image domain by a Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which image segmentation and bias field correction are achieved simultaneously. Furthermore, the smoothness of the obtained optimal bias field is ensured by normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm over other state-of-the-art methods.
ERIC Educational Resources Information Center
Sinclair, Nathalie; Armstrong, Alayne
2011-01-01
Piecewise linear functions and story graphs are concepts usually associated with algebra, but in the authors' classroom, they found success teaching this topic in a distinctly geometrical manner. The focus of the approach was less on learning geometric concepts and more on using spatial and kinetic reasoning. It not only supports the learning of…
Three-Dimensional Piecewise-Continuous Class-Shape Transformation of Wings
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
Class-Shape Transformation (CST) is a popular method for creating analytical representations of the surface coordinates of various components of aerospace vehicles. A wide variety of two- and three-dimensional shapes can be represented analytically using only a modest number of parameters, and the surface representation is smooth and continuous to as fine a degree as desired. This paper expands upon the original two-dimensional representation of airfoils to develop a generalized three-dimensional CST parametrization scheme that is suitable for a wider range of aircraft wings than previous formulations, including wings with significant non-planar shapes such as blended winglets and box wings. The method uses individual functions for the spanwise variation of airfoil shape, chord, thickness, twist, and reference axis coordinates to build up the complete wing shape. An alternative formulation parameterizes the slopes of the reference axis coordinates in order to relate the spanwise variation to the tangents of the sweep and dihedral angles. Also discussed are methods for fitting existing wing surface coordinates, including the use of piecewise equations to handle discontinuities, and mathematical formulations of geometric continuity constraints. A subsonic transport wing model is used as an example problem to illustrate the application of the methodology and to quantify the effects of piecewise representation and curvature constraints.
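For reference, the original two-dimensional CST representation that the paper generalizes combines a class function psi**N1 * (1 - psi)**N2 with a Bernstein-polynomial shape function. A minimal sketch follows; the coefficient values are arbitrary and the function name `cst_airfoil` is hypothetical, not from the paper's code.

```python
import numpy as np
from math import comb

def cst_airfoil(psi, A, n1=0.5, n2=1.0, dz_te=0.0):
    """2D CST: class function psi^n1 * (1-psi)^n2 times a Bernstein-basis
    shape function with coefficients A, plus a linear trailing-edge term."""
    psi = np.asarray(psi, float)
    n = len(A) - 1
    shape = sum(A[i] * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
                for i in range(n + 1))
    return psi**n1 * (1.0 - psi)**n2 * shape + psi * dz_te

# Uniform coefficients reduce the shape function to a constant (Bernstein
# partition of unity), leaving a round-nosed, sharp-tailed class shape.
psi = np.linspace(0.0, 1.0, 5)
print(cst_airfoil(psi, A=[0.2, 0.2, 0.2]))
```

With n1 = 0.5 and n2 = 1.0 the class function enforces a round leading edge and sharp trailing edge; the Bernstein coefficients then control the smooth shape in between, which is the "modest number of parameters" the abstract refers to.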
Application of Markov Models for Analysis of Development of Psychological Characteristics
ERIC Educational Resources Information Center
Kuravsky, Lev S.; Malykh, Sergey B.
2004-01-01
A technique to study combined influence of environmental and genetic factors on the base of changes in phenotype distributions is presented. Histograms are exploited as base analyzed characteristics. A continuous time, discrete state Markov process with piece-wise constant interstate transition rates is associated with evolution of each histogram.…
Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules
He, Peng; Wei, Biao; Wang, Steve; ...
2013-01-01
Dentin is a hierarchically structured biomineralized composite material, and dentin’s tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object from truncated projections alone, let alone from a limited angular range. Recently, interior tomography was developed to reconstruct a region-of-interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization, under the condition that the ROI is piecewise constant. In this paper, we employ a TV minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to the filtered backprojection (FBP) reconstruction, our method reduces noise and suppresses artifacts. Volume rendering confirms the merits of the method in terms of preserving the interior microstructure of the dentin specimen.
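The piecewise constant assumption behind TV-based interior tomography can be made concrete: a piecewise constant image has a small total variation, while noise or artifacts raise it, which is why minimizing TV favors the former. A minimal sketch of the isotropic TV seminorm on a toy image (not the authors' reconstruction code):

```python
import numpy as np

def total_variation(img, eps=1e-8):
    """Isotropic TV of a 2D array using forward differences
    (the last row/column is padded by replication)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sum(np.sqrt(dx**2 + dy**2 + eps))

# A piecewise constant image has low TV; adding noise raises it.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
noisy = img + 0.05 * np.random.default_rng(0).standard_normal(img.shape)
print(total_variation(img), total_variation(noisy))
```

In a reconstruction algorithm this seminorm (or a smoothed variant) is the penalty term whose minimization, subject to data consistency, selects the piecewise constant solution.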
Puso, M. A.; Kokko, E.; Settgast, R.; ...
2014-10-22
An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine the effects of the pressure stabilization term and of small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping, such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning, and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.
Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J
2018-01-30
Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error, especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere with spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.
Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh
2011-03-15
Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP)-projection onto convex sets (POCS) [H. Kudo et al., ''Tiny a priori knowledge solves the interior problem in computed tomography'', Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in the pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, ''Compressed sensing based interior tomography'', Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., ''A general total variation minimization theorem for compressed sensing based interior tomography'', Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of approximately 500 mm. The projection data were truncated either moderately, limiting the detector coverage to a 350 mm diameter of the object, or severely, covering a 199 mm diameter. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel values, was less than 2.0% or 4.5% for the moderate and severe truncation cases, respectively, except near the boundary of the ROI.
Conclusions: The proposed method allows interior ROI images to be reconstructed with sufficient accuracy using only the tiny knowledge that there exists a nearly piecewise constant subregion.
Staley, James R; Burgess, Stephen
2017-05-01
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
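The piecewise linear method above stitches per-stratum gradients into a continuous curve. The following is only a schematic sketch of that stitching step with invented stratum boundaries and LACE values, not the authors' estimator:

```python
import numpy as np

def piecewise_linear_curve(quantiles, slopes, x):
    """Continuous piecewise linear function whose gradient on
    [quantiles[k], quantiles[k+1]] is slopes[k], anchored at 0 at the
    first quantile. Evaluated at x (scalar or array)."""
    q = np.asarray(quantiles, float)
    s = np.asarray(slopes, float)
    # Cumulative value of the curve at each knot (continuity by construction).
    knots = np.concatenate([[0.0], np.cumsum(s * np.diff(q))])
    k = np.clip(np.searchsorted(q, x, side="right") - 1, 0, len(s) - 1)
    return knots[k] + s[k] * (np.asarray(x, float) - q[k])

# Hypothetical LACE estimates in four exposure strata (e.g. BMI quartiles).
q = [18.0, 22.0, 26.0, 30.0, 40.0]
lace = [0.1, 0.4, 0.8, 1.2]  # gradient of the curve in each stratum
xs = np.linspace(18.0, 40.0, 5)
print(piecewise_linear_curve(q, lace, xs))
```

Anchoring at the first quantile fixes only the intercept; the shape of the exposure-outcome relationship, which is the quantity of interest, is carried entirely by the stratum gradients.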
Hybrid LES/RANS Simulation of Transverse Sonic Injection into a Mach 2 Flow
NASA Technical Reports Server (NTRS)
Boles, John A.; Edwards, Jack R.; Baurle, Robert A.
2008-01-01
A computational study of transverse sonic injection of air and helium into a Mach 1.98 cross-flow is presented. A hybrid large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) turbulence model is used, with the two-equation Menter baseline (Menter-BSL) closure for the RANS part of the flow and a Smagorinsky-type model for the LES part of the flow. A time-dependent blending function, dependent on modeled turbulence variables, is used to shift the closure from RANS to LES. Turbulent structures are initiated and sustained through the use of a recycling / rescaling technique. Two higher-order discretizations, the Piecewise Parabolic Method (PPM) of Colella and Woodward and the SONIC-A ENO scheme of Suresh and Huynh, are used in the study. The results using the hybrid model show reasonably good agreement with time-averaged Mie scattering data and with experimental surface pressure distributions, even though the penetration of the jet into the cross-flow is slightly over-predicted. The LES/RANS results are used to examine the validity of commonly-used assumptions of constant Schmidt and Prandtl numbers in the intense mixing zone downstream of the injection location.
A high-order gas-kinetic Navier-Stokes flow solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qibing, E-mail: lqb@tsinghua.edu.c; Xu Kun, E-mail: makxu@ust.h; Fu Song, E-mail: fs-dem@tsinghua.edu.c
2010-09-20
The foundation for the development of modern compressible flow solvers is the Riemann solution of the inviscid Euler equations. High-order schemes are basically related to high-order spatial interpolation or reconstruction. In order to overcome the low-order wave interaction mechanism of the Riemann solution, the temporal accuracy of the scheme can be improved through the Runge-Kutta method, where the dynamic deficiencies of the first-order Riemann solution are alleviated through the sub-step spatial reconstruction in the Runge-Kutta process. The close coupling between the spatial and temporal evolution in the original nonlinear governing equations is thus weakened by this spatial and temporal decoupling. Many recently developed high-order methods require a Navier-Stokes flux function under a piecewise discontinuous high-order initial reconstruction. However, piecewise discontinuous initial data and the hyperbolic-parabolic nature of the Navier-Stokes equations seem mathematically inconsistent, for example through the divergence of the viscous and heat conducting terms at an initial discontinuity. In this paper, based on the Boltzmann equation, we present a time-dependent flux function from a high-order discontinuous reconstruction. The theoretical basis for such an approach is the fact that the Boltzmann equation has no specific requirement on the smoothness of the initial data, and that the kinetic equation has the mechanism to construct a dissipative wave structure, starting from an initially discontinuous flow condition, on a time scale larger than the particle collision time. The current high-order flux evaluation method is an extension of the second-order gas-kinetic BGK scheme for the Navier-Stokes equations (BGK-NS). The novelty that allows the easy extension from second order to higher order is the simple particle transport and collision mechanism on the microscopic level.
This paper will present a hierarchy to construct such a high-order method. The necessity to couple spatial and temporal evolution nonlinearly in the flux evaluation can be clearly observed through the numerical performance of the scheme for the viscous flow computations.
Wang, Chunhua; Liu, Xiaoming; Xia, Hu
2017-03-01
In this paper, two kinds of novel ideal active flux-controlled smooth multi-piecewise quadratic nonlinearity memristors with multi-piecewise continuous memductance functions are presented. The pinched hysteresis loop characteristics of the two memristor models are verified by building a memristor emulator circuit. Using the two memristor models, we establish a new memristive multi-scroll Chua's circuit, which can generate 2N-scroll and 2N+1-scroll chaotic attractors without any other ordinary nonlinear function. Furthermore, coexisting multi-scroll chaotic attractors are found in the proposed memristive multi-scroll Chua's circuit. Phase portraits, Lyapunov exponents, bifurcation diagrams, and equilibrium point analysis have been used to investigate the basic dynamics of the memristive multi-scroll Chua's circuit. The consistency of circuit implementation and numerical simulation verifies the effectiveness of the system design.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Quadratic spline subroutine package
Rasmussen, Lowell A.
1982-01-01
A continuous piecewise quadratic function with continuous first derivative is devised for approximating a single-valued, but unknown, function represented by a set of discrete points. The quadratic is proposed as a treatment intermediate between using the angular (but reliable, easily constructed and manipulated) piecewise linear function and using the smoother (but occasionally erratic) cubic spline. Neither iteration nor the solution of a system of simultaneous equations is necessary for determining the coefficients. Several properties of the quadratic function are given. A set of five short FORTRAN subroutines is provided for generating the coefficients (QSC), finding function values and derivatives (QSY), integrating (QSI), finding extrema (QSE), and computing arc length and the curvature-squared integral (QSK). (USGS)
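The no-linear-system claim above follows from the fact that a C1 piecewise quadratic can be built sequentially: once the slope at the left end of an interval is known, matching the two endpoint values fixes the slope at the right end. A sketch in Python (not the USGS Fortran package itself; the starting slope `m0` is a free choice here):

```python
import numpy as np

def quadratic_spline(x, y, m0=0.0):
    """Piecewise quadratic through (x[i], y[i]) with a continuous first
    derivative; m0 is the prescribed slope at x[0]. Returns an evaluator."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = [m0]
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        # Matching both endpoint values forces the slope at the right knot.
        m.append(2.0 * (y[i + 1] - y[i]) / h - m[-1])
    m = np.array(m)
    def s(t):
        i = int(np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(x) - 2))
        h = t - x[i]
        c = (m[i + 1] - m[i]) / (2.0 * (x[i + 1] - x[i]))  # quadratic coefficient
        return y[i] + m[i] * h + c * h * h
    return s

s = quadratic_spline([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], m0=0.0)
print(s(0.5), s(1.0), s(2.0))
```

Each interval needs only the previous slope, so the coefficients come from a single forward sweep; the price, as the abstract hints, is that a poor choice of starting slope can propagate oscillations down the sweep.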
Variational models for discontinuity detection
NASA Astrophysics Data System (ADS)
Vitti, Alfonso; Battista Benciolini, G.
2010-05-01
The Mumford-Shah variational model produces a smooth approximation of the data and detects data discontinuities by solving a minimum problem involving an energy functional. The Blake-Zisserman model also permits the detection of discontinuities in the first derivative of the approximation. This model can result in a quasi piecewise linear approximation, whereas the Mumford-Shah model can result in a quasi piecewise constant approximation. The two models are well known in the mathematical literature and are widely adopted in computer vision for image segmentation. In geodesy the Blake-Zisserman model has been applied successfully to the detection of cycle-slips in linear combinations of GPS measurements. Few attempts to apply the model to time series of coordinates have been made so far. The problem of detecting discontinuities in time series of GNSS coordinates is well known, and its relevance increases as the quality of geodetic measurements, analysis techniques, models and products improves. The application of the Blake-Zisserman model appears reasonable and promising owing to its ability to detect both position and velocity discontinuities in the same time series. The detection of position and velocity changes is of great interest in geophysics, where the discontinuity itself can be the very relevant object. In work on the realization of reference frames, detecting position and velocity discontinuities may help to define models that can handle non-linear motions. In this work the Mumford-Shah and Blake-Zisserman models are briefly presented; the treatment is carried out from a practical viewpoint rather than a theoretical one. A set of time series of GNSS coordinates has been processed and the results are presented in order to highlight the capabilities and the weaknesses of the variational approach. A first attempt has been made to derive some indications for the automatic setup of the model parameters.
The underlying relation that could link the parameter values to the statistical properties of the data has been investigated.
An updated Lagrangian discontinuous Galerkin hydrodynamic method for gas dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Tong; Shashkov, Mikhail Jurievich; Morgan, Nathaniel Ray
Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for gas dynamics. The new method evolves conserved unknowns in the current configuration, which obviates the Jacobi matrix that maps the element in a reference coordinate system or the initial coordinate system to the current configuration. The density, momentum, and total energy (ρ, ρu, E) are approximated with conservative higher-order Taylor expansions over the element and are limited toward a piecewise constant field near discontinuities using a limiter. Two new limiting methods are presented for enforcing the bounds on the primitive variables of density, velocity, and specific internal energy (ρ, u, e). The nodal velocity, and the corresponding forces, are calculated by solving an approximate Riemann problem at the element nodes. An explicit second-order method is used to temporally advance the solution. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. 1D Cartesian coordinates test problem results are presented to demonstrate the accuracy and convergence order of the new DG method with the new limiters.
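Limiting toward a piecewise constant field near discontinuities is a standard ingredient of such schemes; the classic one-dimensional version is a minmod slope limiter acting on a linear reconstruction. The sketch below illustrates that generic mechanism, not the paper's Taylor-basis limiters:

```python
import numpy as np

def minmod(a, b):
    """Zero when the one-sided slopes disagree in sign (or one vanishes),
    otherwise the slope of smaller magnitude."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def limited_slopes(cell_avg, dx):
    """Minmod-limited slopes for a piecewise linear reconstruction from
    cell averages (one-sided differences replicated at the boundaries)."""
    left = np.diff(cell_avg, prepend=cell_avg[:1]) / dx
    right = np.diff(cell_avg, append=cell_avg[-1:]) / dx
    return minmod(left, right)

u_jump = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
u_ramp = np.array([0.0, 1.0, 2.0, 3.0])
print(limited_slopes(u_jump, 1.0))  # all zero: falls back to piecewise constant
print(limited_slopes(u_ramp, 1.0))  # interior cells keep the smooth gradient
```

At the flat-sided jump one of the one-sided slopes vanishes in every cell, so all limited slopes are zero and the reconstruction collapses to piecewise constant there, which is exactly the non-oscillatory behavior the limiter is for.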
Two-body loss rates for reactive collisions of cold atoms
NASA Astrophysics Data System (ADS)
Cop, C.; Walser, R.
2018-01-01
We present an effective two-channel model for reactive collisions of cold atoms. It augments elastic molecular channels with an irreversible, inelastic loss channel. Scattering is studied with the distorted-wave Born approximation and yields general expressions for angular momentum resolved cross sections as well as two-body loss rates. Explicit expressions are obtained for piecewise constant potentials. A pole expansion reveals simple universal shape functions for cross sections and two-body loss rates in agreement with the Wigner threshold laws. This is applied to collisions of metastable 20Ne and 21Ne atoms, which decay primarily through exothermic Penning or associative ionization processes. From a numerical solution of the multichannel Schrödinger equation using the best currently available molecular potentials, we have obtained synthetic scattering data. Using the two-body loss shape functions derived in this paper, we can match these scattering data very well.
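Piecewise constant potentials are attractive precisely because scattering quantities come out in closed form. As a generic illustration (in reduced units with hbar = 2*mu = 1, and not the Ne* two-channel model above), the s-wave scattering length of an attractive spherical square well is a = R * (1 - tan(kR)/(kR)) with k = sqrt(V0):

```python
import numpy as np

def scattering_length_square_well(V0, R):
    """s-wave scattering length for V(r) = -V0 inside r < R, zero outside
    (units with hbar = 2*mu = 1, so the interior wavenumber is sqrt(V0))."""
    kappa = np.sqrt(V0)
    return R * (1.0 - np.tan(kappa * R) / (kappa * R))

# A shallow well gives a small negative scattering length; as kappa*R
# approaches pi/2 a bound state enters and |a| diverges.
print(scattering_length_square_well(0.1, 1.0))
print(scattering_length_square_well(2.46, 1.0))
```

The divergence near kappa*R = pi/2 is the square-well analogue of the shape resonances that make loss rates so sensitive to the potential in the cold-collision problem.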
Optimal Portfolio Selection Under Concave Price Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity of the cost leads to some fundamental differences from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and, more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.
Existence of almost periodic solutions for forced perturbed systems with piecewise constant argument
NASA Astrophysics Data System (ADS)
Xia, Yonghui; Huang, Zhenkun; Han, Maoan
2007-09-01
Certain almost periodic forced perturbed systems with piecewise constant argument are considered in this paper. By using the contraction mapping principle and some new analysis techniques, sufficient conditions are obtained for the existence and uniqueness of an almost periodic solution of these systems. Furthermore, we study the harmonic and subharmonic solutions of these systems. The obtained results generalize previously known results such as [A.M. Fink, Almost Periodic Differential Equations, Lecture Notes in Math., vol. 377, Springer-Verlag, Berlin, 1974; C.Y. He, Almost Periodic Differential Equations, Higher Education Press, Beijing, 1992 (in Chinese); Z.S. Lin, The existence of almost periodic solution of linear system, Acta Math. Sinica 22 (5) (1979) 515-528 (in Chinese); C.Y. He, Existence of almost periodic solutions of perturbation systems, Ann. Differential Equations 9 (2) (1992) 173-181; Y.H. Xia, M. Lin, J. Cao, The existence of almost periodic solutions of certain perturbation system, J. Math. Anal. Appl. 310 (1) (2005) 81-96]. Finally, a concrete example and its numerical simulations illustrate the feasibility of our results, the comparison between the non-perturbed and perturbed systems, and the relation between systems with and without piecewise constant argument.
A piecewise mass-spring-damper model of the human breast.
Cai, Yiqing; Chen, Lihua; Yu, Winnie; Zhou, Jie; Wan, Frances; Suh, Minyoung; Chow, Daniel Hung-Kay
2018-01-23
Previous models to predict breast movement whilst performing physical activities have, erroneously, assumed uniform elasticity within the breast. Consequently, the predicted displacements have not yet been satisfactorily validated. In this study, real-time motion capture of the natural vibrations of a breast, after it was raised and allowed to fall freely, revealed an obvious difference in the vibration characteristics above and below the static equilibrium position. This implied that the elastic and viscous damping properties of a breast could vary under extension or compression. Therefore, a new piecewise mass-spring-damper model of a breast was developed with theoretical equations to derive values for its spring constants and damping coefficients from free-falling breast experiments. The effective breast mass was estimated from the breast volume extracted from a 3D body-scanned image. The derived spring constant (k_a = 73.5 N m⁻¹) above the static equilibrium position was significantly smaller than that below it (k_b = 658 N m⁻¹), whereas the respective damping coefficients were similar (c_a = 1.83 N s m⁻¹, c_b = 2.07 N s m⁻¹). These values were used to predict the nipple displacement during bare-breasted running for validation. The root-mean-square error between the theoretical and experimental amplitudes was 2.6% or less, so the piecewise mass-spring-damper model and equations were considered to have been successfully validated. This provides a theoretical basis for further research into the dynamic, nonlinear viscoelastic properties of different breasts and the prediction of external forces for the necessary breast support during different sports activities. Copyright © 2017 Elsevier Ltd. All rights reserved.
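The piecewise restoring force described above can be illustrated with a minimal simulation using the quoted spring and damping constants. The effective mass value and the RK4 integrator below are our assumptions for illustration, not taken from the paper:

```python
# Piecewise constants reported in the abstract
K_ABOVE, K_BELOW = 73.5, 658.0   # N/m, above/below static equilibrium
C_ABOVE, C_BELOW = 1.83, 2.07    # N s/m
MASS = 0.4                        # kg, hypothetical effective breast mass

def accel(x, v):
    """Acceleration of the piecewise mass-spring-damper; x is measured
    from static equilibrium, positive x = above equilibrium."""
    k, c = (K_ABOVE, C_ABOVE) if x > 0 else (K_BELOW, C_BELOW)
    return (-k * x - c * v) / MASS

def simulate(x0, v0=0.0, dt=1e-4, t_end=2.0):
    """Explicit RK4 time stepping; returns the displacement trace."""
    x, v, xs = x0, v0, []
    for _ in range(int(t_end / dt)):
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        xs.append(x)
    return xs

trace = simulate(x0=0.02)  # released 2 cm above equilibrium
```

Because the stiffness below equilibrium is much larger, the downward excursions are visibly smaller than the upward ones, reproducing the asymmetry the motion-capture experiments revealed.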
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
An Ensemble of Neural Networks for Stock Trading Decision Making
NASA Astrophysics Data System (ADS)
Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming
Detection of stock turning signals is an interesting subject arising in numerous financial and economic planning problems. In this paper, an Ensemble Neural Network system with Intelligent Piecewise Linear Representation for stock turning point detection is presented. The Intelligent Piecewise Linear Representation method generates numerous stock turning signals from the historical database; the Ensemble Neural Network system is then applied to train on these patterns and retrieve similar stock price patterns from historical data for training. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and are applied to forecast the future turning points from the set of test data. Experimental results demonstrate that the hybrid system can make a significant and consistent profit when compared with other approaches using stock data available in the market.
NASA Astrophysics Data System (ADS)
Bremer, James
2018-05-01
We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^{-μ} and Q_ν^{-μ} of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming the Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points, including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L₂ function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
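The phenomenon being overcome can be demonstrated directly: the partial Fourier sum of a square wave overshoots near the jump by an amount that does not vanish as more terms are added. This is the standard textbook demonstration, independent of the paper's reconstruction method; the function name is ours:

```python
import math

def fourier_square_wave(x, n_terms):
    """Partial Fourier sum of the square wave sign(sin x), using the first
    n_terms odd harmonics. Near the jump at x = 0 the sum overshoots the
    level 1; the peak approaches (2/pi)*Si(pi) ~ 1.179 as n_terms grows,
    rather than converging to 1 (the Gibbs phenomenon)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# scan just to the right of the jump to capture the first overshoot peak
peak = max(fourier_square_wave(i * 1e-4, 64) for i in range(1, 2000))
```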
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2013-01-01
Large deformation displacement transfer functions were formulated for deformed shape predictions of highly flexible slender structures like aircraft wings. In the formulation, the embedded beam (depth-wise cross section of the structure along the surface strain-sensing line) was first evenly discretized into multiple small domains, with surface strain-sensing stations located at the domain junctures. Thus, the surface strain (bending strain) variation within each domain could be expressed with a linear or nonlinear function. Such a piecewise approach enabled piecewise integrations of the embedded beam curvature equations [classical (Eulerian), physical (Lagrangian), and shifted curvature equations] to yield closed-form slope and deflection equations in recursive forms.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran; Lung, Shun-Fat
2017-01-01
For shape predictions of structures under large geometrically nonlinear deformations, Curved Displacement Transfer Functions were formulated based on a curved displacement, traced by a material point from the undeformed position to the deformed position. The embedded beam (depth-wise cross section of a structure along a surface strain-sensing line) was discretized into multiple small domains, with domain junctures matching the strain-sensing stations. Thus, the surface strain distribution could be described with a piecewise linear or a piecewise nonlinear function. The discretization approach enabled piecewise integrations of the embedded-beam curvature equations to yield the Curved Displacement Transfer Functions, expressed in terms of embedded-beam geometrical parameters and surface strains. By entering the surface strain data into the Displacement Transfer Functions, deflections along each embedded beam can be calculated at multiple points for mapping the overall structural deformed shapes. Finite-element linear and nonlinear analyses of a tapered cantilever tubular beam were performed to generate linear and nonlinear surface strains and the associated deflections to be used for validation. The shape prediction accuracies were then determined by comparing the theoretical deflections with the finite-element-generated deflections. The results show that the newly developed Curved Displacement Transfer Functions are very accurate for shape predictions of structures under large geometrically nonlinear deformations.
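The piecewise-integration idea can be sketched in its simplest small-deflection form: curvatures obtained from surface strains are assumed to vary linearly within each domain and are integrated twice, accumulating slope and deflection recursively from the fixed end. This is an illustrative sketch of the classical (small-deformation) case only, not the paper's Curved Displacement Transfer Functions; the function name is ours:

```python
def deflections(strains, half_depths, dl):
    """Recursive slope/deflection from surface strains at evenly spaced
    sensing stations on a cantilever (small-deflection sketch).
    strains[i]     : surface bending strain at station i
    half_depths[i] : distance from the neutral axis to the sensing surface
    dl             : domain length between adjacent stations
    The curvature kappa_i = strains[i] / half_depths[i] is taken to vary
    linearly within each domain; one integration gives the slope increment
    0.5*dl*(kappa_{i-1} + kappa_i), a second gives the deflection
    increment dl*slope_{i-1} + dl^2*(2*kappa_{i-1} + kappa_i)/6."""
    kappa = [e / c for e, c in zip(strains, half_depths)]
    slope, defl = [0.0], [0.0]   # clamped root: zero slope and deflection
    for i in range(1, len(kappa)):
        slope.append(slope[-1] + 0.5 * dl * (kappa[i - 1] + kappa[i]))
        defl.append(defl[-1] + dl * slope[i - 1]
                    + dl * dl * (2 * kappa[i - 1] + kappa[i]) / 6.0)
    return slope, defl

# uniform strain -> constant curvature 0.1 over a 1 m beam (10 domains):
# analytic tip slope = kappa*L = 0.1, tip deflection = kappa*L^2/2 = 0.05
slope, defl = deflections([0.001] * 11, [0.01] * 11, 0.1)
```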
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
NASA Astrophysics Data System (ADS)
Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel
2015-06-01
We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. This is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.
NASA Astrophysics Data System (ADS)
Li, G.; Gordon, I. E.; Rothman, L. S.; Tan, Y.; Hu, S.-M.; Kassi, S.; Campargue, A.
2014-06-01
In order to improve and extend the existing HITRAN database [1] and HITEMP [2] data for carbon monoxide, ro-vibrational line lists were computed for all transitions of nine isotopologues of the CO molecule, namely 12C16O, 12C17O, 12C18O, 13C16O, 13C17O, 13C18O, 14C16O, 14C17O, and 14C18O, in the electronic ground state up to v = 41 and J = 150. Line position and intensity calculations were carried out using a newly determined piecewise dipole moment function (DMF) in conjunction with wavefunctions calculated from a previous experimentally determined potential energy function of Coxon and Hajigeorgiou [3]. Ab initio calculations and a direct-fit method that simultaneously fits all the reliable experimental ro-vibrational matrix elements were used to construct the piecewise dipole moment function. To provide additional input parameters for the fit, new Cavity Ring Down Spectroscopy experiments were carried out to enable measurements of the lines in the 4-0 band with low uncertainty (Grenoble) as well as the first measurements of lines in the 6-0 band (Hefei). Accurate partition sums have been derived through direct summation for a temperature range from 1 to 9000 K. A complete set of broadening and shift parameters is also provided, now including parameters induced by CO2 and H2 in order to aid planetary applications.
Perturbations of Jacobi polynomials and piecewise hypergeometric orthogonal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neretin, Yu A
2006-12-31
A family of non-complete orthogonal systems of functions on the ray [0, ∞) depending on three real parameters α, β, θ is constructed. The elements of this system are piecewise hypergeometric functions with a singularity at x = 1. For θ = 0 these functions vanish on [1, ∞) and the system reduces to the Jacobi polynomials P_n^{α,β} on the interval [0, 1]. In the general case the functions constructed can be regarded as an interpretation of the expressions P_{n+θ}^{α,β}. They are eigenfunctions of an exotic Sturm-Liouville boundary-value problem for the hypergeometric differential operator. The spectral measure for this problem is found.
NASA Astrophysics Data System (ADS)
Ren, Xiaodong; Xu, Kun; Shyy, Wei
2016-07-01
This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving mesh gas-kinetic DG method is proposed for both inviscid and viscous flow computations. A flux integration method across a translating and deforming cell interface has been constructed. Unlike the previous ALE-type gas-kinetic method with piecewise constant mesh velocity at each cell interface within each time step, the mesh velocity variation inside a cell and the mesh moving and rotating at a cell interface have been accounted for in the finite element framework. As a result, the current scheme is applicable to any kind of mesh movement, such as translation, rotation, and deformation. The accuracy and robustness of the scheme have been improved significantly in the oscillating airfoil calculations. All computations are conducted in a physical domain rather than in a reference domain, and the basis functions move with the grid movement. Therefore, the numerical scheme can preserve the uniform flow automatically and satisfy the geometric conservation law (GCL). The numerical accuracy can be maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.
MAP Estimators for Piecewise Continuous Inversion
2016-08-08
M. M. Dunlop and A. M. Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua ... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP ...
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., via k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
NASA Astrophysics Data System (ADS)
Marchenko, I. G.; Marchenko, I. I.; Zhiglo, A. V.
2018-01-01
We present a study of the diffusion enhancement of underdamped Brownian particles in a one-dimensional symmetric space-periodic potential due to external symmetric time-periodic driving with zero mean. We show that the diffusivity can be enhanced by many orders of magnitude with an appropriate choice of the driving amplitude and frequency. The diffusivity demonstrates abnormal (decreasing) temperature dependence at driving amplitudes exceeding a certain value. At any fixed driving frequency Ω, normal temperature dependence of the diffusivity is restored at low enough temperatures, T
Sorption kinetics and isotherm studies of a cationic dye using agricultural waste: broad bean peels.
Hameed, B H; El-Khaiary, M I
2008-06-15
In this paper, broad bean peels (BBP), an agricultural waste, were evaluated for their ability to remove a cationic dye (methylene blue) from aqueous solutions. Batch mode experiments were conducted at 30 degrees C. Equilibrium sorption isotherms and kinetics were investigated. The kinetic data obtained at different concentrations have been analyzed using pseudo-first-order, pseudo-second-order and intraparticle diffusion equations. The experimental data fitted the pseudo-first-order kinetic model very well. Analysis of the temporal change of q indicates that at the beginning of the process the overall rate of adsorption is controlled by film diffusion, and at a later stage intraparticle diffusion controls the rate. Diffusion coefficients and times of transition from film- to pore-diffusion control were estimated by piecewise linear regression. The experimental data were analyzed by the Langmuir and Freundlich models. The sorption isotherm data fitted the Langmuir isotherm well; the monolayer adsorption capacity was found to be 192.7 mg/g and the equilibrium adsorption constant Ka is 0.07145 l/mg at 30 degrees C. The results revealed that BBP was a promising sorbent for the removal of methylene blue from aqueous solutions.
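Piecewise linear regression with an unknown breakpoint, as used here to locate the film-to-pore-diffusion transition, can be sketched by exhaustively searching for the breakpoint that minimizes the total squared error of two least-squares lines. This is a generic sketch of the technique, not the authors' procedure; all names are illustrative:

```python
def two_segment_fit(x, y):
    """Return the index k that best splits (x, y) into two independently
    fitted least-squares lines, by exhaustive search over breakpoints."""
    def line_sse(xs, ys):
        # sum of squared residuals of the ordinary least-squares line
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((u - mx) ** 2 for u in xs)
        sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
        b = sxy / sxx if sxx else 0.0
        a = my - b * mx
        return sum((v - (a + b * u)) ** 2 for u, v in zip(xs, ys))

    best = None
    for k in range(2, len(x) - 2):  # keep at least 2 points per segment
        sse = line_sse(x[:k], y[:k]) + line_sse(x[k:], y[k:])
        if best is None or sse < best[0]:
            best = (sse, k)
    return best[1]

# synthetic kinetic trace whose slope changes at index 10
xs = list(range(20))
ys = [2 * i if i < 10 else 19 + 0.5 * (i - 10) for i in range(20)]
bp = two_segment_fit(xs, ys)
```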
Modeling of electrical capacitance tomography with the use of complete electrode model
NASA Astrophysics Data System (ADS)
Fang, Weifu
2016-10-01
We introduce the complete electrode model in the modeling of electrical capacitance tomography (ECT), which extends the commonly used electrode model. We show that the solution of the complete electrode model approaches the solution of the corresponding common electrode model as the impedance effect on the electrodes vanishes. We also derive the nonlinear relation between capacitance and permittivity, and the sensitivity maps with respect to both the permittivity and the impedance constants, and present a finite difference scheme in polar coordinates for the case of circular ECT sensors that retains the continuity of displacement current with piecewise-constant permittivities.
Separated Component-Based Restoration of Speckled SAR Images
2014-01-01
One of the simplest approaches for speckle noise reduction is known as multi-look processing. It involves non-coherently summing the independent ... image is assumed to be piecewise smooth [21], [22], [23]. It has been shown that TV regularization often yields images with the stair-casing effect ... as a function f, is to be decomposed into a sum of two components f = u + v, where u represents the cartoon or geometric (i.e. piecewise smooth
Theodorakis, Stavros
2003-06-01
We emulate the cubic term Ψ³ in the nonlinear Schrödinger equation by a piecewise linear term, thus reducing the problem to a set of uncoupled linear inhomogeneous differential equations. The resulting analytic expressions constitute an excellent approximation to the exact solutions, as is explicitly shown in the case of the kink, the vortex, and a delta function trap. Such a piecewise linear emulation can be used for any differential equation where the only nonlinearity is a Ψ³ one. In particular, it can be used for the nonlinear Schrödinger equation in the presence of harmonic traps, giving analytic Bose-Einstein condensate solutions that reproduce very accurately the numerically calculated ones in one, two, and three dimensions.
NASA Astrophysics Data System (ADS)
Goryk, A. V.; Koval'chuk, S. B.
2018-05-01
An exact elasticity-theory solution is presented for the problem of plane bending of a narrow layered composite cantilever beam by tangential and normal loads distributed on its free end. Components of the stress-strain state are found for the whole package of layers by directly integrating the differential equations of the plane elasticity problem, using an analytic representation of the piecewise constant functions describing the mechanical characteristics of the layer materials. The continuous solution obtained is realized for a four-layer beam with account of kinematic boundary conditions simulating the rigid fixation of one of its ends. The solution obtained allows one to predict the strength and stiffness of composite cantilever beams and to construct applied analytical solutions for various problems on the elastic bending of layered beams.
Concentric layered Hermite scatterers
NASA Astrophysics Data System (ADS)
Astheimer, Jeffrey P.; Parker, Kevin J.
2018-05-01
The long wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both the acoustic and electromagnetic regimes. Furthermore, the relationship between backscattered amplitude and wavenumber k was shown to follow power laws higher than the Rayleigh scattering k² power law when the inhomogeneity had a material composition that conformed to a Gaussian-weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers. In this case the inhomogeneous material property conforms to a piecewise-constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneous function and its spatial transform. This development opens an additional path for the construction and use of scatterers with unique power law behavior.
Building an Understanding of Functions: A Series of Activities for Pre-Calculus
ERIC Educational Resources Information Center
Carducci, Olivia M.
2008-01-01
Building block toys can be used to illustrate various concepts connected with functions including graphs and rates of change of linear and exponential functions, piecewise functions, and composition of functions. Five brief activities suitable for a pre-calculus course are described.
Balance Contrast Enhancement using piecewise linear stretching
NASA Astrophysics Data System (ADS)
Rahavan, R. V.; Govil, R. C.
1993-04-01
Balance Contrast Enhancement is one of the techniques employed to produce color composites with increased color contrast. It equalizes the range and mean of the three images used for color composition. This results in a color composite with large variation in hue. Here, it is shown that piecewise linear stretching can be used to perform Balance Contrast Enhancement. In comparison with the Balance Contrast Enhancement Technique using a parabolic segment as the transfer function (BCETP), the method presented here is algorithmically simple, constraint-free, and produces comparable results.
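A minimal sketch of such a piecewise linear stretch hinges two linear segments at the input mean, so that the channel's min/mean/max map onto target values. This is illustrative only (the published BCET variants match the output mean exactly, which this sketch only approximates by pinning the pivot), and the function name and targets are ours:

```python
def balance_stretch(channel, out_min=0.0, out_max=255.0, out_mean=110.0):
    """Two-segment linear stretch mapping the channel's min, mean, and max
    to out_min, out_mean, and out_max respectively. Assumes a
    non-degenerate channel (min < mean < max)."""
    lo, hi = min(channel), max(channel)
    mean = sum(channel) / len(channel)

    def f(v):
        if v <= mean:  # lower segment: [lo, mean] -> [out_min, out_mean]
            return out_min + (v - lo) * (out_mean - out_min) / (mean - lo)
        # upper segment: [mean, hi] -> [out_mean, out_max]
        return out_mean + (v - mean) * (out_max - out_mean) / (hi - mean)

    return [f(v) for v in channel]

# apply the same stretch independently to each band of a color composite
stretched = balance_stretch([10, 20, 30, 60, 80])
```

Applying this to each of the three input bands with common targets equalizes their ranges and pivots, which is what yields the increased hue variation in the composite.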
1984-07-01
piecewise constant energy dependence. This is a seven-dimensional problem with time dependence, three spatial and two angular or directional variables and ... in extending the computer implementation of the method to time- and energy-dependent problems, and to solving and validating this technique on a ... problems they have severe limitations. The Monte Carlo method usually requires the use of many hours of expensive computer time, and for deep
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or numbers of events per time interval, are discrete data arising from repeated time-to-event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
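A piecewise-constant event rate can be simulated by drawing one Poisson count per interval, each interval with its own rate. This is a generic sketch, not the tutorial's code; the rates, subject count, and function names are made up for illustration:

```python
import math
import random

def simulate_counts(rates, n_subjects, seed=0):
    """Draw one count per interval per subject from a Poisson model with a
    piecewise-constant event rate (one rate per interval)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm: multiply uniforms until the product drops
        # below exp(-lam); the number of factors is Poisson(lam)
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    return [[poisson(lam) for lam in rates] for _ in range(n_subjects)]

# three intervals with rates 0.5, 2.0, 1.0 events per interval
data = simulate_counts(rates=[0.5, 2.0, 1.0], n_subjects=2000)
```

The empirical per-interval means of the simulated counts recover the piecewise-constant rates, which is the basic diagnostic idea for this model family.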
Discontinuous dual-primal mixed finite elements for elliptic problems
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Micheletti, Stefano; Sacco, Riccardo
2000-01-01
We propose a novel discontinuous mixed finite element formulation for the solution of second-order elliptic problems. Fully discontinuous piecewise polynomial finite element spaces are used for the trial and test functions. The discontinuous nature of the test functions at the element interfaces allows us to introduce new boundary unknowns that, on the one hand, enforce the weak continuity of the trial functions, and on the other, avoid the need to define a priori algorithmic fluxes as in standard discontinuous Galerkin methods. Static condensation is performed at the element level, leading to a solution procedure based on the sole interface unknowns. The resulting family of discontinuous dual-primal mixed finite element methods is presented in the one- and two-dimensional cases. In the one-dimensional case, we show the equivalence of the method with implicit Runge-Kutta schemes of the collocation type exhibiting optimal behavior. Numerical experiments in one and two dimensions demonstrate the order of accuracy of the new method, confirming the results of the analysis.
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
Curvature and frontier orbital energies in density functional theory
NASA Astrophysics Data System (ADS)
Kronik, Leeor; Stein, Tamar; Autschbach, Jochen; Govind, Niranjan; Baer, Roi
2013-03-01
Perdew et al. [Phys. Rev. Lett. 49, 1691 (1982)] discovered and proved two different properties of exact Kohn-Sham density functional theory (DFT): (i) The exact total energy versus particle number is a series of linear segments between integer electron points; (ii) Across an integer number of electrons, the exchange-correlation potential may ``jump'' by a constant, known as the derivative discontinuity (DD). Here, we show analytically that in both the original and the generalized Kohn-Sham formulation of DFT, the two are in fact two sides of the same coin. Absence of a derivative discontinuity necessitates deviation from piecewise linearity, and the latter can be used to correct for the former, thereby restoring the physical meaning of the orbital energies. Using selected small molecules, we show that this results in a simple correction scheme for any underlying functional, including semi-local and hybrid functionals as well as Hartree-Fock theory, suggesting a practical correction for the infamous gap problem of DFT. Moreover, we show that optimally-tuned range-separated hybrid functionals can inherently minimize both DD and curvature, thus requiring no correction, and show that this can be used as a sound theoretical basis for novel tuning strategies.
NASA Technical Reports Server (NTRS)
Erickson, Gary E.
2010-01-01
Response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration at supersonic speeds in the NASA LaRC Unitary Plan Wind Tunnel. The Mach 3 staging was dominated by shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. The inference space was partitioned into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using central composite designs capable of fitting full second-order response functions. The underlying aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle were estimated using piecewise-continuous lower-order polynomial functions. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. Augmenting the central composite designs to full third-order using computer-generated D-optimality criteria was evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting lower-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.
A hazard rate analysis of fertility using duration data from Malaysia.
Chang, C
1988-01-01
Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, hij, is determined by duration dependence, tij, measured by the waiting time to conception; unmeasured heterogeneity (HETi); time-invariant variables, Yi (race, cohort, education, age at marriage); and time-varying variables, Xij (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic chi-square test for the equality of constant hazard rates across birth orders, allowing for time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed-effects heterogeneity and a Weibull distribution for the duration of waiting time to conception, the empirical results revealed a negative parity effect, a negative impact from male children, and a positive effect from child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, socioeconomic variables such as cohort, child mortality, income, and race had significant effects after controlling for the length of the preceding birth interval. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
Topics in electromagnetic, acoustic, and potential scattering theory
NASA Astrophysics Data System (ADS)
Nuntaplook, Umaporn
With the recent renewed interest in the classical topics of acoustics and electromagnetics in the contexts of nano-technology, transformation optics, fiber optics, metamaterials with negative refractive indices, cloaking, and invisibility, time-independent scattering theory in quantum mechanics is becoming a useful field to re-examine. One of the key areas of electromagnetic theory, the scattering of plane electromagnetic waves, is based on the properties of the refractive indices in the various media. It transpires that the refractive index of a medium and the potential in quantum scattering theory are intimately related. In many cases, understanding such scattering in radially symmetric media is sufficient to gain insight into scattering in more complex media. Meeting the challenge of variable refractive indices and possibly complicated boundary conditions therefore requires accurate and efficient numerical methods and, where possible, analytic solutions to the radial equations from the governing scalar and vector wave equations (in acoustics and electromagnetic theory, respectively). Until relatively recently, researchers assumed a constant refractive index throughout the medium of interest. However, the most interesting and increasingly useful cases are those with non-constant refractive index profiles. In the majority of this dissertation the focus is on media with piecewise constant refractive indices in radially symmetric media. The method discussed is based on the solution of Maxwell's equations for scattering of plane electromagnetic waves from a dielectric (or "transparent") sphere in terms of the related Helmholtz equation. The main body of the dissertation (Chapters 2 and 3) is concerned with scattering from (i) a uniform spherical inhomogeneity embedded in an external medium with different properties, and (ii) a piecewise-uniform central inhomogeneity in the external medium.
The latter results contain a natural generalization of the former (previously known) results. The link with time-independent quantum mechanical scattering, via morphology-dependent resonances (MDRs), is discussed in Chapter 2. This requires a generalization of the classical problem for scattering of a plane wave from a uniform spherically-symmetric inhomogeneity (in which the velocity of propagation is a function only of the radial coordinate r, i.e., c = c(r)) to a piecewise-uniform inhomogeneity. In Chapter 3 the Jost-function formulation of potential scattering theory is used to solve the radial differential equation for scattering, which can be converted into a corresponding integral equation via the Jost boundary conditions. The first two iterations for the zero angular momentum case l = 0 are provided for both two-layer and three-layer models. It is found that the iterative technique is most useful for long wavelengths and sufficiently small ratios of interior and exterior wavenumbers. Exact solutions are also provided for these cases. In Chapter 4 the time-independent quantum mechanical 'connection' is exploited further by generalizing previous work on a spherical well potential to the case where a delta 'function' potential is appended to the exterior of the well (for l ≠ 0). This corresponds to an idealization of the former approach to the case of a 'coated sphere'. The poles of the associated 'S-matrix' are important in this regard, since they correspond directly with the morphology-dependent resonances discussed in Chapter 2. These poles (for the l = 0 case, to compare with Nussenzveig's analysis) are tracked in the complex wavenumber plane as the strength of the delta function potential changes. Finally, a set of 4 Appendices is provided to clarify some of the connections between (i) the scattering of acoustic/electromagnetic waves from a penetrable/dielectric sphere and (ii) time-independent potential scattering theory in quantum mechanics.
This, it is hoped, will be the subject of future work.
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-01-01
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
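As a rough sketch of the adaptive weighting idea, the weights can be taken as a decaying exponential of the local intensity gradient, so that strong edges contribute little to the penalty and are therefore preserved; the exact weight definition and the parameter value below are illustrative assumptions, not the paper's.

```python
import numpy as np

def awtv(img, delta=0.005):
    """Adaptive-weighted TV of a 2D image (sketch).

    Weights w = exp(-(g/delta)**2) decay with the local gradient g, so
    sharp edges are penalized far less than small noise fluctuations.
    """
    gx = np.diff(img, axis=1)  # horizontal finite differences
    gy = np.diff(img, axis=0)  # vertical finite differences
    wx = np.exp(-(gx / delta) ** 2)
    wy = np.exp(-(gy / delta) ** 2)
    return np.sum(wx * np.abs(gx)) + np.sum(wy * np.abs(gy))

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # one sharp edge
# With unit weights this image's TV would be 8; the adaptive weights
# nearly zero out the edge penalty, so the edge survives minimization.
assert awtv(img) < 1e-3
```

In the AwTV-POCS algorithm this weighted functional replaces the plain TV term inside the same projection-onto-convex-sets iteration.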
Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad
2015-06-01
Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
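A minimal illustration of piecewise linear regression with a single breakpoint, fitted by searching candidate break locations and solving a least-squares problem with a hinge basis; the study's actual software and knot-selection procedure are not specified in the abstract, so this is only a sketch on synthetic data.

```python
import numpy as np

def fit_piecewise_linear(x, y):
    """Two-segment continuous piecewise linear fit.

    Searches candidate breakpoints at interior data points; at each
    candidate c, fits y ~ b0 + b1*x + b2*(x - c)_+ by least squares.
    Returns (sse, breakpoint, coefficients) of the best fit.
    """
    best = None
    for c in x[2:-2]:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ beta - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best

x = np.linspace(0.0, 10.0, 101)
y = np.where(x < 4.0, x, 4.0 + 3.0 * (x - 4.0))  # true break at x = 4, slopes 1 and 3
sse, c, beta = fit_piecewise_linear(x, y)
assert abs(c - 4.0) < 1e-6
```

A model of this form can capture a threshold in a structural-functional relationship (e.g. lesions mattering only beyond some severity) that a single regression line would average away.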
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0
Modeling the human as a controller in a multitask environment
NASA Technical Reports Server (NTRS)
Govindaraj, T.; Rouse, W. B.
1978-01-01
Modeling the human as a controller of slowly responding systems with preview is considered. Along with control tasks, discrete noncontrol tasks occur at irregular intervals. In multitask situations such as these, it has been observed that humans tend to apply piecewise constant controls. It is believed that the magnitude of controls and the durations for which they remain constant are dependent directly on the system bandwidth, preview distance, complexity of the trajectory to be followed, and nature of the noncontrol tasks. A simple heuristic model of human control behavior in this situation is presented. The results of a simulation study, whose purpose was determination of the sensitivity of the model to its parameters, are discussed.
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-01-01
In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). We then apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multi-mode dynamics of densities in road segments, and transform the nonlinear expressions for the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology and size. Next we analyze the mode types and the number of modes in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices is computed using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly illustrate the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented through a decentralized modeling approach and distributed observer design in future research. PMID:28353664
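The piecewise linear flux at the heart of the CTM is the minimum of a sending (demand) and a receiving (supply) function of density; a minimal single-step update on a ring of cells might look as follows. The parameter values are illustrative, not taken from the Beijing network, and real segments would use on/off-ramp terms.

```python
import numpy as np

def ctm_step(rho, dx, dt, v=80.0, w=20.0, rho_jam=200.0, q_max=2000.0):
    """One Cell Transmission Model update on a ring of cells.

    The inter-cell flux is min(demand, supply); each term is a piecewise
    linear function of density, which is what yields the multi-mode
    piecewise affine (PWALS) structure when the min branches are
    enumerated as modes.  rho in veh/km, dx in km, dt in h.
    """
    demand = np.minimum(v * rho, q_max)               # sending function
    supply = np.minimum(w * (rho_jam - rho), q_max)   # receiving function
    flux = np.minimum(demand, np.roll(supply, -1))    # flow from cell i to i+1
    # Conservation: inflow from upstream minus outflow downstream.
    return rho + (dt / dx) * (np.roll(flux, 1) - flux)

rho = np.array([50.0, 150.0, 50.0, 50.0])  # one congested cell on a ring
rho1 = ctm_step(rho, dx=0.5, dt=0.005)
assert np.isclose(rho1.sum(), rho.sum())   # vehicles conserved on the ring
```

Enumerating which branch of each `minimum` is active in each cell gives exactly the finite set of affine modes that the switched observer in the paper distinguishes.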
Parameterizations for ensemble Kalman inversion
NASA Astrophysics Data System (ADS)
Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.
2018-05-01
The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
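How a level set parameterization yields piecewise constant reconstructions can be sketched in a few lines: the ensemble method updates a smooth level set function, and thresholding at its zero contour produces the discontinuous physical field. The two conductivity values below are illustrative assumptions.

```python
import numpy as np

def piecewise_constant_from_level_set(phi, kappa_in=10.0, kappa_out=1.0):
    """Map a continuous level set function to a two-valued (piecewise
    constant) field; the interface is the zero contour of phi."""
    return np.where(phi > 0, kappa_in, kappa_out)

# A circular inclusion encoded as a signed distance function.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi = 0.5 - np.sqrt(x**2 + y**2)          # positive inside radius 0.5
kappa = piecewise_constant_from_level_set(phi)
assert set(np.unique(kappa)) == {1.0, 10.0}
```

Because the ensemble updates act on the smooth field `phi` rather than on `kappa` itself, the interface topology is free to change during the iteration, which is the point of the geometric parameterization.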
Primal-mixed formulations for reaction-diffusion systems on deforming domains
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo
2015-10-01
We propose a finite element formulation for a coupled elasticity-reaction-diffusion system written in a fully Lagrangian form and governing the spatio-temporal interaction of species inside an elastic, or hyper-elastic, body. A primal weak formulation is the baseline model for the reaction-diffusion system written in the deformed domain, and a finite element method with piecewise linear approximations is employed for its spatial discretization. On the other hand, the strain is introduced as a mixed variable in the equations of elastodynamics, which in turn acts as the coupling field needed to update the diffusion tensor of the modified reaction-diffusion system written in the deformed domain. The discrete mechanical problem yields a mixed finite element scheme based on row-wise Raviart-Thomas elements for stresses, Brezzi-Douglas-Marini elements for displacements, and piecewise constant pressure approximations. The application of the present framework to the study of several coupled biological systems on deforming geometries in two and three spatial dimensions is discussed, and some illustrative examples are provided and extensively analyzed.
Computer Models of Underwater Acoustic Propagation.
1980-01-02
deterministic propagation loss result. Development of a model for the more general problem is required, as evidenced by the trends in future sonar designs ...air. The water column itself is treated as an ideal fluid incapable of supporting shear stresses and having a uniform or, at most, piecewise constant...evaluated at any depth (zs ≤ z ≤ zN). The layer in which the source is located will be designated by LS and the receiver layer by LR. The depth dependent
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on the survival times. Although it is a parametric model in the strict sense, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, it usually does not include all such variables that are known or measurable, and these omitted variables become interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity, or frailty. This paper analyses the effects of unobserved population heterogeneity on patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of the two different priors.
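The core of the piecewise exponential model, a survival function built from a piecewise constant baseline hazard, can be sketched as follows. This minimal version has no frailty term or covariates; a frailty model would multiply the hazard by an individual-level random effect, and covariates would enter through a factor exp(x'beta).

```python
import numpy as np

def pe_survival(t, cuts, hazards):
    """Survival S(t) = exp(-H(t)) for a piecewise constant hazard.

    hazards[j] applies on the interval [cuts[j], cuts[j+1]), with the
    last interval open-ended; cuts[0] must be 0.
    """
    H = 0.0
    for j, h in enumerate(hazards):
        upper = cuts[j + 1] if j + 1 < len(cuts) else np.inf
        H += h * max(0.0, min(t, upper) - cuts[j])  # accumulate cumulative hazard
        if t <= upper:
            break
    return np.exp(-H)

# Hazard 0.1 on [0, 5), then 0.3 afterwards.
assert np.isclose(pe_survival(5.0, [0.0, 5.0], [0.1, 0.3]), np.exp(-0.5))
assert np.isclose(pe_survival(7.0, [0.0, 5.0], [0.1, 0.3]), np.exp(-0.5 - 0.6))
```

Because the cumulative hazard is linear within each interval, the likelihood factorizes over intervals, which is what makes the model convenient for MCMC estimation as in the paper.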
The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.
NASA Astrophysics Data System (ADS)
Loridan, V.; Ripoll, J. F.; De Vuyst, F.
2017-12-01
Much work has been done during the past 40 years on the analytical solution of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the required number of eigenmodes N needed for an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also to Jupiter and Saturn are discussed.
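The matching step, replacing an L-dependent lifetime tau(L) by a constant on each of M subintervals, can be sketched as below; interval averaging is one simple matching choice, and the tau profile used here is made up for illustration.

```python
import numpy as np

def piecewise_constant_lifetime(tau, L_edges):
    """Approximate a lifetime profile tau(L) by its average on each of
    the M subintervals defined by L_edges (one matching choice; other
    matching rules, e.g. midpoint values, are equally possible)."""
    avgs = []
    for a, b in zip(L_edges[:-1], L_edges[1:]):
        Ls = np.linspace(a, b, 101)
        avgs.append(tau(Ls).mean())  # interval average of tau(L)
    return np.array(avgs)

tau = lambda L: 10.0 / L**4          # a steeply L-dependent loss time (made up)
edges = np.linspace(2.0, 6.0, 9)     # M = 8 subintervals in shell L
tau_pc = piecewise_constant_lifetime(tau, edges)
assert np.all(np.diff(tau_pc) < 0)   # lifetime decreases with L, as input
```

With a constant lifetime on each subinterval, the diffusion-loss operator is separable piece by piece, which is what turns the eigenvalue problem into a Sturm-Liouville problem with M interfaces.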
Electrostatic forces in the Poisson-Boltzmann systems
NASA Astrophysics Data System (ADS)
Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2013-09-01
Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the difficulty of assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of these formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for their validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and with discontinuous dielectrics as in the widely used classical piecewise constant dielectric models.
NASA Astrophysics Data System (ADS)
Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.
2018-05-01
The paper deals with electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering characteristics of the body are calculated using the integral equation method, with a Fredholm equation of the second kind used to compute the electric current density. In solving the integral equation by the method of moments, the authors properly account for the singularity of the kernel and choose piecewise constant functions as basis functions. Within the Kirchhoff integral approach, the scattered electromagnetic field can then be computed from the obtained electric currents. The observation angle sector belongs to the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network in which all neurons have a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the resulting optimized dimensions of the diffractive body, and outlines the basic steps of a calculation technique for diffractive bodies based on combining the integral equation and neural network methods.
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of empirical data aggregation are considered, namely improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is the demonstration of how to represent the aggregated data: we propose using piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution, and we use the density function concept to study its properties. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, we propose data representation procedures based on piecewise polynomial models, together with new approaches to modeling functional dependencies based on spline aggregation.
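The simplest instance of the described aggregation, representing raw observations by a piecewise constant density (a normalized histogram), can be sketched as follows; smoothing these bin heights with splines would give the piecewise polynomial refinement the paper proposes.

```python
import numpy as np

def aggregate_to_density(sample, bins=20):
    """Aggregate raw observations into a piecewise constant density
    estimate (normalized histogram heights and bin edges)."""
    heights, edges = np.histogram(sample, bins=bins, density=True)
    return heights, edges

rng = np.random.default_rng(0)
heights, edges = aggregate_to_density(rng.normal(size=10_000))
# A density must integrate to one over its support.
assert np.isclose(np.sum(heights * np.diff(edges)), 1.0)
```

Downstream, the regression model then consumes these density functions rather than the raw sample, which is the preprocessing role the abstract describes.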
Generalized cable equation model for myelinated nerve fiber.
Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph
2005-10-01
Herein, the well-known cable equation for the nonmyelinated axon model is extended analytically to a myelinated axon formulation. The myelinated membrane conductivity is represented via a Fourier series expansion. The classical cable equation is thereby modified into a linear second-order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, uniformly converges provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of a piecewise constant membrane conductivity profile, resulting in an explicit closed-form expression for the transmembrane potential in terms of trigonometric functions. The Floquet modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the point-wise passivity constraint on the periodic membrane is properly modified. Indeed, the modified condition, enforcing the passivity constraint on the average conductivity only, leads, for the first time, to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber is verified herein through a rigorous Green's function formulation and numerical simulations of the transmembrane potential induced in a three-dimensional myelinated cylindrical cell.
It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
Direct AUC optimization of regulatory motifs.
Zhu, Lin; Zhang, Hong-Bo; Huang, De-Shuang
2017-07-15
The discovery of transcription factor binding site (TFBS) motifs is essential for untangling the complex mechanisms of genetic variation under different developmental and environmental conditions. Among the many computational approaches for de novo identification of TFBS motifs, discriminative motif learning (DML) methods have proven promising for harnessing the discovery power of the huge amounts of accumulated high-throughput binding data. However, they have to sacrifice accuracy for speed and can fail to fully utilize the information in the input sequences. We propose a novel algorithm called CDAUC for optimizing DML-learned motifs based on the area under the receiver-operating characteristic curve (AUC) criterion, which has been widely used in the literature to evaluate the significance of extracted motifs. We show that when the considered AUC loss function is optimized in a coordinate-wise manner, the cost function of each resultant sub-problem is a piecewise constant function, whose optimal value can be found exactly and efficiently. Further, a key step of each iteration of CDAUC can be solved efficiently as a computational geometry problem. Experimental results on real-world high-throughput datasets illustrate that CDAUC outperforms competing methods for refining DML motifs, while being an order of magnitude faster. Meanwhile, preliminary results also show that CDAUC may be useful for improving the interpretability of convolutional kernels generated by emerging deep learning approaches for predicting TF sequence specificities. CDAUC is available at: https://drive.google.com/drive/folders/0BxOW5MtIZbJjNFpCeHlBVWJHeW8 . dshuang@tongji.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
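The observation that the coordinate-wise AUC objective is piecewise constant can be checked directly: AUC depends only on the ranking of scores, so varying one score continuously changes it only at the points where that score crosses another. A small sketch with made-up scores:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Exact AUC: the fraction of (positive, negative) pairs ranked
    correctly, counting ties as half."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

pos = np.array([0.9, 0.4])
neg = np.array([0.6, 0.2])
assert auc(pos, neg) == 0.75

# Sweep one positive score over an interval that crosses no other score:
# the AUC never moves, i.e. it is piecewise constant in that coordinate.
aucs = {auc(np.array([0.9, s]), neg) for s in np.linspace(0.25, 0.55, 7)}
assert aucs == {0.75}
```

This is why each coordinate-wise sub-problem can be solved exactly: one only needs to evaluate the finitely many constant pieces between crossing points.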
SINGER, A.; GILLESPIE, D.; NORBURY, J.; EISENBERG, R. S.
2009-01-01
Ion channels are proteins with a narrow hole down their middle that control a wide range of biological functions by controlling the flow of spherical ions from one macroscopic region to another. Ion channels do not change their conformation on the biological time scale once they are open, so they can be described by a combination of Poisson and drift-diffusion (Nernst–Planck) equations called PNP in biophysics. We use singular perturbation techniques to analyse the steady-state PNP system for a channel with a general geometry and a piecewise constant permanent charge profile. We construct an outer solution for the case of a constant permanent charge density in three dimensions that is also a valid solution of the one-dimensional system. The asymptotic current–voltage (I–V) characteristic curve of the device (obtained by the singular perturbation analysis) is shown to be a very good approximation of the numerical I–V curve (obtained by solving the system numerically). The physical constraint of non-negative concentrations implies a unique solution, i.e., for each given applied potential there corresponds a unique electric current (relaxing this constraint yields non-physical multiple solutions for sufficiently large voltages). PMID:19809600
A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems
NASA Astrophysics Data System (ADS)
Liu, Zuolin; Xu, Jian
2018-04-01
In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement over the time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function of integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. The loss function can then be minimized with the traditional least-squares algorithm for linear systems or with an iterative one for nonlinear systems. Such a method can effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even under conditions of sparse data or low sampling frequency, this method still guarantees high accuracy in identifying linear and nonlinear parameters.
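The weak-form idea (testing against piecewise linear hat functions and integrating by parts eliminates second derivatives, so the noisy displacement never has to be differentiated numerically) can be sketched for a linear oscillator. This is a simplified illustration, not the authors' formulation; the closed-form element integrals assume uniformly sampled data:

```python
import numpy as np

# Sketch: identify damping c and stiffness k in  x'' + c x' + k x = 0
# from sampled displacement alone.  For a hat function phi_i on a
# uniform grid of spacing h, exact integrals of the piecewise-linear
# interpolant of x give:
#   integral x'' phi_i = (x[i-1] - 2 x[i] + x[i+1]) / h
#   integral x'  phi_i = (x[i+1] - x[i-1]) / 2
#   integral x   phi_i = h/6 * (x[i-1] + 4 x[i] + x[i+1])
c_true, k_true = 0.5, 4.0
h = 0.01
t = np.arange(0.0, 10.0, h)
wd = np.sqrt(k_true - (c_true / 2) ** 2)
x = np.exp(-c_true / 2 * t) * np.cos(wd * t)    # analytic free response

xm, x0, xp = x[:-2], x[1:-1], x[2:]
# one weak-form equation per interior node:  c*d1 + k*d0 = -d2
A = np.column_stack([(xp - xm) / 2, h / 6 * (xm + 4 * x0 + xp)])
b = -(xm - 2 * x0 + xp) / h
(c_est, k_est), *_ = np.linalg.lstsq(A, b, rcond=None)
```

The overdetermined system is solved in the least-squares sense, mirroring the quadratic loss of integrals over the whole time history.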
High-Speed Numeric Function Generator Using Piecewise Quadratic Approximations
2007-09-01
The user specifies the function to approximate; the program turns the provided function into an inline function.
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and with motion discontinuities, and produces accurate piecewise-smooth motion fields.
Gradient-based controllers for timed continuous Petri nets
NASA Astrophysics Data System (ADS)
Lefebvre, Dimitri; Leclercq, Edouard; Druaux, Fabrice; Thomas, Philippe
2015-07-01
This paper is about control design for timed continuous Petri nets that are described as piecewise affine systems. In this context, the marking vector is considered as the state space vector, weighted markings of place subsets are defined as the model outputs, and the model inputs correspond to multiplicative control actions that slow down the firing rate of some controllable transitions. Structural and functional sensitivities of the outputs with respect to the inputs are discussed in terms of Petri nets. Then, gradient-based controllers (GBC) are developed in order to adapt the control actions of the controllable transitions according to desired trajectories of the outputs.
NASA Astrophysics Data System (ADS)
Admal, Nikhil Chandra; Po, Giacomo; Marian, Jaime
2017-12-01
The standard way of modeling plasticity in polycrystals is by using the crystal plasticity model for single crystals in each grain, and imposing suitable traction and slip boundary conditions across grain boundaries. In this fashion, the system is modeled as a collection of boundary-value problems with matching boundary conditions. In this paper, we develop a diffuse-interface crystal plasticity model for polycrystalline materials that results in a single boundary-value problem with a single crystal as the reference configuration. Using a multiplicative decomposition of the deformation gradient into lattice and plastic parts, i.e. F(X,t) = F^L(X,t) F^P(X,t), an initial stress-free polycrystal is constructed by imposing F^L to be a piecewise constant rotation field R_0(X), and F^P = R_0(X)^T, thereby having F(X,0) = I and zero elastic strain. This model serves as a precursor to higher-order crystal plasticity models with grain boundary energy and evolution.
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
Tracking Simulation of Third-Integer Resonant Extraction for Fermilab's Mu2e Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Chong Shik; Amundson, James; Michelotti, Leo
2015-02-13
The Mu2e experiment at Fermilab requires acceleration and transport of intense proton beams in order to deliver stable, uniform particle spills to the production target. To meet the experimental requirement, particles will be extracted slowly from the Delivery Ring to the external beamline. Using Synergia2, we have performed multi-particle tracking simulations of third-integer resonant extraction in the Delivery Ring, including space charge effects, physical beamline elements, and apertures. A piecewise linear ramp profile of tune quadrupoles was used to maintain a constant averaged spill rate throughout extraction. To study and minimize beam losses, we implemented and introduced a number of features, beamline element apertures, and septum plane alignments. Additionally, the RF Knockout (RFKO) technique, which excites particles transversely, is employed for spill regulation. Combined with a feedback system, it assists in fine-tuning spill uniformity. Simulation studies were carried out to optimize the RFKO feedback scheme, which will be helpful in designing the final spill regulation system.
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This makes it possible to process even large volumes in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.
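The flavor of the decoupled subproblems can be conveyed by the one-dimensional Potts (piecewise constant Mumford-Shah) problem, which is exactly solvable by dynamic programming. The sketch below is the classic O(n^2) DP, not the authors' 3D GPU algorithm:

```python
import numpy as np

def potts_segment_1d(y, gamma):
    """Exact minimizer of the 1D Potts functional
       sum over segments of squared deviation from the segment mean
       + gamma * (number of jumps),
    via dynamic programming over the last jump position."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])       # prefix sums for
    s2 = np.concatenate([[0.0], np.cumsum(y * y)])   # O(1) segment costs

    def seg_cost(l, r):  # squared deviation of y[l:r] from its mean
        m = r - l
        return s2[r] - s2[l] - (s1[r] - s1[l]) ** 2 / m

    best = np.zeros(n + 1)
    last = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        cands = [(best[l] + seg_cost(l, r) + (gamma if l > 0 else 0.0), l)
                 for l in range(r)]
        best[r], last[r] = min(cands)
    out, r = np.empty(n), n          # backtrack into the piecewise
    while r > 0:                     # constant estimate
        l = last[r]
        out[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return out
```

A small jump penalty recovers a clean two-segment signal exactly; a very large penalty collapses the estimate to the global mean.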
Estimation of variance in Cox's regression model with shared gamma frailties.
Andersen, P K; Klein, J P; Knudsen, K M; Tabanera y Palacios, R
1997-12-01
The Cox regression model with a shared frailty factor allows for unobserved heterogeneity or for statistical dependence between the observed survival times. Estimation in this model when the frailties are assumed to follow a gamma distribution is reviewed, and we address the problem of obtaining variance estimates for regression coefficients, frailty parameter, and cumulative baseline hazards using the observed nonparametric information matrix. A number of examples are given comparing this approach with fully parametric inference in models with piecewise constant baseline hazards.
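As a minimal illustration of the piecewise constant baseline hazard used in the fully parametric comparison, the cumulative hazard is simply a running sum of rate times interval length, and survival is its exponential. The breakpoints and rates below are arbitrary:

```python
import math

def cumulative_hazard(t, cuts, hazards):
    """Integrate a piecewise constant hazard up to time t.
    cuts = interval boundaries [t1, t2, ...]; hazards has len(cuts)+1
    entries, hazards[k] applying on [t_k, t_{k+1}) with t_0 = 0."""
    H, left = 0.0, 0.0
    for cut, h in zip(cuts, hazards):
        if t <= cut:
            return H + h * (t - left)
        H += h * (cut - left)
        left = cut
    return H + hazards[-1] * (t - left)

def survival(t, cuts, hazards):
    """S(t) = exp(-H(t)) for the same piecewise constant hazard."""
    return math.exp(-cumulative_hazard(t, cuts, hazards))
```
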
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
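A minimal sketch of the criterion, assuming the relative log posterior from Knuth's derivation (multinomial likelihood with a non-informative prior); bin placement uses numpy's uniform-width histogram, and the constant-in-M terms are dropped:

```python
import math
import numpy as np

def log_posterior(data, m):
    """Relative log posterior for an m-bin piecewise-constant density,
    up to an m-independent constant (the quantity optBINS maximizes)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * math.log(m)
            + math.lgamma(m / 2) - m * math.lgamma(0.5)
            - math.lgamma(n + m / 2)
            + sum(math.lgamma(c + 0.5) for c in counts))

def optbins(data, m_max=50):
    """Number of uniform-width bins maximizing the relative log posterior."""
    return max(range(1, m_max + 1), key=lambda m: log_posterior(data, m))
```

For a clearly bimodal sample, the posterior penalizes the single-bin (flat) model and selects a finer binning.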
Deconvolution of noisy transient signals: a Kalman filtering application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Zicker, J.E.
The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
2013-04-22
Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots, Journal of Guidance, Control, and Dynamics (2010). Naira Hovakimyan, L1 Adaptive Controller for MIMO Systems with Unmatched Uncertainties Using Modified Piecewise Constant Adaptation Law, IEEE 51st... This L1 adaptive control architecture uses data from the reference model.
Consensus-Based Formation Control of a Class of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Joshi, Suresh; Gonzalez, Oscar R.
2014-01-01
This paper presents a consensus-based formation control scheme for autonomous multi-agent systems represented by double integrator dynamics. Assuming that the information graph topology consists of an undirected connected graph, a leader-based consensus-type control law is presented and shown to provide asymptotic formation stability when subjected to piecewise constant formation velocity commands. It is also shown that global asymptotic stability is preserved in the presence of (0, infinity)-sector monotonic non-decreasing actuator nonlinearities.
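A simulation sketch of this kind of behavior under a piecewise constant velocity command. Simplifications relative to the paper: the command is broadcast to every agent rather than routed through a leader, actuators are linear (no sector nonlinearity), and the gains, offsets, and path graph are illustrative:

```python
import numpy as np

# Double-integrator agents on an undirected path graph 0-1-2-3 tracking
# a piecewise constant velocity command while holding a formation
# specified by the offsets delta.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])        # graph Laplacian
delta = np.array([0.0, 1.0, 2.0, 3.0])      # desired formation offsets
kp, kv, dt = 1.0, 2.0, 0.005

p = np.array([0.3, 0.1, 2.9, 1.5])          # scrambled initial positions
v = np.zeros(4)
for step in range(40000):                   # 200 s of simulated time
    v_cmd = 1.0 if step * dt < 100.0 else -0.5   # piecewise constant command
    # consensus on formation error + damping toward the commanded velocity
    u = -kp * L @ (p - delta) - kv * (v - v_cmd)
    p, v = p + dt * v, v + dt * u
```

After each constant-command interval the agents settle to the commanded velocity with the relative offsets delta restored, consistent with asymptotic formation stability for piecewise constant commands.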
Locally Contractive Dynamics in Generalized Integrate-and-Fire Neurons*
Jimenez, Nicolas D.; Mihalas, Stefan; Brown, Richard; Niebur, Ernst; Rubin, Jonathan
2013-01-01
Integrate-and-fire models of biological neurons combine differential equations with discrete spike events. In the simplest case, the reset of the neuronal voltage to its resting value is the only spike event. The response of such a model to constant input injection is limited to tonic spiking. We here study a generalized model in which two simple spike-induced currents are added. We show that this neuron exhibits not only tonic spiking at various frequencies but also the commonly observed neuronal bursting. Using analytical and numerical approaches, we show that this model can be reduced to a one-dimensional map of the adaptation variable and that this map is locally contractive over a broad set of parameter values. We derive a sufficient analytical condition on the parameters for the map to be globally contractive, in which case all orbits tend to a tonic spiking state determined by the fixed point of the return map. We then show that bursting is caused by a discontinuity in the return map, in which case the map is piecewise contractive. We perform a detailed analysis of a class of piecewise contractive maps that we call bursting maps and show that they robustly generate stable bursting behavior. To the best of our knowledge, this work is the first to point out the intimate connection between bursting dynamics and piecewise contractive maps. Finally, we discuss bifurcations in this return map, which cause transitions between spiking patterns. PMID:24489486
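The connection between a discontinuity and piecewise contraction can be seen in a toy map (not the paper's adaptation map): both branches contract with slope 1/2, yet neither branch contains its own fixed point, so orbits are forced back and forth across the discontinuity and settle onto a stable periodic cycle:

```python
def toy_map(x):
    """Piecewise contractive map with a discontinuity at x = 1/2.
    Branch fixed points (1.2 and -0.2) lie outside their own branches,
    so every orbit must keep crossing the discontinuity."""
    return 0.5 * x + 0.6 if x < 0.5 else 0.5 * x - 0.1

x, orbit = 0.0, []
for _ in range(200):
    x = toy_map(x)
    orbit.append(x)
# orbit converges to the 2-cycle {4/15, 11/15} solving
#   b = a/2 + 0.6,  a = b/2 - 0.1
```
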
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2015-11-01
The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by ones with a Mexican-hat-type activation function. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the 3^n locally stable equilibrium points obtained here are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Design of efficient stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Majumder, D. K.; Thornton, W. A.
1976-01-01
A method to produce efficient piecewise uniform stiffened shells of revolution is presented. The approach uses a first order differential equation formulation for the shell prebuckling and buckling analyses and the necessary conditions for an optimum design are derived by a variational approach. A variety of local yielding and buckling constraints and the general buckling constraint are included in the design process. The local constraints are treated by means of an interior penalty function and the general buckling load is treated by means of an exterior penalty function. This allows the general buckling constraint to be included in the design process only when it is violated. The self-adjoint nature of the prebuckling and buckling formulations is used to reduce the computational effort. Results for four conical shells and one spherical shell are given.
NASA Astrophysics Data System (ADS)
Westphal, T.; Nijssen, R. P. L.
2014-12-01
The effect of Constant Life Diagram (CLD) formulation on fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative of wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results, the prediction accuracy of four CLD formulations is investigated. In the study, a piecewise linear CLD based on the S-N curves for 9 load ratios compares favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study, Boerstra's Multislope model provides a good alternative at reduced test effort.
NASA Astrophysics Data System (ADS)
Greenough, J. A.; Rider, W. J.
2004-05-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's method (PLMDE) for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6.
If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 for the fixed computation cost on the test problems considered here.
Unsteady flows in rotor-stator cascades
NASA Astrophysics Data System (ADS)
Lee, Yu-Tai; Bein, Thomas W.; Feng, Jin Z.; Merkle, Charles L.
1991-03-01
A time-accurate potential-flow calculation method has been developed for unsteady incompressible flows through two-dimensional multi-blade-row linear cascades. The method represents the boundary surfaces by distributing piecewise linear-vortex and constant-source singularities on discrete panels. A local coordinate is assigned to each independently moving object. Blade-shed vorticity is traced at each time step. The unsteady Kutta condition applied is nonlinear and requires zero blade trailing-edge loading at each time. Its influence on the solutions depends on the blade trailing-edge shapes. Steady biplane and cascade solutions are presented and compared to exact solutions and experimental data. Unsteady solutions are validated with the Wagner function for an airfoil moving impulsively from rest and the Theodorsen function for an oscillating airfoil. The shed vortex motion and its interaction with blades are calculated and compared to an analytic solution. For multi-blade-row cascades, the potential effect between blade rows is predicted using steady and quasi-unsteady calculations. The accuracy of the predictions is demonstrated using experimental results for a one-stage turbine stator-rotor.
General solution of the Bagley-Torvik equation with fractional-order derivative
NASA Astrophysics Data System (ADS)
Wang, Z. H.; Wang, X.
2010-05-01
This paper investigates the general solution of the Bagley-Torvik equation with 1/2-order derivative or 3/2-order derivative. This fractional-order differential equation is changed into a sequential fractional-order differential equation (SFDE) with constant coefficients. Then the general solution of the SFDE is expressed as a linear combination of fundamental solutions that are in terms of α-exponential functions, a kind of function that plays the same role as the classical exponential function. Because the number of fundamental solutions of the SFDE is greater than 2, the general solution of the SFDE depends on more than two free (independent) constants. This paper shows that the general solution of the Bagley-Torvik equation actually involves only two free constants, and it can be determined fully by the initial displacement and initial velocity.
NASA Astrophysics Data System (ADS)
Aban, C. J. G.; Bacolod, R. O.; Confesor, M. N. P.
2015-06-01
The white noise path integral approach is used in evaluating the B-cell density, i.e. the number of B cells per unit volume, for a basic type of immune system response based on the modeling done by Perelson and Wiegel. From the scaling principles of Perelson [1], the B-cell density is obtained where antigens and antibodies mutate, and an activation function f(|S-SA|) is defined describing the interaction between a specific antigen and a B-cell. If the activation function f(|S-SA|) is held constant, the major form of the B-cell density evaluated using white noise analysis is similar to the form of the B-cell density obtained by Perelson and Wiegel using a differential approach. A piecewise linear function is also used to describe the activation f(|S-SA|). If f(|S-SA|) is zero, the density decreases exponentially. If f(|S-SA|) = S-SA-SB, the B-cell density increases exponentially until it reaches a certain maximum value. For f(|S-SA|) = 2SA-SB-S, the behavior of the B-cell density is oscillatory and remains at small values.
NASA Astrophysics Data System (ADS)
Yasui, Kyuichi; Mimura, Ken-ichi; Izu, Noriya; Kato, Kazumi
2018-03-01
The dielectric constant of an ordered assembly of BaTiO3 nanocubes is numerically calculated as a function of temperature assuming a distribution of tilt angles of attached nanocubes. As the phase transition temperature from the tetragonal crystal structure to the cubic crystal structure of a BaTiO3 nanocube decreases as the tilt angle increases, the temperature at the peak of the dielectric constant of an ordered assembly is considerably lower than the Curie temperature of a free-standing BaTiO3 crystal. The peak of the dielectric constant as a function of temperature for an ordered assembly becomes considerably broader than that for a single crystal owing to the contribution of nanocubes with various tilt angles.
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods
Cheng, Kung-Shan; Yuan, Yu; Li, Zhen; Stauffer, Paul R; Maccarini, Paolo; Joines, William T; Dewhirst, Mark W; Das, Shiva K
2009-04-07
In large multi-antenna systems, adaptive controllers can aid in steering the heat focus toward the tumor. However, the large number of sources can greatly increase the steering time. Additionally, controller performance can be degraded due to changes in tissue perfusion which vary non-linearly with temperature, as well as with time and spatial position. The current work investigates whether a reduced-order controller with the assumption of piecewise constant perfusion is robust to temperature-dependent perfusion and achieves steering in a shorter time than required by a full-order controller. The reduced-order controller assumes that the optimal heating setting lies in a subspace spanned by the best heating vectors (virtual sources) of an initial, approximate, patient model. An initial, approximate, reduced-order model is iteratively updated by the controller, using feedback thermal images, until convergence of the heat focus to the tumor. Numerical tests were conducted in a patient model with a right lower leg sarcoma, heated in a 10-antenna cylindrical mini-annular phased array applicator operating at 150 MHz. A half-Gaussian model was used to simulate temperature-dependent perfusion. Simulated magnetic resonance temperature images were used as feedback at each iteration step. Robustness was validated for the controller, starting from four approximate initial models: (1) a 'standard' constant perfusion lower leg model ('standard' implies a model that exactly models the patient with the exception that perfusion is considered constant, i.e., not temperature dependent), (2) a model with electrical and thermal tissue properties varied from 50% higher to 50% lower than the standard model, (3) a simplified constant perfusion pure-muscle lower leg model with +/-50% deviated properties and (4) a standard model with the tumor position in the leg shifted by 1.5 cm. Convergence to the desired focus of heating in the tumor was achieved for all four simulated models.
The controller accomplished satisfactory therapeutic outcomes: approximately 80% of the tumor was heated to temperatures 43 degrees C and approximately 93% was maintained at temperatures <41 degrees C. Compared to the controller without model reduction, an approximately 9-25-fold reduction in convergence time was accomplished using approximately 2-3 orthonormal virtual sources. In the situations tested, the controller was robust to the presence of temperature-dependent perfusion. The results of this work can help lay the foundation for real-time thermal control of multi-antenna hyperthermia systems in clinical situations where perfusion can change rapidly with temperature.
Microwave moisture sensing through use of a piecewise density-independent function
USDA-ARS?s Scientific Manuscript database
Microwave moisture sensing provides a means to determine nondestructively the amount of water in materials. This is accomplished through the correlation of dielectric properties with moisture in the material. In this study, linear relationships between a density-independent function of the dielectri...
Visibility graphs and symbolic dynamics
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Just, Wolfram
2018-07-01
Visibility algorithms are a family of geometric and ordering criteria by which a real-valued time series of N data is mapped into a graph of N nodes. This graph has been shown to often inherit in its topology nontrivial properties of the series structure, and can thus be seen as a combinatorial representation of a dynamical system. Here we explore in some detail the relation between visibility graphs and symbolic dynamics. To do that, we consider the degree sequence of horizontal visibility graphs generated by the one-parameter logistic map, for a range of values of the parameter for which the map shows chaotic behaviour. Numerically, we observe that in the chaotic region the block entropies of these sequences systematically converge to the Lyapunov exponent of the time series. Hence, Pesin's identity suggests that these block entropies are converging to the Kolmogorov-Sinai entropy of the physical measure, which ultimately suggests that the algorithm is implicitly and adaptively constructing phase space partitions which might have the generating property. To give analytical insight, we explore the relation k(x), x ∈ [0, 1], that, for a given datum with value x, assigns in graph space a node with degree k. In the case of the out-degree sequence, this relation is indeed a piecewise constant function. By making use of explicit methods and tools from symbolic dynamics we are able to analytically show that the algorithm indeed performs an effective partition of the phase space and that such a partition is naturally expressed as a countable union of subintervals, where the endpoints of each subinterval are related to the fixed point structure of the iterates of the map and the subinterval enumeration is associated with particular ordering structures that we called motifs.
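The horizontal visibility criterion itself is simple to state in code. Below is a brute-force O(n^2) sketch applied to an orbit of the fully chaotic logistic map (naive construction for illustration, not an optimized algorithm):

```python
def horizontal_visibility_graph(series):
    """Edge set of the horizontal visibility graph: data points i < j
    are linked iff every value strictly between them is lower than
    min(series[i], series[j]).  Consecutive points are always linked."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

def degree_sequence(series):
    """Node degrees of the horizontal visibility graph."""
    deg = [0] * len(series)
    for i, j in horizontal_visibility_graph(series):
        deg[i] += 1
        deg[j] += 1
    return deg

# degree sequence of an orbit of the logistic map at full chaos (r = 4)
x, orbit = 0.4, []
for _ in range(200):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)
degrees = degree_sequence(orbit)
```

The degree sequence `degrees` is the symbolic object whose block entropies the paper studies.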
Second order Method for Solving 3D Elasticity Equations with Complex Interfaces
Wang, Bao; Xia, Kelin; Wei, Guo-Wei
2015-01-01
Elastic materials are ubiquitous in nature and indispensable components in man-made devices and equipment. When a device or piece of equipment involves composite or multiple elastic materials, elasticity interface problems come into play. The solution of three-dimensional (3D) elasticity interface problems is significantly more difficult than that of elliptic counterparts due to the coupled vector components and cross derivatives in the governing elasticity equation. This work introduces the matched interface and boundary (MIB) method for solving 3D elasticity interface problems. The proposed MIB elasticity interface scheme utilizes fictitious values on irregular grid points near the material interface to replace function values in the discretization, so that the elasticity equation can be discretized using standard finite difference schemes as if there were no material interface. The interface jump conditions are rigorously enforced on the intersecting points between the interface and the mesh lines. Such an enforcement determines the fictitious values. A number of new techniques have been developed to construct efficient MIB elasticity interface schemes for dealing with cross derivatives in the coupled governing equations. The proposed method is extensively validated over both weak and strong discontinuity of the solution, both piecewise constant and position-dependent material parameters, both smooth and nonsmooth interface geometries, and both small and large contrasts in the Poisson's ratio and shear modulus across the interface. Numerical experiments indicate that the present MIB method is of second order convergence in both L∞ and L2 error norms for handling arbitrarily complex interfaces, including biomolecular surfaces. To the best of our knowledge, this is the first elasticity interface method able to deliver second order convergence for the molecular surfaces of proteins. PMID:25914422
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℜ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with the activation functions introduced in this paper can generate greater storage capacity than those with Mexican-hat-type activation functions. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
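The multistability mechanism above rests on a bounded, nonmonotonic piecewise-linear activation. A minimal sketch of such a function is below; the breakpoints and levels are illustrative assumptions, not the parameters of the cited paper.

```python
import numpy as np

# Hypothetical nonmonotonic piecewise-linear activation: rises, dips, rises.
# Breakpoints and levels are illustrative choices only.
BREAKS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
LEVELS = np.array([-1.0,  1.0, 0.0, 1.5, 1.0])

def activation(x):
    """Piecewise-linear interpolation through (BREAKS, LEVELS); np.interp
    clamps to the outer levels, so the function is bounded and nonmonotonic."""
    return np.interp(x, BREAKS, LEVELS)

# The extra up-down-up segments create additional intersections with lines
# of the form y = x, which is what permits many coexisting equilibria.
print(activation(np.array([-3.0, -1.0, 0.0, 1.0, 3.0])))
```

The dip between the two rising segments is what distinguishes this class from the monotone saturating activations commonly used, and is the source of the enlarged equilibrium count.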
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. By extending Krylov’s “shaking the coefficients” method to networks, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron’s characterization as the (unique) candidate viscosity solution.
NASA Astrophysics Data System (ADS)
Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu
2015-12-01
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the incident radiation, and these data are very useful for experimental work and for the development of solar devices. In addition, complete data records are needed for modeling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record frequently occur due to several technical problems, mainly attributable to the monitoring device. To counter this, missing values are estimated so that absent readings can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, namely linear, spline, cubic, and nearest-neighbor interpolation, for dealing with missing values in hourly solar radiation data. It then proposes, as an extension, an investigation of the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
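The imputation comparison described above can be sketched with the two simplest techniques, linear and nearest-neighbor interpolation, on a synthetic hourly solar curve (the Bezier and Said-Ball estimators studied in the paper are omitted here; the data and gap placement are assumptions for illustration).

```python
import numpy as np

def impute_linear(t, y, missing):
    """Fill missing indices by linear interpolation over observed samples."""
    obs = np.setdiff1d(np.arange(len(y)), missing)
    out = y.copy()
    out[missing] = np.interp(t[missing], t[obs], y[obs])
    return out

def impute_nearest(t, y, missing):
    """Fill missing indices with the value of the nearest observed sample."""
    obs = np.setdiff1d(np.arange(len(y)), missing)
    out = y.copy()
    for i in missing:
        out[i] = y[obs[np.argmin(np.abs(t[obs] - t[i]))]]
    return out

# Synthetic hourly "solar radiation" day: zero at night, sine-shaped by day.
t = np.arange(24.0)
true = np.clip(np.sin(np.pi * (t - 6.0) / 12.0), 0.0, None)
missing = np.array([7, 8, 9])            # a 3-hour sensor outage
y = true.copy(); y[missing] = np.nan

lin = impute_linear(t, y, missing)
near = impute_nearest(t, y, missing)
rmse = lambda a: np.sqrt(np.mean((a[missing] - true[missing]) ** 2))
print(rmse(lin), rmse(near))
```

On the smooth rising flank of the curve, linear interpolation beats nearest-neighbor, consistent with the paper's finding that smoother piecewise estimators fare better on this kind of data.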
Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared Region.
1979-12-31
exponent of the double exponential function were ’bumpy’ for some cases. Since the nature of the transmittance does not predict this behavior, we...T ,IS RECOMPUTED FOR THE ORIGIONAL DATA *USING THE PIECEWISE- ANALITICAL TRANSMISSION FUNCTION.’//20X, *’STANDARD DEVIATIONS BETWEEN THE ACTUAL TAU
Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared.
1980-04-01
predict this behavior, we conclude that the first method using linear function of x is accurate enough to be used in the actual application. The...PIECEWISE- ANALITICAL TRANSMISSION FUNCTION.’//20X, * ’STANDARD DEVIATIONS BETWEEN THE ACTUAL TAU AND THE RECOMPUTED’, * ’ TAU VALUES ARE COMPUTED.’////) 77
A Unified Theory for the Great Plains Nocturnal Low-Level Jet
NASA Astrophysics Data System (ADS)
Shapiro, A.; Fedorovich, E.; Rahimi, S.
2014-12-01
The nocturnal low-level jet (LLJ) is a warm-season atmospheric boundary layer phenomenon common to the Great Plains of the United States and other places worldwide, typically in regions east of mountain ranges. Low-level jets develop around sunset in fair weather conditions conducive to strong radiational cooling, reach peak intensity in the pre-dawn hours, and then dissipate with the onset of daytime convective mixing. In this study we consider the LLJ as a diurnal oscillation of a stably stratified atmosphere overlying a planar slope on the rotating Earth. The oscillations arise from diurnal cycles in both the heating of the slope (mechanism proposed by Holton in 1967) and the turbulent mixing (mechanism proposed by Blackadar in 1957). The governing equations are the equations of motion, incompressibility condition, and thermal energy in the Boussinesq approximation, with turbulent heat and momentum exchange parameterized through spatially constant but diurnally varying turbulent diffusion coefficients (diffusivities). Analytical solutions are obtained for diffusivities with piecewise constant waveforms (step-changes at sunrise and sunset) and slope temperatures/buoyancies with piecewise linear waveforms (saw-tooth function with minimum at sunrise and maximum before sunset). The jet characteristics are governed by eleven parameters: slope angle, Coriolis parameter, environmental buoyancy frequency, geostrophic wind strength, daytime and nighttime diffusivities, maximum (daytime) and minimum (nighttime) slope buoyancies, duration of daylight, lag time between peak slope buoyancy and sunset, and a Newtonian cooling time scale. An exploration of the parameter space yields results that are broadly consistent with findings particular to the Holton and Blackadar theories, and agree with climatological observations, for example, that stronger jets tend to occur over slopes of 0.15-0.25 degrees characteristic of the Great Plains. 
The solutions also yield intriguing predictions that peak jet strength increases with attenuation of the minimum surface buoyancy, and that the single most important parameter determining jet height is the nighttime diffusivity, with weaker nighttime diffusion associated with smaller jet heights. These and other highlights will be discussed in the presentation.
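The forcing waveforms described above can be written down directly; a minimal sketch follows, with all parameter values (daylight duration, lag, diffusivities, buoyancy extremes) chosen as illustrative assumptions rather than the study's fitted values.

```python
# Illustrative diurnal forcing waveforms of the kind used in the study:
# a piecewise-constant diffusivity with step changes at sunrise/sunset,
# and a saw-tooth slope buoyancy with its minimum at sunrise and its
# maximum LAG hours before sunset. Time t is in hours since sunrise.
DAY_HOURS = 14.0                 # duration of daylight (assumed)
LAG = 2.0                        # lag between peak buoyancy and sunset
K_DAY, K_NIGHT = 10.0, 1.0       # daytime/nighttime diffusivities (m^2/s)
B_MIN, B_MAX = -0.05, 0.05       # surface buoyancy extremes (m/s^2)

def diffusivity(t):
    """Piecewise-constant waveform: K_DAY during daylight, K_NIGHT at night."""
    return K_DAY if (t % 24.0) < DAY_HOURS else K_NIGHT

def buoyancy(t):
    """Saw-tooth: linear rise from sunrise to the peak, linear fall after."""
    t = t % 24.0
    t_peak = DAY_HOURS - LAG
    if t <= t_peak:
        return B_MIN + (B_MAX - B_MIN) * t / t_peak
    return B_MAX - (B_MAX - B_MIN) * (t - t_peak) / (24.0 - t_peak)

print(diffusivity(6.0), diffusivity(20.0), buoyancy(0.0), buoyancy(12.0))
```

The step change in the diffusivity at sunset is what triggers the inertial oscillation in the Blackadar mechanism, while the saw-tooth buoyancy drives the Holton slope-heating mechanism.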
A variational method for analyzing limit cycle oscillations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; MacLaurin, James
2018-06-01
Many systems in biology can be modeled through ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such systems are known as stochastic hybrid systems or piecewise deterministic Markov processes (PDMPs). In the fast switching limit, the dynamics converge to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ε^{-1}. That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ε).
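A PDMP of the kind described above can be simulated exactly by alternating exponential waiting times with closed-form ODE flows. The sketch below uses a deliberately simple two-state example (not the Morris-Lecar model): a telegraph process s ∈ {0, 1} driving dx/dt = s − x; all parameter values are assumptions.

```python
import math
import random

def simulate_pdmp(rate=50.0, T=20.0, x0=0.5, seed=1):
    """Minimal PDMP: s switches 0 <-> 1 at the given rate; between switches
    the ODE dx/dt = s - x is solved exactly: x(t + tau) = s + (x - s) e^-tau."""
    rng = random.Random(seed)
    t, x, s = 0.0, x0, 0
    ts, xs = [t], [x]
    while t < T:
        tau = rng.expovariate(rate)        # waiting time to the next switch
        x = s + (x - s) * math.exp(-tau)   # exact flow of dx/dt = s - x
        s = 1 - s                          # jump of the discrete state
        t += tau
        ts.append(t); xs.append(x)
    return ts, xs

ts, xs = simulate_pdmp()
mean_x = sum(xs) / len(xs)
print(len(xs), min(xs), max(xs), mean_x)
```

Because the rate (50 per unit time) is fast relative to the relaxation time of the ODE, the trajectory hovers near the fast-switching deterministic limit x = 1/2, illustrating the convergence the paper quantifies.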
A piecewise smooth model of evolutionary game for residential mobility and segregation
NASA Astrophysics Data System (ADS)
Radi, D.; Gardini, L.
2018-05-01
The paper proposes an evolutionary version of a Schelling-type dynamic system to model the patterns of residential segregation when two groups of people are involved. The payoff functions of agents are the individual preferences for integration which are empirically grounded. Differently from Schelling's model, where the limited levels of tolerance are the driving force of segregation, in the current setup agents benefit from integration. Despite the differences, the evolutionary model shows a dynamics of segregation that is qualitatively similar to the one of the classical Schelling's model: segregation is always a stable equilibrium, while equilibria of integration exist only for peculiar configurations of the payoff functions and their asymptotic stability is highly sensitive to parameter variations. Moreover, a rich variety of integrated dynamic behaviors can be observed. In particular, the dynamics of the evolutionary game is regulated by a one-dimensional piecewise smooth map with two kink points that is rigorously analyzed using techniques recently developed for piecewise smooth dynamical systems. The investigation reveals that when a stable internal equilibrium exists, the bimodal shape of the map leads to several different kinds of bifurcations, smooth, and border collision, in a complicated interplay. Our global analysis can give intuitions to be used by a social planner to maximize integration through social policies that manipulate people's preferences for integration.
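The one-dimensional map with two kink points that governs the dynamics can be mimicked with a simple piecewise-linear stand-in (the paper's map is smooth between its kinks; the breakpoints and values below are illustrative assumptions, not the calibrated payoff parameters).

```python
import numpy as np

# Hypothetical bimodal map on [0, 1] with two kink points at 0.3 and 0.7:
# increasing, then decreasing, then increasing again.
KINKS  = np.array([0.0, 0.3, 0.7, 1.0])
VALUES = np.array([0.1, 0.9, 0.2, 0.8])

def f(x):
    return np.interp(x, KINKS, VALUES)

def orbit(x0, n=1000):
    """Iterate the map n times from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(float(f(xs[-1])))
    return xs

xs = orbit(0.5)
print(min(xs), max(xs))   # the orbit is trapped in [0.1, 0.9]
```

Iterating such a bimodal map and varying the kink heights is the basic numerical experiment behind the border-collision bifurcation analysis mentioned in the abstract.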
Radial Basis Function Based Quadrature over Smooth Surfaces
2016-03-24
Radial basis functions φ(r): piecewise smooth (conditionally positive definite): monomial |r|^(2m+1); thin-plate spline (TPS) |r|^(2m) ln|r|; infinitely smooth...smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29
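The weight-construction idea behind RBF quadrature can be illustrated in one dimension: interpolate with radial kernels, then integrate the interpolant exactly, which reduces to solving a linear system for quadrature weights. The sketch below uses the infinitely smooth Gaussian kernel (whose moments have a closed form via the error function) rather than the piecewise-smooth kernels in the table; node count and shape parameter are assumptions.

```python
import math
import numpy as np

# 1D sketch of RBF-based quadrature on [0, 1] with Gaussian kernels.
eps, n = 10.0, 15
x = np.linspace(0.0, 1.0, n)

# Interpolation matrix A_ij = phi(|x_i - x_j|), phi(r) = exp(-(eps*r)^2).
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)

# Exact kernel moments I_j = \int_0^1 phi(|t - x_j|) dt via erf.
I = np.array([math.sqrt(math.pi) / (2 * eps)
              * (math.erf(eps * (1 - xj)) + math.erf(eps * xj)) for xj in x])

# Quadrature weights: integrate the RBF interpolant exactly.
# quad(f) = I^T A^{-1} f = w^T f with w solving A w = I (A is symmetric).
w = np.linalg.solve(A, I)
approx = w @ x**2               # integrate f(x) = x^2; exact value is 1/3
print(approx)
```

On a smooth surface the same construction applies with surface kernels and surface moments, which is the step the paper develops.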
A method of power analysis based on piecewise discrete Fourier transform
NASA Astrophysics Data System (ADS)
Xin, Miaomiao; Zhang, Yanchi; Xie, Da
2018-04-01
The paper analyzes existing feature extraction methods, in particular the characteristics of the discrete Fourier transform and piecewise aggregate approximation. Combining the advantages of the two, a new piecewise discrete Fourier transform is proposed and used to analyze the lighting power of a large customer. Time series feature maps for four different cases are compared across the original data, the discrete Fourier transform, piecewise aggregate approximation, and the piecewise discrete Fourier transform. The new method reflects both the overall trend of electricity consumption and its internal variations.
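The combination described above can be sketched directly: split the load series into segments and keep a few low-frequency DFT coefficients per segment. Keeping only the DC coefficient recovers plain piecewise aggregate approximation (segment means); keeping more coefficients adds the within-segment detail. The series and segment counts below are illustrative assumptions.

```python
import numpy as np

def piecewise_dft(y, n_seg, k):
    """Reconstruct y from the k lowest rfft coefficients of each segment.
    k = 1 keeps only the segment mean, i.e. plain PAA."""
    out = np.empty_like(y, dtype=float)
    for seg in np.array_split(np.arange(len(y)), n_seg):
        c = np.fft.rfft(y[seg])
        c[k:] = 0.0                        # truncate to k lowest frequencies
        out[seg] = np.fft.irfft(c, n=len(seg))
    return out

rng = np.random.default_rng(0)
t = np.arange(96)                          # e.g. one day at 15-min intervals
y = 5 + np.sin(2 * np.pi * t / 96) + 0.3 * rng.standard_normal(96)

paa  = piecewise_dft(y, 8, 1)              # segment means only
pdft = piecewise_dft(y, 8, 3)              # means + two harmonics
err = lambda a: np.sqrt(np.mean((a - y) ** 2))
print(err(paa), err(pdft))
```

Since truncation is an orthogonal projection and the k = 3 subspace contains the constants, the piecewise-DFT reconstruction error can never exceed the PAA error, matching the paper's claim of capturing trend and internal variation simultaneously.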
Hardware Neural Network for a Visual Inspection System
NASA Astrophysics Data System (ADS)
Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji
The visual inspection of defects in products is heavily dependent on human experience and instinct. In this situation, it is difficult to reduce the production costs and to shorten the inspection time and hence the total process time. Consequently people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network, which is expected to provide high-speed operation for automatic inspection of products. Since neural networks can learn, this is a suitable method for self-adjustment of criteria for classification. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise linear function instead of a conventional activation function in order to save hardware resources. Consequently, our proposed hardware neural network achieved 6GCPS and 2GCUPS, which in our test sample proved to be sufficiently fast.
Neighboring Optimal Aircraft Guidance in a General Wind Environment
NASA Technical Reports Server (NTRS)
Jardin, Matthew R. (Inventor)
2003-01-01
Method and system for determining an optimal route for an aircraft moving between first and second waypoints in a general wind environment. A selected first wind environment is analyzed for which a nominal solution can be determined. A second wind environment is then incorporated; and a neighboring optimal control (NOC) analysis is performed to estimate an optimal route for the second wind environment. In particular examples with flight distances of 2500 and 6000 nautical miles in the presence of constant or piecewise linearly varying winds, the difference in flight time between a nominal solution and an optimal solution is 3.4 to 5 percent. Constant or variable winds and aircraft speeds can be used. Updated second wind environment information can be provided and used to obtain an updated optimal route.
Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas
2017-03-01
Recruiting patients is a crucial step of a clinical trial, and estimation of the trial duration is a question of paramount interest. Most techniques are based on deterministic models and various ad hoc methods that neglect the variability of the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced, in which each centre's recruitment is modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, however, rates are rarely constant in time; there are breaks in recruitment (for instance, week-ends or holidays). Such information can be collected and included in a model with piecewise constant rate functions, yielding an inhomogeneous Cox model, for which estimation of the trial duration is much more difficult. Three strategies for computing the expected trial duration are proposed: considering all the breaks, considering only large breaks, and ignoring breaks. The bias of these estimation procedures is assessed by means of simulation studies under three break-simulation scenarios. All strategies yield estimates with a very small bias. Moreover, the strategy with the best predictive performance and the smallest bias is the one that does not take breaks into account. This result is important because, in practice, collecting break data is hard to manage.
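The deterministic core of the duration calculation can be sketched as follows: with a piecewise-constant daily rate that drops to zero during breaks, the expected number of recruits is the integrated rate, and the expected duration is its inverse at the recruitment target. The rates and break pattern below are illustrative assumptions, not the paper's fitted Poisson-gamma model.

```python
def duration(target, rate_of_day):
    """Smallest continuous t (in days) with integrated rate >= target,
    for a rate that is constant within each day (possibly zero on breaks)."""
    t, acc, day = 0.0, 0.0, 0
    while True:
        r = rate_of_day(day)
        if r > 0 and acc + r >= target:
            return day + (target - acc) / r   # finish partway through the day
        acc += r
        day += 1

# One recruit/day on weekdays, none on week-ends, versus no breaks at all.
weekday_rate = lambda d: 1.0 if d % 7 < 5 else 0.0
const_rate   = lambda d: 1.0

print(duration(100, const_rate), duration(100, weekday_rate))
```

With week-end breaks the same 100-patient target takes 138 days instead of 100, which is the size of effect the three estimation strategies in the paper have to account for (or, as it turns out, can often safely ignore at the modelling stage).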
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
NASA Astrophysics Data System (ADS)
Barucq, H.; Bendali, A.; Fares, M.; Mattesi, V.; Tordeux, S.
2017-02-01
A general symmetric Trefftz Discontinuous Galerkin method is built for solving the Helmholtz equation with piecewise constant coefficients. The construction of the corresponding local solutions to the Helmholtz equation is based on a boundary element method. A series of numerical experiments displays an excellent stability of the method relatively to the penalty parameters, and more importantly its outstanding ability to reduce the instabilities known as the "pollution effect" in the literature on numerical simulations of long-range wave propagation.
The Stiffness Variation of a Micro-Ring Driven by a Traveling Piecewise-Electrode
Li, Yingjie; Yu, Tao; Hu, Yuh-Chung
2014-01-01
In the practice of electrostatically actuated micro devices, the electrostatic force is implemented by sequentially actuated piecewise-electrodes, which result in a traveling distributed electrostatic force. However, such a force has been modeled as a traveling concentrated electrostatic force in the literature. This article, for the first time, presents an analytical study on the stiffness variation of microstructures driven by a traveling piecewise electrode. The analytical model is based on the theory of shallow shells and a uniform electrical field. The traveling electrode not only applies electrostatic force on the circular ring but also alters its dynamical characteristics via the negative electrostatic stiffness. It is known that, when a structure is subjected to a traveling constant force, its natural modes will be resonated as the traveling speed approaches certain critical speeds, and each natural mode refers to exactly one critical speed. However, for the case of a traveling electrostatic force, the number of critical speeds is greater than that of the natural modes. This is due to the fact that the traveling electrostatic force makes the resonant frequencies of the forward and backward traveling waves of the circular ring different. Furthermore, the resonance and stability can be independently controlled by the length of the traveling electrode, although the driving voltage and traveling speed of the electrostatic force alter the dynamics and stabilities of microstructures. This paper extends the fundamental insights into the electromechanical behavior of microstructures driven by electrostatic forces as well as the future development of MEMS/NEMS devices with electrostatic actuation and sensing. PMID:25230308
NASA Technical Reports Server (NTRS)
Kvernadze, George; Hagstrom,Thomas; Shapiro, Henry
1997-01-01
A key step for some methods dealing with the reconstruction of a function with jump discontinuities is the accurate approximation of the jumps and their locations. Various methods have been suggested in the literature to obtain this valuable information. In the present paper, we develop an algorithm based on identities which determine the jumps of a 2π-periodic bounded not-too-highly oscillating function by the partial sums of its differentiated Fourier series. The algorithm enables one to approximate the locations of discontinuities and the magnitudes of jumps of a bounded function. We study the accuracy of approximation and establish asymptotic expansions for the approximations of a 2π-periodic piecewise smooth function with one discontinuity. By an appropriate linear combination, obtained via derivatives of different order, we significantly improve the accuracy. Next, we use Richardson's extrapolation method to enhance the accuracy even more. For a function with multiple discontinuities we establish simple formulae which "eliminate" all discontinuities of the function but one. Then we treat the function as if it had one singularity following the method described above.
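The core observation, that differentiated Fourier partial sums concentrate at jumps, can be demonstrated with a discrete stand-in: the DFT spectral derivative of a sampled 2π-periodic sawtooth spikes at the discontinuity, so its largest-magnitude sample flags the jump location (the paper's identities additionally recover the jump magnitude and refine the location; those steps are omitted here).

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
f = x.copy()                       # sawtooth: jump of -2*pi at x = 0 (mod 2*pi)

c = np.fft.fft(f)
k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers 0..N/2-1, -N/2..-1
df = np.real(np.fft.ifft(1j * k * c))   # differentiated partial sum at samples

j = int(np.argmax(np.abs(df)))
print(j, x[j])                     # index and location of the detected jump
```

Away from the jump the differentiated partial sum oscillates with O(1) amplitude rather than converging, which is precisely why the paper's linear combinations and Richardson extrapolation are needed to turn this crude peak detector into an accurate estimator.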
NASA Astrophysics Data System (ADS)
Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.
2017-09-01
This paper introduces a novel global patch matching method that focuses on removing fronto-parallel bias and obtaining continuous smooth surfaces, assuming that the scenes covered by the stereo pairs are piecewise continuous. Firstly, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then a global energy function, consisting of a data term and a smoothness term, is built on the patches. The data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term combines connectivity constraints and coplanarity constraints. Finally, we rewrite the global energy function as a quadratic matrix function and use least squares to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereo pairs of the Middlebury benchmark show that the proposed method removes fronto-parallel bias effectively and produces continuous smooth surfaces.
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
A variable capacitance based modeling and power capability predicting method for ultracapacitor
NASA Astrophysics Data System (ADS)
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Methods for accurate modeling and power capability prediction of ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from constant-capacitance models, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is demonstrated under various test conditions. Additionally, power capability prediction results at different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
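A variable-capacitance model with a piecewise-linear C(V) makes the stored charge, and hence the state of charge, available in closed form by integrating C over voltage. The sketch below uses an assumed two-segment C(V) (linear rise up to a knee, constant above it); the numbers are illustrative, not the paper's identified parameters.

```python
# Hypothetical piecewise-linear capacitance: C rises with voltage up to a
# knee voltage and is constant above it (values are illustrative).
C0, K, V_KNEE = 100.0, 20.0, 2.0    # F, F/V, V

def capacitance(v):
    return C0 + K * min(v, V_KNEE)

def stored_charge(v):
    """Q(v) = integral of C(u) du from 0 to v, evaluated segment by segment."""
    if v <= V_KNEE:
        return C0 * v + 0.5 * K * v ** 2
    q_knee = C0 * V_KNEE + 0.5 * K * V_KNEE ** 2
    return q_knee + capacitance(V_KNEE) * (v - V_KNEE)

def soc(v, v_max=3.0):
    """State of charge relative to a maximum operating voltage."""
    return stored_charge(v) / stored_charge(v_max)

print(capacitance(3.0), stored_charge(3.0), soc(1.5))
```

Because Q(v) is quadratic on the rising segment and affine above the knee, the voltage-to-charge map stays invertible, which is what makes the closed-form state-of-charge calculation workable inside an observer.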
Functional Data Approximation on Bounded Domains using Polygonal Finite Elements.
Cao, Juan; Xiao, Yanyang; Chen, Zhonggui; Wang, Wenping; Bajaj, Chandrajit
2018-07-01
We construct and analyze piecewise approximations of functional data on arbitrary 2D bounded domains using generalized barycentric finite elements, and particularly quadratic serendipity elements for planar polygons. We compare approximation qualities (precision/convergence) of these partition-of-unity finite elements through numerical experiments, using Wachspress coordinates, natural neighbor coordinates, Poisson coordinates, mean value coordinates, and quadratic serendipity bases over polygonal meshes on the domain. For a convex n-sided polygon, the quadratic serendipity elements have 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, rather than the usual n(n+1)/2 basis functions, to achieve quadratic convergence. Two greedy algorithms are proposed to generate Voronoi meshes for adaptive functional/scattered data approximations. Experimental results show space/accuracy advantages for these quadratic serendipity finite elements on polygonal domains versus traditional finite elements over simplicial meshes. Polygonal meshes and parameter coefficients of the quadratic serendipity finite elements obtained by our greedy algorithms can be further refined using an L2-optimization to improve the piecewise functional approximation. We conduct several experiments to demonstrate the efficacy of our algorithm for modeling features/discontinuities in functional data/image approximation.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
Multivariate Spline Algorithms for CAGD
NASA Technical Reports Server (NTRS)
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points when an additional knot is inserted.
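In the univariate (simplex spline, s = 1) case, the Mansfield-like recursion described above is the classical Cox-de Boor recursion: each B-spline is an x-dependent combination of two B-splines of one order lower, bottoming out at piecewise constants. A minimal sketch over a uniform knot vector:

```python
def bspline_basis(i, p, x, knots):
    """Cox-de Boor recursion for N_{i,p}(x) over the given knot vector.
    Degree-0 (order-1) B-splines are piecewise constants; higher degrees
    combine two B-splines of one degree lower with x-dependent weights."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, x, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, x, knots)
    return left + right

knots = [0, 1, 2, 3, 4, 5, 6]      # uniform knot vector
vals = [bspline_basis(i, 2, 3.0, knots) for i in range(4)]
print(vals, sum(vals))             # basis values sum to 1 inside the domain
```

The partition-of-unity property visible in the output is what makes linear combinations of B-splines with control-point coefficients affine-invariant.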
Piecewise-homotopy analysis method (P-HAM) for first order nonlinear ODE
NASA Astrophysics Data System (ADS)
Chin, F. Y.; Lem, K. H.; Chong, F. S.
2013-09-01
In the homotopy analysis method (HAM), the value of the auxiliary parameter h is determined from the valid region of the h-curve: the horizontal segment of the h-curve delimits the valid h-region. Any h-value taken from the valid region will, provided the order of deformation is large enough, in principle yield an approximation series that converges to the exact solution. However, an h-value chosen within this valid region does not always give a good approximation at finite order. This paper suggests an improved method called piecewise HAM (P-HAM). Instead of a single h-value, this method uses many h-values, each obtained from an individual h-curve plotted with the time t fixed at a different value. Each h-value produces a good approximation only in a neighborhood centered at the corresponding t on which its h-curve is based. These segments of good approximations are then joined to form the approximation curve, further enlarging the convergence region. The P-HAM is illustrated and supported by examples.
A generalized analog implementation of piecewise linear neuron models using CCII building blocks.
Soleimani, Hamid; Ahmadi, Arash; Bavandpour, Mohammad; Sharifipoor, Ozra
2014-03-01
This paper presents a set of reconfigurable analog implementations of piecewise linear spiking neuron models using second generation current conveyor (CCII) building blocks. With the same topology and circuit elements, and without W/L modification, which is impossible after circuit fabrication, these circuits can produce different behaviors, similar to those of biological neurons, both for a single neuron and for a network of neurons, simply by tuning reference current and voltage sources. The models are investigated in terms of analog implementation feasibility and cost, targeting large-scale hardware implementations. Results show that performance, area, and accuracy can be traded off among these models to obtain the best compromise. Simulation results are presented for different neuron behaviors with CMOS 350 nm technology. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy
2013-11-01
We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses and comparison with a Potts prior-based approach and our previous contribution on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, and furthermore allows the trade-off to be controlled. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process by up to an order of magnitude without compromising the quality of the result.
Slope Estimation in Noisy Piecewise Linear Functions.
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2015-03-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though the possible range of slope values is assumed to lie within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters, and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
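The dynamic-programming search over discretized slopes described above can be sketched as a small Viterbi-style recursion. This is a minimal illustration of the general idea, not the authors' MAPSlope implementation; the noise level, slope grid, and switching penalty below are assumed values.

```python
import numpy as np

def map_slopes(y, dx, slope_grid, sigma=0.1, switch_penalty=4.0):
    """Viterbi-style MAP estimate of a piecewise-constant slope sequence.

    y              -- noisy samples of a piecewise linear function
    dx             -- spacing between samples
    slope_grid     -- discretized set of candidate slopes
    switch_penalty -- cost (negative log prior) for a slope change
    """
    dy = np.diff(y)                       # per-step increments
    grid = np.asarray(slope_grid, float)
    K, N = len(grid), len(dy)
    # emission cost: squared error of each increment vs. each candidate slope
    cost = (dy[None, :] - grid[:, None] * dx) ** 2 / (2 * sigma**2)
    score = cost[:, 0].copy()
    back = np.zeros((K, N), dtype=int)
    for t in range(1, N):
        stay = score                      # stay on the same slope...
        best_prev = stay.min() + switch_penalty   # ...or pay to switch
        back[:, t] = np.where(stay <= best_prev, np.arange(K), stay.argmin())
        score = np.minimum(stay, best_prev) + cost[:, t]
    # backtrack the optimal slope path
    path = np.empty(N, dtype=int)
    path[-1] = score.argmin()
    for t in range(N - 1, 0, -1):
        path[t - 1] = back[path[t], t]
    return grid[path]
```

Raising `switch_penalty` corresponds to a stronger prior against breakpoints and yields fewer slope changes.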
Transformations based on continuous piecewise-affine velocity fields
Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...
2017-01-11
Here, we propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.
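The core construction, obtaining a well-behaved transformation by integrating a continuous piecewise-affine velocity field, can be illustrated in one dimension. This is a minimal sketch, not the authors' GPU implementation; the knot positions, velocity values, and RK4 step count are illustrative assumptions.

```python
import numpy as np

def make_velocity(knots, values):
    """Continuous piecewise-affine velocity field v(x) via linear interpolation."""
    def v(x):
        return np.interp(x, knots, values)
    return v

def integrate(v, x0, T=1.0, n_steps=200):
    """Integrate dx/dt = v(x) with RK4; the flow map x0 -> x(T) is the transformation."""
    x = np.asarray(x0, dtype=float)
    h = T / n_steps
    for _ in range(n_steps):
        k1 = v(x)
        k2 = v(x + 0.5 * h * k1)
        k3 = v(x + 0.5 * h * k2)
        k4 = v(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# A velocity field vanishing at the boundary keeps [0, 1] invariant.
v = make_velocity([0.0, 0.25, 0.5, 0.75, 1.0], [0.0, 0.4, -0.2, 0.3, 0.0])
grid = np.linspace(0.0, 1.0, 11)
warped = integrate(v, grid)
```

Because trajectories of a Lipschitz one-dimensional ODE cannot cross, the resulting map is monotonic (a diffeomorphism of the interval), which is what makes such spaces attractive for monotonic regression and time-warping.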
Nie, Xiaobing; Zheng, Wei Xing
2015-05-01
This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions ensuring that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Locomotion of C. elegans: A Piecewise-Harmonic Curvature Representation of Nematode Behavior
Padmanabhan, Venkat; Khan, Zeina S.; Solomon, Deepak E.; Armstrong, Andrew; Rumbaugh, Kendra P.; Vanapalli, Siva A.; Blawzdziewicz, Jerzy
2012-01-01
Caenorhabditis elegans, a free-living soil nematode, displays a rich variety of body shapes and trajectories during its undulatory locomotion in complex environments. Here we show that the individual body postures and entire trails of C. elegans have a simple analytical description in curvature representation. Our model is based on the assumption that the curvature wave is generated in the head segment of the worm body and propagates backwards. We have found that a simple harmonic function for the curvature can capture multiple worm shapes during the undulatory movement. The worm body trajectories can be well represented in terms of piecewise sinusoidal curvature with abrupt changes in amplitude, wavevector, and phase. PMID:22792224
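The curvature representation above lends itself to a compact reconstruction: given κ(s), integrate once for the tangent angle and again for the body coordinates. This is a minimal planar sketch; the amplitude and wavevector values are illustrative, not fitted worm parameters.

```python
import numpy as np

def shape_from_curvature(kappa, ds):
    """Reconstruct a planar curve from sampled curvature.

    Heading: theta(s) = integral of kappa ds;
    position: integrate the unit tangent (cos theta, sin theta).
    """
    theta = np.concatenate([[0.0], np.cumsum(kappa[:-1] * ds)])
    x = np.concatenate([[0.0], np.cumsum(np.cos(theta[:-1]) * ds)])
    y = np.concatenate([[0.0], np.cumsum(np.sin(theta[:-1]) * ds)])
    return x, y

# Harmonic curvature along a unit-length body, as in the curvature representation:
s = np.linspace(0.0, 1.0, 201)
ds = s[1] - s[0]
kappa = 6.0 * np.sin(2 * np.pi * 1.5 * s)   # amplitude and wavevector are assumed
x, y = shape_from_curvature(kappa, ds)
```

Abrupt changes in amplitude, wavevector, or phase of `kappa`, as in the piecewise-harmonic trail description, simply switch the harmonic parameters at the breakpoints before integrating.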
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-18
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model, using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best-fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p ≥ 10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p < 10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
Effect of speed matching on fundamental diagram of pedestrian flow
NASA Astrophysics Data System (ADS)
Fu, Zhijian; Luo, Lin; Yang, Yue; Zhuang, Yifan; Zhang, Peitong; Yang, Lizhong; Yang, Hongtai; Ma, Jian; Zhu, Kongjin; Li, Yanlai
2016-09-01
Properties of pedestrians may change along their moving path, for example as a result of fatigue or injury, an effect that has not been properly investigated in past research. This paper studies the speed matching effect (a pedestrian constantly adjusts his velocity to the average velocity of his neighbors) and its influence on the density-velocity relationship (a pedestrian adjusts his velocity to the surrounding density), known as the fundamental diagram of pedestrian flow. Simulation results obtained with a cellular automaton fit the empirical data well, indicating the suitability of discrete models for pedestrian dynamics. The results suggest that the system velocity and flow rate increase markedly under a large noise, i.e., a diverse composition of the pedestrian crowd, especially in the middle- and high-density regions. Because of its temporary effect, speed matching has little influence on the fundamental diagram. Over the entire density range, the relationship between step length and average pedestrian velocity is a piecewise function combining two linear functions. The number of conflicts reaches its maximum at a pedestrian density of 2.5 m^-2 and decreases by 5.1% with speed matching.
Optimal control of parametric oscillations of compressed flexible bars
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
In this paper the problem of damping the oscillations of linear systems with piecewise-constant control is solved. The motion of the bar construction is reduced to a form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.
High resolution A/D conversion based on piecewise conversion at lower resolution
Terwilliger, Steve [Albuquerque, NM
2012-06-05
Piecewise conversion of an analog input signal is performed utilizing a plurality of relatively lower bit resolution A/D conversions. The results of this piecewise conversion are interpreted to achieve a relatively higher bit resolution A/D conversion without sampling frequency penalty.
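The idea of assembling a higher-resolution result from several lower-resolution conversions can be sketched as a two-stage (subranging) conversion in software. This is a simplified illustration of piecewise conversion, not the patented circuit; the bit widths and reference voltage are assumed.

```python
def piecewise_adc(v, vref=1.0, coarse_bits=3, fine_bits=3):
    """Two-stage piecewise A/D conversion.

    A coarse low-resolution conversion selects a subrange of [0, vref);
    a second low-resolution conversion resolves the residue within it.
    The combined code has coarse_bits + fine_bits of resolution.
    """
    coarse = min(int(v / vref * 2**coarse_bits), 2**coarse_bits - 1)
    lo = coarse * vref / 2**coarse_bits            # bottom of selected subrange
    step = vref / 2**(coarse_bits + fine_bits)     # fine LSB size
    fine = min(int((v - lo) / step), 2**fine_bits - 1)
    return (coarse << fine_bits) | fine
```

Here two 3-bit conversions combine into a 6-bit code, which is the sense in which resolution is gained without a sampling-frequency penalty.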
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2010-01-01
The Refined Zigzag Theory (RZT) for homogeneous, laminated composite, and sandwich plates is presented from a multi-scale formalism starting with the in-plane displacement field expressed as a superposition of coarse and fine contributions. The coarse kinematic field is that of first-order shear-deformation theory, whereas the fine kinematic field has a piecewise-linear zigzag distribution through the thickness. The condition of limiting homogeneity of transverse-shear properties is proposed and yields four distinct sets of zigzag functions. By examining elastostatic solutions for highly heterogeneous sandwich plates, the best-performing zigzag functions are identified. The RZT predictive capabilities to model homogeneous and highly heterogeneous sandwich plates are critically assessed, demonstrating its superior efficiency and accuracy and its wide range of applicability. The present theory, which is derived from the virtual work principle, is well-suited for developing computationally efficient C0-continuous finite elements, and is thus appropriate for the analysis and design of high-performance load-bearing aerospace structures.
Mitigation of epidemics in contact networks through optimal contact adaptation
Youssef, Mina; Scoglio, Caterina
2013-01-01
This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the trade-off between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results provide insight into the infection level at which the mitigation strategies are effectively applied to the contact weights. PMID:23906209
Construction of Covariance Functions with Variable Length Fields
NASA Technical Reports Server (NTRS)
Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven
2005-01-01
This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application, where a multi-level covariance with a large troposphere-to-stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single- and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than the baseline GEOS-4 system.
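The single-level piecewise rational case with two continuous derivatives at zero separation corresponds to the widely used fifth-order compactly supported correlation function of Gaspari and Cohn. A sketch of the single-level function follows (the multi-level extension with a variable length field is not reproduced here):

```python
import numpy as np

def gaspari_cohn(z, c):
    """Fifth-order piecewise rational, compactly supported correlation function.

    Support is |z| <= 2c; value 1 at z = 0; two continuous derivatives at zero
    separation. `z` is an array of separations, `c` the length scale.
    """
    r = np.abs(np.asarray(z, dtype=float)) / c
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri = r[inner]
    out[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                  - 5.0 / 3.0 * ri**2 + 1.0)
    ro = r[outer]
    out[outer] = (ro**5 / 12.0 - 0.5 * ro**4 + 0.625 * ro**3
                  + 5.0 / 3.0 * ro**2 - 5.0 * ro + 4.0 - 2.0 / (3.0 * ro))
    return out
```

The function vanishes identically beyond two length scales, which is what makes piecewise rational forms attractive for covariance modeling: correlations are exactly zero at large separation rather than merely small.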
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001); however, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
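A minimal numerical illustration of the issue (not the authors' relative-survival analysis; the group structure, rates, and negative-binomial parameters are made up): fit a piecewise-constant Poisson rate and compute the Pearson dispersion statistic, which sits near 1 for genuinely Poisson counts and well above 1 for overdispersed ones.

```python
import numpy as np

def pearson_dispersion(counts, groups):
    """Pearson dispersion for a piecewise-constant (one rate per group) Poisson model.

    Under a correct Poisson model the statistic is near 1;
    values well above 1 indicate overdispersion.
    """
    counts = np.asarray(counts, float)
    groups = np.asarray(groups)
    fitted = np.empty_like(counts)
    n_params = 0
    for g in np.unique(groups):
        m = groups == g
        fitted[m] = counts[m].mean()   # MLE of the Poisson rate in this piece
        n_params += 1
    pearson = ((counts - fitted) ** 2 / fitted).sum()
    return pearson / (len(counts) - n_params)

rng = np.random.default_rng(1)
groups = np.repeat([0, 1, 2], 300)
rates = np.array([5.0, 12.0, 8.0])[groups]
poisson_counts = rng.poisson(rates)
# negative-binomial counts with the same means but extra variance
overdispersed = rng.negative_binomial(n=2, p=2 / (2 + rates))
```

When the statistic is well above 1, quasi-likelihood scaling, robust standard errors, or a negative binomial model, as discussed above, are the usual corrections.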
Theoretical analysis of nonuniform skin effects on drawdown variation
NASA Astrophysics Data System (ADS)
Chen, C.-S.; Chang, C. C.; Lee, M. S.
2003-04-01
Under field conditions, the skin zone surrounding the well screen is rarely uniformly distributed in the vertical direction. To understand such non-uniform skin effects on drawdown variation, we assume the skin factor to be an arbitrary, continuous or piecewise continuous function S_k(z), and incorporate it into a well hydraulics model for constant-rate pumping in a homogeneous, vertically anisotropic, confined aquifer. Solutions of depth-specific drawdown and vertical average drawdown are determined by using the Gram-Schmidt method. The non-uniform effects of S_k(z) in vertical average drawdown are averaged out and can be represented by a constant skin factor S_k. As a result, drawdown of fully penetrating observation wells can be analyzed by appropriate well hydraulics theories assuming a constant skin factor. S_k is the vertical average value of S_k(z) weighted by the well bore flux q_w(z). In depth-specific drawdown, however, the non-uniform effects of S_k(z) vary with radial and vertical distances, which are under the influence of the vertical profile of S_k(z) and the vertical anisotropy ratio K_r/K_z. Therefore, drawdown of partially penetrating observation wells may reflect the vertical anisotropy as well as the non-uniformity of the skin zone. The method of determining S_k(z) developed herein involves the use of q_w(z), which can be measured with a borehole flowmeter, and of K_r/K_z and S_k, which can be determined by conventional pumping tests.
NASA Technical Reports Server (NTRS)
Erickson, Gary E.; Deloach, Richard
2008-01-01
A collection of statistical and mathematical techniques referred to as response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration using data obtained on small-scale models at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. The simulated Mach 3 staging was dominated by multiple shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. This motivated a partitioning of the overall inference space into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using cuboidal and spherical central composite designs capable of fitting full second-order response functions. The primary goal was to approximate the underlying overall aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle using relatively simple, lower-order polynomial functions that were piecewise-continuous across the full independent variable ranges of interest. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. The potential benefits of augmenting the central composite designs to full third order using computer-generated D-optimality criteria were also evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting low-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework for a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
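The construction can be sketched for a discrete interevent-time distribution: partition [0,1) into cells whose widths are the interevent-time probabilities, advance one cell per step, and reinject uniformly from the top cell. This is a minimal finite-state illustration under an assumed pmf; the paper's case of power-law tails corresponds to infinitely many cells accumulating at the indifferent fixed point.

```python
import numpy as np

def step(x, s):
    """One iteration of the piecewise linear map built from survival values s.

    s[k] = P(T > k), so cell C_k = [s[k], s[k-1]) holds points that are k steps
    from the next event. Each step advances one cell; the top cell reinjects.
    """
    K = len(s) - 1
    for k in range(1, K + 1):            # locate the cell containing x
        if x >= s[k]:
            break
    if k == 1:                           # event: expanding reinjection onto [0, 1)
        return (x - s[1]) / (1.0 - s[1])
    w_k = s[k - 1] - s[k]                # affine map C_k -> C_{k-1}
    w_prev = s[k - 2] - s[k - 1]
    return s[k - 1] + (x - s[k]) * (w_prev / w_k)

def interevent_times(pmf, x0=0.37, n_events=5000):
    """Gaps between visits to the top cell reproduce the interevent-time pmf."""
    pmf = np.asarray(pmf, float)
    s = np.concatenate([[1.0], 1.0 - np.cumsum(pmf)])
    s[-1] = 0.0
    x, gaps, t_last, t = x0, [], None, 0
    while len(gaps) < n_events:
        if x >= s[1]:                    # visit to the top cell = an event
            if t_last is not None:
                gaps.append(t - t_last)
            t_last = t
        x = step(x, s)
        t += 1
    return np.array(gaps)

gaps = interevent_times([0.5, 0.3, 0.2])   # assumed interevent-time pmf on {1,2,3}
```

By Kac's lemma the mean return time to the top cell equals the mean of the interevent-time distribution, here 0.5·1 + 0.3·2 + 0.2·3 = 1.7.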
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
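A per-pixel third-order polynomial correction of the kind benchmarked above can be sketched in a few lines. This is a simplified simulation with made-up gain, offset, and nonlinearity values, not the study's SWIR camera data or its hardware pipeline.

```python
import numpy as np

def fit_nuc(calib_stack, targets, order=3):
    """Per-pixel polynomial NUC coefficients.

    calib_stack : (n_levels, H, W) raw responses to uniform scenes
    targets     : (n_levels,) desired corrected output at each flux level
    """
    n, h, w = calib_stack.shape
    flat = calib_stack.reshape(n, -1)
    coeffs = np.empty((order + 1, h * w))
    for p in range(h * w):
        coeffs[:, p] = np.polyfit(flat[:, p], targets, order)
    return coeffs.reshape(order + 1, h, w)

def apply_nuc(frame, coeffs):
    out = np.zeros_like(frame, dtype=float)
    for c in coeffs:                   # Horner evaluation, highest degree first
        out = out * frame + c
    return out

# Synthetic detector: per-pixel gain, offset, and mild quadratic nonlinearity.
rng = np.random.default_rng(2)
h = w = 8
gain = rng.uniform(0.8, 1.2, (h, w))
offset = rng.uniform(-0.1, 0.1, (h, w))
quad = rng.uniform(-0.05, 0.05, (h, w))
levels = np.linspace(0.05, 1.0, 6)     # calibration flux levels (assumed)
stack = np.stack([gain * L + quad * L**2 + offset for L in levels])
coeffs = fit_nuc(stack, levels)
corrected = apply_nuc(gain * 0.6 + quad * 0.36 + offset, coeffs)
```

After correction the response to a uniform scene is nearly flat across pixels; in practice, as the abstract notes, coefficient precision must be traded against storage and hardware cost.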
Variable horizon in a peridynamic medium
Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo
2015-12-10
Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.
Nonlinear Dynamics of Turbulent Thermals in Shear Flow
NASA Astrophysics Data System (ADS)
Ingel, L. Kh.
2018-03-01
The nonlinear integral model of a turbulent thermal is extended to the case of the horizontal component of its motion relative to the medium (e.g., thermal floating-up in shear flow). In contrast to traditional models, the possibility of a heat source in the thermal is taken into account. For a piecewise constant vertical profile of the horizontal velocity of the medium and a constant vertical velocity shear, analytical solutions are obtained which describe different modes of dynamics of thermals. The nonlinear interaction between the horizontal and vertical components of thermal motion is studied because each of the components influences the rate of entrainment of the surrounding medium, i.e., the growth rate of the thermal size and, hence, its mobility. It is shown that the enhancement of the entrainment of the medium due to the interaction between the thermal and the cross flow can lead to a significant decrease in the mobility of the thermal.
The estimation of material and patch parameters in a PDE-based circular plate model
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.
1995-01-01
The estimation of material and patch parameters for a system involving a circular plate, to which piezoceramic patches are bonded, is considered. A partial differential equation (PDE) model for the thin circular plate is used with the passive and active contributions from the patches included in the internal and external bending moments. This model contains piecewise constant parameters describing the density, flexural rigidity, Poisson ratio, and Kelvin-Voigt damping for the system, as well as patch constants and a coefficient for viscous air damping. Examples demonstrating the estimation of these parameters with experimental acceleration data and a variety of inputs to the experimental plate are presented. By using a physically-derived PDE model to describe the system, parameter sets consistent across experiments are obtained, even when phenomena such as damping due to electric circuits affect the system dynamics.
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
NASA Astrophysics Data System (ADS)
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and against the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L/sub 1/ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations show SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
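A classic instance is L1 estimation, listed above among the semilinear applications: the piecewise linear objective Σ|y − Xβ| becomes a linear program once each residual is split into positive and negative parts carrying separate unit costs. This is a minimal sketch using scipy's general LP solver rather than the thesis's specialized SLP code.

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Least-absolute-deviations fit via an LP with split residuals.

    min sum(u + v)  s.t.  X b + u - v = y,  u, v >= 0, b free.
    The positive and negative parts of each residual carry separate unit
    costs, mirroring the sign-dependent coefficients of semilinear programs.
    """
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

x = np.arange(5.0)
X = np.column_stack([np.ones(5), x])
y = 1.0 + 2.0 * x
y[4] = 50.0                        # gross outlier
beta = l1_regression(X, y)         # the L1 fit passes through the clean points
```

The thesis's point is that a specialized semilinear simplex avoids the doubling of variables this reformulation introduces; the sketch shows only the equivalent standard LP.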
NASA Technical Reports Server (NTRS)
Fink, P. W.; Khayat, M. A.; Wilton, D. R.
2005-01-01
It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly-singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly-singular integrals.
Recently, the authors have introduced the transformation u(x′) = sinh⁻¹( x′ / √(y′² + z²) ) for integrating functions of the form I = ∫ Λ(r′) e^(−jkR)/(4πR) dD, where Λ(r′) is a vector or scalar basis function and R = √(x′² + y′² + z²) is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
Geometric constrained variational calculus I: Piecewise smooth extremals
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2015-05-01
A geometric setup for constrained variational calculus is presented. The analysis deals with the study of the extremals of an action functional defined on piecewise differentiable curves, subject to differentiable, non-holonomic constraints. Special attention is paid to the tensorial aspects of the theory. As far as the kinematical foundations are concerned, a fully covariant scheme is developed through the introduction of the concept of infinitesimal control. The standard classification of the extremals into normal and abnormal ones is discussed, pointing out the existence of an algebraic algorithm assigning to each admissible curve a corresponding abnormality index, related to the co-rank of a suitable linear map. Attention is then shifted to the study of the first variation of the action functional. The analysis includes a revisitation of Pontryagin's equations and of the Lagrange multipliers method, as well as a reformulation of Pontryagin's algorithm in Hamiltonian terms. The analysis is completed by a general result, concerning the existence of finite deformations with fixed endpoints.
NASA Astrophysics Data System (ADS)
Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.
2018-01-01
We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to the order O(αs²). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.
Lakshmanan, Shanmugam; Prakash, Mani; Lim, Chee Peng; Rakkiyappan, Rajan; Balasubramaniam, Pagavathigounder; Nahavandi, Saeid
2018-01-01
In this paper, synchronization of an inertial neural network with time-varying delays is investigated. Based on the variable transformation method, we transform the second-order differential equations into the first-order differential equations. Then, using suitable Lyapunov-Krasovskii functionals and Jensen's inequality, the synchronization criteria are established in terms of linear matrix inequalities. Moreover, a feedback controller is designed to attain synchronization between the master and slave models, and to ensure that the error model is globally asymptotically stable. Numerical examples and simulations are presented to indicate the effectiveness of the proposed method. Besides that, an image encryption algorithm is proposed based on the piecewise linear chaotic map and the chaotic inertial neural network. The chaotic signals obtained from the inertial neural network are utilized for the encryption process. Statistical analyses are provided to evaluate the effectiveness of the proposed encryption algorithm. The results ascertain that the proposed encryption algorithm is efficient and reliable for secure communication applications.
Wrinkling of a thin circular sheet bonded to a spherical substrate
Kohn, Robert V.
2017-01-01
We consider a disc-shaped thin elastic sheet bonded to a compliant sphere. (Our sheet can slip along the sphere; the bonding controls only its normal displacement.) If the bonding is stiff (but not too stiff), the geometry of the sphere makes the sheet wrinkle to avoid azimuthal compression. The total energy of this system is the elastic energy of the sheet plus a (Winkler-type) substrate energy. Treating the thickness of the sheet h as a small parameter, we determine the leading-order behaviour of the energy as h tends to zero, and we give (almost matching) upper and lower bounds for the next-order correction. Our analysis of the leading-order behaviour determines the macroscopic deformation of the sheet; in particular, it determines the extent of the wrinkled region, and predicts the (non-trivial) radial strain of the sheet. The leading-order behaviour also provides insight about the length scale of the wrinkling, showing that it must be approximately independent of the distance r from the centre of the sheet (so that the number of wrinkles must increase with r). Our results on the next-order correction provide insight about how the wrinkling pattern should vary with r. Roughly speaking, they suggest that the length scale of wrinkling should not be exactly constant—rather, it should vary slightly, so that the number of wrinkles at radius r can be approximately piecewise constant in its dependence on r, taking values that are integer multiples of h^(−a) for an appropriate exponent a. This article is part of the themed issue ‘Patterning through instabilities in complex media: theory and applications’. PMID:28373380
Analysis of the numerical differentiation formulas of functions with large gradients
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.
2017-10-01
The solution of a singularly perturbed problem corresponds to a function with large gradients. Therefore the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, we can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or we can construct on a uniform mesh an interpolation formula that is exact on the boundary layer components. In this paper the numerical differentiation formulas for functions with large gradients based on the interpolation formulas on a uniform mesh, which were proposed by A.I. Zadorin, are investigated. The formulas for the first and the second derivatives of a function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. The numerical results validating the theoretical estimates are discussed.
A method for analyzing clustered interval-censored data based on Cox's model.
Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau
2013-02-28
Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by Clayton's copula or by an independence model with a proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution and that our proposed variance estimators are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of the target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
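As an illustration of the extrapolation step (a generic sketch, not code from the paper; the scalar model and names are illustrative), the covariance of a one-dimensional constant-velocity track under a piecewise constant white acceleration model grows with the post-update delay tau as F P Fᵀ + Q:

```python
import numpy as np

def extrapolated_covariance(P, tau, sigma_a2):
    """Covariance of a 1D [position, velocity] track state extrapolated
    tau seconds past the last update, under a piecewise constant white
    acceleration model with acceleration variance sigma_a2."""
    F = np.array([[1.0, tau],
                  [0.0, 1.0]])                       # constant-velocity transition
    Q = sigma_a2 * np.array([[tau**4 / 4, tau**3 / 2],
                             [tau**3 / 2, tau**2]])  # process noise over the gap
    return F @ P @ F.T + Q
```

The RMS position error after the latency is the square root of the (0, 0) entry, which grows with both the update covariance and the delay.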
ON THE BRIGHTNESS AND WAITING-TIME DISTRIBUTIONS OF A TYPE III RADIO STORM OBSERVED BY STEREO/WAVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastwood, J. P.; Hudson, H. S.; Krucker, S.
2010-01-10
Type III solar radio storms, observed at frequencies below ≈16 MHz by space-borne radio experiments, correspond to the quasi-continuous, bursty emission of electron beams onto open field lines above active regions. The mechanisms by which a storm can persist, in some cases for more than a solar rotation, whilst exhibiting considerable radio activity are poorly understood. To address this issue, the statistical properties of a type III storm observed by the STEREO/WAVES radio experiment are presented, examining both the brightness distribution and (for the first time) the waiting-time distribution (WTD). Single power-law behavior is observed in the number distribution as a function of brightness; the power-law index is ≈2.1 and is largely independent of frequency. The WTD is found to be consistent with a piecewise-constant Poisson process. This indicates that during the storm individual type III bursts occur independently and suggests that the storm dynamics are consistent with avalanche-type behavior in the underlying active region.
Canards in a minimal piecewise-linear square-wave burster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desroches, M.; Krupa, M.; Fernández-García, S., E-mail: soledad@us.es
We construct a piecewise-linear (PWL) approximation of the Hindmarsh-Rose (HR) neuron model that is minimal, in the sense that the vector field has the least number of linearity zones, in order to reproduce all the dynamics present in the original HR model with classical parameter values. This includes square-wave bursting and also special trajectories called canards, which possess long repelling segments and organise the transitions between stable bursting patterns with n and n + 1 spikes, also referred to as spike-adding canard explosions. We propose a first approximation of the smooth HR model, using a continuous PWL system, and show that its fast subsystem cannot possess a homoclinic bifurcation, which is necessary to obtain proper square-wave bursting. We then relax the assumption of continuity of the vector field across all zones, and we show that we can obtain a homoclinic bifurcation in the fast subsystem. We use the recently developed canard theory for PWL systems in order to reproduce the spike-adding canard explosion feature of the HR model as studied, e.g., in Desroches et al., Chaos 23(4), 046106 (2013).
Viscoelastic Timoshenko Beams with Occasionally Constant Relaxation Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tatar, Nasser-eddine, E-mail: tatarn@kfupm.edu.sa
2012-08-15
For a prescribed, arbitrary desirable decay, suitable viscoelastic materials are determined through their relaxation functions. It is shown that if we wish to have a decay of order γ(t), then the kernels should be of the same order; that is, their product with this function should be summable.
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications
Austin, Peter C.
2017-01-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954
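As a minimal sketch of the piecewise exponential idea (function and argument names are mine, not from the tutorial), the survival function is the exponential of minus a cumulative hazard accrued at a constant rate within each interval:

```python
import math

def piecewise_survival(t, cutpoints, hazards):
    """S(t) = exp(-cumulative hazard) for a piecewise-constant hazard.

    cutpoints: left endpoints of the intervals, starting at 0.0
    hazards:   one constant hazard per interval; the last interval
               extends to infinity
    """
    edges = list(cutpoints) + [math.inf]
    # exposure in each interval is clipped to [lo, min(t, hi)]
    cum_hazard = sum(lam * max(0.0, min(t, hi) - lo)
                     for lo, hi, lam in zip(edges[:-1], edges[1:], hazards))
    return math.exp(-cum_hazard)
```

With equal hazards in every interval this collapses to the ordinary exponential model; the Poisson-regression equivalence described above corresponds to treating each subject's exposure within an interval as an offset.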
NASA Astrophysics Data System (ADS)
Jex, Michal; Lotoreichik, Vladimir
2016-02-01
Let Λ ⊂ ℝ² be a non-closed piecewise-C¹ curve, which is either bounded with two free endpoints or unbounded with one free endpoint. Let u±|Λ ∈ L²(Λ) be the traces of a function u in the Sobolev space H¹(ℝ²∖Λ) onto the two faces of Λ. We prove that for a wide class of shapes of Λ the Schrödinger operator HωΛ with δ′-interaction supported on Λ of strength ω ∈ L∞(Λ; ℝ), associated with the quadratic form H¹(ℝ²∖Λ) ∋ u ↦ ∫ℝ² |∇u|² dx − ∫Λ ω |u+|Λ − u−|Λ|² ds, has no negative spectrum provided that ω is pointwise majorized by a strictly positive function explicitly expressed in terms of Λ. If, additionally, the domain ℝ²∖Λ is quasi-conical, we show that σ(HωΛ) = [0, +∞). For a bounded curve Λ in our class and non-varying interaction strength ω ∈ ℝ, we derive the existence of a constant ω∗ > 0 such that σ(HωΛ) = [0, +∞) for all ω ∈ (−∞, ω∗]; informally speaking, bound states are absent in the weak coupling regime.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod
2010-04-01
For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a method based on high-order instantaneous moments to estimate the phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piecewise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
A RUTCOR Project in Discrete Applied Mathematics
1990-02-20
representations of smooth piecewise polynomial functions over triangulated regions have led in particular to the conclusion that Groebner basis methods of...Reversing Number of a Digraph," in preparation. 4. Billera, L.J., and Rose, L.L., " Groebner Basis Methods for Multivariate Splines," RRR 1-89, January
Robust stability of interval bidirectional associative memory neural network with time delays.
Liao, Xiaofeng; Wong, Kwok-wo
2004-04-01
In this paper, the conventional bidirectional associative memory (BAM) neural network with signal transmission delay is intervalized in order to study the bounded effect of deviations in network parameters and external perturbations. The resultant model is referred to as a novel interval dynamic BAM (IDBAM) model. By combining a number of different Lyapunov functionals with the Razumikhin technique, some sufficient conditions for the existence of a unique equilibrium and for robust stability are derived. These results are fairly general and can be verified easily. We then extend our investigation to the time-varying delay case, and derive some robust stability criteria for BAM with perturbations of time-varying delays. Moreover, our approach to the analysis allows us to consider several different types of activation functions, including piecewise linear sigmoids with bounded activations as well as the usual C1-smooth sigmoids. We believe that the results obtained are significant for the design and application of BAM neural networks.
Generalized Scalar-on-Image Regression Models via Total Variation.
Wang, Xiao; Zhu, Hongtu
2017-01-01
The use of imaging markers to predict clinical outcomes can have a great impact in public health. The aim of this paper is to develop a class of generalized scalar-on-image regression models via total variation (GSIRM-TV), in the sense of generalized linear models, for scalar response and imaging predictor with the presence of scalar covariates. A key novelty of GSIRM-TV is that it is assumed that the slope function (or image) of GSIRM-TV belongs to the space of bounded total variation in order to explicitly account for the piecewise smooth nature of most imaging data. We develop an efficient penalized total variation optimization to estimate the unknown slope function and other parameters. We also establish nonasymptotic error bounds on the excess risk. These bounds are explicitly specified in terms of sample size, image size, and image smoothness. Our simulations demonstrate a superior performance of GSIRM-TV against many existing approaches. We apply GSIRM-TV to the analysis of hippocampus data obtained from the Alzheimers Disease Neuroimaging Initiative (ADNI) dataset.
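As a one-dimensional illustration of why a total-variation penalty favours piecewise-smooth estimates (a generic sketch, not the authors' GSIRM-TV implementation):

```python
def tv_penalty(m):
    """Discrete total variation: sum of absolute first differences."""
    return sum(abs(b - a) for a, b in zip(m[:-1], m[1:]))

def tikhonov_penalty(m):
    """Squared-difference (smoothness) penalty, for comparison."""
    return sum((b - a) ** 2 for a, b in zip(m[:-1], m[1:]))
```

Total variation charges a jump only by its height, no matter how many samples it spans, so sharp edges survive the penalty; the quadratic penalty instead prefers to smear the same jump over many small steps, blurring piecewise structure.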
NASA Astrophysics Data System (ADS)
Aioanei, Daniel; Samorì, Bruno; Brucale, Marco
2009-12-01
Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
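A minimal sketch of the piecewise linear approximation step (the windowing scheme and names are illustrative, not the authors' code): fit an independent least-squares line to the force-time samples in each window, with no continuity constraint between windows.

```python
def fit_line(ts, ys):
    """Ordinary least-squares slope and intercept for one window."""
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    sxx = sum((t - tm) ** 2 for t in ts)
    sxy = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
    slope = sxy / sxx
    return slope, ym - slope * tm

def piecewise_linear_fit(t, y, edges):
    """One least-squares line per window [edges[i], edges[i+1])."""
    segments = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        pts = [(ti, yi) for ti, yi in zip(t, y) if lo <= ti < hi]
        slope, intercept = fit_line([p[0] for p in pts], [p[1] for p in pts])
        segments.append((lo, hi, slope, intercept))
    return segments
```

Each fitted segment then supplies a local force-increase rate for the stretching interval leading up to one rupture event, in the spirit of the heterogeneous set of force characteristic functions described above.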
Hybrid High-Order methods for finite deformations of hyperelastic materials
NASA Astrophysics Data System (ADS)
Abbas, Mickaël; Ern, Alexandre; Pignet, Nicolas
2018-01-01
We devise and evaluate numerically Hybrid High-Order (HHO) methods for hyperelastic materials undergoing finite deformations. The HHO methods use as discrete unknowns piecewise polynomials of order k≥1 on the mesh skeleton, together with cell-based polynomials that can be eliminated locally by static condensation. The discrete problem is written as the minimization of a broken nonlinear elastic energy where a local reconstruction of the displacement gradient is used. Two HHO methods are considered: a stabilized method where the gradient is reconstructed as a tensor-valued polynomial of order k and a stabilization is added to the discrete energy functional, and an unstabilized method which reconstructs a stable higher-order gradient and circumvents the need for stabilization. Both methods satisfy the principle of virtual work locally with equilibrated tractions. We present a numerical study of the two HHO methods on test cases with known solution and on more challenging three-dimensional test cases including finite deformations with strong shear layers and cavitating voids. We assess the computational efficiency of both methods, and we compare our results to those obtained with an industrial software using conforming finite elements and to results from the literature. The two HHO methods exhibit robust behavior in the quasi-incompressible regime.
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
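B-spline bases of this kind can be evaluated by the standard Cox-de Boor recursion; the following is a generic textbook sketch, not code from the paper:

```python
def bspline_basis(i, p, knots, x):
    """Evaluate the i-th B-spline basis function of degree p at x
    via the Cox-de Boor recursion."""
    if p == 0:
        # indicator of the half-open knot span [knots[i], knots[i+1])
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    value = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        value += (x - knots[i]) / denom * bspline_basis(i, p - 1, knots, x)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        value += (knots[i + p + 1] - x) / denom * bspline_basis(i + 1, p - 1, knots, x)
    return value
```

On a uniform knot vector the basis functions are shifted copies of one compactly supported piecewise polynomial, and they sum to one on the interior of the knot span (partition of unity), which underlies the boundary-condition handling mentioned above.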
Self-propulsion of a body with rigid surface and variable coefficient of lift in a perfect fluid
NASA Astrophysics Data System (ADS)
Ramodanov, Sergey M.; Tenenev, Valentin A.; Treschev, Dmitry V.
2012-11-01
We study the system of a 2D rigid body moving in an unbounded volume of incompressible, vortex-free perfect fluid which is at rest at infinity. The body is equipped with a gyrostat and a so-called Flettner rotor. Due to the latter the body is subject to a lifting force (Magnus effect). The rotational velocities of the gyrostat and the rotor are assumed to be known functions of time (control inputs). The equations of motion are presented in the form of the Kirchhoff equations. The integrals of motion are given in the case of piecewise continuous control. Using these integrals we obtain a (reduced) system of first-order differential equations on the configuration space. Then an optimal control problem for several types of the inputs is solved using genetic algorithms.
Constitutive Behavior and Processing Map of T2 Pure Copper Deformed from 293 to 1073 K
NASA Astrophysics Data System (ADS)
Liu, Ying; Xiong, Wei; Yang, Qing; Zeng, Ji-Wei; Zhu, Wen; Sunkulp, Goel
2018-02-01
The deformation behavior of T2 pure copper compressed from 293 to 1073 K with strain rates from 0.01 to 10 s⁻¹ was investigated. The constitutive equations were established by the Arrhenius constitutive model, which can be expressed as a piecewise function of temperature with two sections, in the ranges 293-723 K and 723-1073 K. The processing maps were established according to the dynamic material model for strains of 0.2, 0.4, 0.6, and 0.8, and the optimal processing parameters of T2 copper were determined accordingly. In order to obtain a better understanding of the deformation behavior, the microstructures of the compressed samples were studied by electron back-scattered diffraction. The grains tend to be more refined with decreases in temperature and increases in strain rate.
Spherical type integrable classical systems in a magnetic field
NASA Astrophysics Data System (ADS)
Marchesiello, A.; Šnobl, L.; Winternitz, P.
2018-04-01
We show that four classes of second order spherical type integrable classical systems in a magnetic field exist in the Euclidean space E³, and construct the Hamiltonian and two second order integrals of motion in involution for each of them. For one of the classes the Hamiltonian depends on four arbitrary functions of one variable. This class contains the magnetic monopole as a special case. Two further classes have Hamiltonians depending on one arbitrary function of one variable and four or six constants, respectively. The magnetic field in these cases is radial. The remaining system corresponds to a constant magnetic field and the Hamiltonian depends on two constants. Questions of superintegrability, i.e. the existence of further integrals, are discussed.
Active distribution network planning considering linearized system loss
NASA Astrophysics Data System (ADS)
Li, Xiao; Wang, Mingqiang; Xu, Hao
2018-02-01
In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of the DGs and the topology of the network are fixed. The proposed model optimizes the DG capacities and the distribution line capacities simultaneously through a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is explicitly analyzed. For simplicity, the network loss is approximated as a quadratic function of the difference of voltage phase angles and is then piecewise linearized; a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborate linearization technique is tested on the IEEE 33-bus distribution network system.
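Chord (secant) linearization over breakpoints of different lengths can be sketched as follows (the function and breakpoints here are illustrative, not the paper's loss model; shorter segments where curvature matters reduce the approximation error):

```python
def linearize(f, breakpoints):
    """Secant linearization of f: one (lo, hi, slope, intercept) tuple
    per segment; segment lengths may differ."""
    segments = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        slope = (f(hi) - f(lo)) / (hi - lo)       # chord slope over [lo, hi]
        segments.append((lo, hi, slope, f(lo) - slope * lo))
    return segments

def eval_pwl(segments, x):
    """Evaluate the piecewise linear approximation at x."""
    for lo, hi, slope, intercept in segments:
        if lo <= x <= hi:
            return slope * x + intercept
    raise ValueError("x outside linearized range")
```

For a convex loss such as f(θ) = θ², the chords overestimate the true function, so a planner using the linearized loss is conservative about losses.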
Fast Implicit Methods For Elliptic Moving Interface Problems
2015-12-11
analyzed, and tested for the Fourier transform of piecewise polynomials given on d-dimensional simplices in D-dimensional Euclidean space. These transforms ... evaluation, and one to three orders of magnitude slower than the classical uniform Fast Fourier Transform. Second, bilinear quadratures ... which ...
NASA Astrophysics Data System (ADS)
Maione, F.; De Pietri, R.; Feo, A.; Löffler, F.
2016-09-01
We present results from three-dimensional general relativistic simulations of binary neutron star coalescences and mergers using public codes. We considered equal-mass models where the baryon mass of the two neutron stars is 1.4 M⊙, described by four different equations of state (EOS) for the cold nuclear matter (APR4, SLy, H4, and MS1; all parametrized as piecewise polytropes). We started the simulations from four different initial interbinary distances (40, 44.3, 50, and 60 km), including up to the last 16 orbits before merger. That allows us to show the effects on the gravitational wave (GW) phase evolution, radiated energy and angular momentum due to: the use of different EOS, the orbital eccentricity present in the initial data and the initial separation (in the simulation) between the two stars. Our results show that eccentricity has a major role in the discrepancy between numerical and analytical waveforms until the very last few orbits, where ‘tidal’ effects and missing high-order post-Newtonian coefficients also play a significant role. We test different methods for extrapolating the GW signal extracted at finite radii to null infinity. We show that an effective procedure for integrating the Newman-Penrose ψ4 signal to obtain the GW strain h is to apply a simple high-pass digital filter to h after a time-domain integration, where only the two physically motivated integration constants are introduced. That should be preferred to the more common procedures of introducing additional integration constants, integrating in the frequency domain or filtering ψ4 before integration.
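A piecewise polytrope sets P = K_i ρ^Γ_i on each density segment, with continuity of the pressure at the dividing densities fixing the constants K_i beyond the first. A generic sketch (the published APR4/SLy/H4/MS1 fit coefficients are not reproduced here; the values below are illustrative):

```python
def make_piecewise_polytrope(K0, gammas, rho_divs):
    """Return P(rho) with P = K_i * rho**gammas[i] on each density segment.

    K0:       polytropic constant of the lowest-density segment
    gammas:   adiabatic exponents, one per segment
    rho_divs: dividing densities between consecutive segments
    """
    Ks = [K0]
    for g_prev, g_next, rho_d in zip(gammas[:-1], gammas[1:], rho_divs):
        # continuity of P at rho_d: K_prev * rho_d**g_prev == K_next * rho_d**g_next
        Ks.append(Ks[-1] * rho_d ** (g_prev - g_next))

    def pressure(rho):
        i = 0
        while i < len(rho_divs) and rho >= rho_divs[i]:
            i += 1
        return Ks[i] * rho ** gammas[i]

    return pressure
```

The same segment-matching logic extends to any number of pieces; realistic EOS fits typically use a low-density crust piece plus three core pieces.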
Mamey, Mary Rose; Barbosa-Leiker, Celestina; McPherson, Sterling; Burns, G Leonard; Parks, Craig; Roll, John
2015-12-01
Researchers often want to examine 2 comorbid conditions simultaneously. One strategy to do so is through the use of parallel latent growth curve modeling (LGCM). This statistical technique allows for the simultaneous evaluation of 2 disorders to determine the explanations and predictors of change over time. Additionally, a piecewise model can help identify whether there are more than 2 growth processes within each disorder (e.g., during a clinical trial). A parallel piecewise LGCM was applied to self-reported attention-deficit/hyperactivity disorder (ADHD) and self-reported substance use symptoms in 303 adolescents enrolled in cognitive-behavioral therapy treatment for a substance use disorder and receiving either oral-methylphenidate or placebo for ADHD across 16 weeks. Assessing these 2 disorders concurrently allowed us to determine whether elevated levels of 1 disorder predicted elevated levels or increased risk of the other disorder. First, a piecewise growth model measured ADHD and substance use separately. Next, a parallel piecewise LGCM was used to estimate the regressions across disorders to determine whether higher scores at baseline of the disorders (i.e., ADHD or substance use disorder) predicted rates of change in the related disorder. Finally, treatment was added to the model to predict change. While the analyses revealed no significant relationships across disorders, this study explains and applies a parallel piecewise growth model to examine the developmental processes of comorbid conditions over the course of a clinical trial. Strengths of piecewise and parallel LGCMs for other addictions researchers interested in examining dual processes over time are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable.
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Constant Communities in Complex Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh
2013-05-01
Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard and is tackled heuristically, so merely changing the order in which vertices are processed can alter their community assignments. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignment to communities is, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.
Dong, Xingjian; Peng, Zhike; Hua, Hongxing; Meng, Guang
2014-01-01
An efficient spectral element (SE) with electric potential degrees of freedom (DOF) is proposed to investigate the static electromechanical responses of a piezoelectric bimorph for its actuator and sensor functions. A sublayer model based on the piecewise linear approximation for the electric potential is used to describe the nonlinear distribution of electric potential through the thickness of the piezoelectric layers. An equivalent single layer (ESL) model based on first-order shear deformation theory (FSDT) is used to describe the displacement field. The Legendre orthogonal polynomials of order 5 are used in the element interpolation functions. The validity and the capability of the present SE model for investigation of global and local responses of the piezoelectric bimorph are confirmed by comparing the present solutions with those obtained from coupled 3-D finite element (FE) analysis. It is shown that, without introducing any higher-order electric potential assumptions, the current method can accurately describe the distribution of the electric potential across the thickness even for a rather thick bimorph. It is revealed that the effect of the electric potential is significant when the bimorph is used as a sensor, while the effect is insignificant when the bimorph is used as an actuator; therefore, the present study may provide a better understanding of the nonlinearly induced electric potential for bimorph sensors and actuators. PMID:24561399
A fast and accurate online sequential learning algorithm for feedforward networks.
Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N
2006-11-01
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
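The sequential phase of OS-ELM is a recursive least-squares update of the output weights over randomly parameterized hidden nodes. A minimal sketch follows; it is illustrative rather than the authors' reference implementation, and the sigmoid activation, ridge term, and chunk sizes are assumptions.

```python
import numpy as np

class OSELMSketch:
    """OS-ELM-style learner: random, fixed additive hidden nodes with a
    sigmoid activation; output weights updated by recursive least squares."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))  # random input weights (never retrained)
        self.b = rng.standard_normal(n_hidden)              # random biases (never retrained)
        self.beta = None                                    # output weights (learned)
        self.P = None                                       # inverse correlation matrix for RLS

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer

    def init_batch(self, X, T):
        """Initialization phase: regularized least squares on the first chunk."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, T):
        """Sequential phase: rank-k recursive least-squares update per chunk."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

After an initialization chunk, data can arrive one-by-one or chunk-by-chunk; each `partial_fit` call refines the output weights without revisiting past data, which is the point of the algorithm.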
Theory of Turing Patterns on Time Varying Networks.
Petit, Julien; Lauwens, Ben; Fanelli, Duccio; Carletti, Timoteo
2017-10-06
The process of pattern formation for a multispecies model anchored on a time-varying network is studied. A nonhomogeneous perturbation superposed on a homogeneous stable fixed point can be amplified following the Turing mechanism of instability, solely instigated by the network dynamics. By properly tuning the frequency of the imposed network evolution, one can make the examined system behave as its averaged counterpart over a finite time window. This is the key observation used to derive a closed analytical prediction for the onset of the instability in the time-dependent framework. Continuously varying and piecewise constant periodic time-varying networks are analyzed, setting the framework for the proposed approach. The extension to nonperiodic settings is also discussed.
NASA Astrophysics Data System (ADS)
Guo, Yongfeng; Shen, Yajun; Tan, Jianguo
2016-09-01
The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises (a multiplicative non-Gaussian noise and an additive Gaussian white noise) is investigated. Applying the path integral approach, the unified colored noise approximation, and the two-state model theory, the analytical expression for the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. Numerical computations show that: (i) as a function of the non-Gaussian noise intensity, the SNR increases when the non-Gaussian noise deviation parameter q is increased; (ii) as a function of the Gaussian noise intensity, the SNR decreases when q is increased. This demonstrates that the effect of the non-Gaussian noise on the SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effects of the correlation time of the non-Gaussian noise, the cross-correlation strength, and the amplitude and frequency of the periodic signal on SR.
Hypothalamic stimulation and baroceptor reflex interaction on renal nerve activity.
NASA Technical Reports Server (NTRS)
Wilson, M. F.; Ninomiya, I.; Franz, G. N.; Judy, W. V.
1971-01-01
The basal level of mean renal nerve activity (MRNA-0) measured in anesthetized cats was found to be modified by the additive interaction of hypothalamic and baroceptor reflex influences. Data were collected with the four major baroceptor nerves either intact or cut, and with mean aortic pressure (MAP) either clamped with a reservoir or raised with l-epinephrine. With intact baroceptor nerves, MRNA stayed essentially constant at level MRNA-0 for MAP below an initial pressure P1, and fell approximately linearly to zero as MAP was raised to P2. Cutting the baroceptor nerves kept MRNA at MRNA-0 (assumed to represent basal central neural output) independent of MAP. The addition of hypothalamic stimulation produced nearly constant increments in MRNA for all pressure levels up to P2, with complete inhibition at some level above P2. The increments in MRNA depended on frequency and location of the stimulus. A piecewise linear model describes MRNA as a linear combination of hypothalamic, basal central neural, and baroceptor reflex activity.
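The piecewise linear description in this abstract (constant basal activity below P1, linear decline to zero at P2, plus a near-constant hypothalamic increment below P2) can be written down directly. This is a hypothetical sketch: the function and parameter names are illustrative, and the complete-inhibition region above P2 is simplified to zero increment.

```python
import numpy as np

def mrna(map_mmhg, mrna0, p1, p2, dh=0.0):
    """Piecewise linear sketch of mean renal nerve activity vs mean aortic
    pressure: mrna0 below p1, linear decline to zero between p1 and p2,
    zero above p2; dh models the near-constant hypothalamic increment
    applied at pressures below p2 (parameter names hypothetical)."""
    m = np.asarray(map_mmhg, dtype=float)
    base = np.where(m < p1, mrna0,
                    np.clip(mrna0 * (p2 - m) / (p2 - p1), 0.0, None))
    return base + np.where(m < p2, dh, 0.0)
```

For example, with mrna0 = 10, p1 = 80 mmHg, p2 = 160 mmHg, and dh = 3, the activity is 13 at 60 mmHg, 10.5 at 100 mmHg, and 0 at 200 mmHg.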
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.
2007-02-02
First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak’s retention time in all of the chromatograms, divided by the peak-width-at-base. The raw simulated data sets that were studied contained peaks with average relative shifts ranging between 0.3 and 3.0. Second, a “real” data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8 fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported.
Piecewise alignment either did not change the peak height, or caused it to slightly decrease. The average relative difference in peak height after piecewise alignment was –0.20%. Piecewise alignment caused the peak areas to either stay the same, slightly increase, or slightly decrease. The average absolute relative difference in area after piecewise alignment was 0.15%.
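The two metrics defined in this abstract are easy to compute directly. A short sketch of both, using the definitions quoted above (the function names are illustrative):

```python
import numpy as np

def mean_correlation_to_first(chromatograms):
    """Unsupervised alignment-quality score: the average Pearson correlation
    between the first chromatogram and every other one in the data set.
    It is maximized over the alignment parameters (higher is better)."""
    ref = chromatograms[0]
    return float(np.mean([np.corrcoef(ref, c)[0, 1] for c in chromatograms[1:]]))

def relative_shift(peak_times, peak_width_at_base):
    """Relative shift metric: four times the standard deviation of a peak's
    retention time across runs, divided by its peak-width-at-base."""
    return 4.0 * float(np.std(peak_times)) / peak_width_at_base
```

On synthetic Gaussian peaks, a set of identical chromatograms scores a mean correlation of 1.0, while shifting the peak between runs drives the score down, which is what makes the score usable as an optimization target.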
An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles
2012-06-01
Armament Data Line, DOF Degree of Freedom, PS Pseudospectral, LGL Legendre-Gauss-Lobatto quadrature nodes, ODE Ordinary Differential Equation ... low-order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray ... claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory
Growth in Reading Performance during the First Four Years in School. Research Report. ETS RR-07-39
ERIC Educational Resources Information Center
Rock, Donald A.
2007-01-01
This study addressed concerns about the potential for differential gains in reading during the first 2 years of formal schooling (K-1) versus the next 2 years of schooling (1st-3rd grade). A multilevel piecewise regression with a node at spring 1st grade was used in order to define separate regressions for the two time periods. Empirical Bayes…
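A piecewise regression with a node can be fit as ordinary least squares on a basis that includes a hinge term. This sketch is the single-level version of the idea, not the study's multilevel model; all names and data are illustrative.

```python
import numpy as np

def piecewise_fit(t, y, node):
    """Least-squares fit of a two-piece linear growth model with a node
    (knot): y ~ b0 + b1*t + b2*max(t - node, 0). b1 is the pre-node slope
    and b1 + b2 is the post-node slope."""
    t = np.asarray(t, dtype=float)
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - node, 0.0)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef  # (intercept, pre-node slope, change in slope at the node)
```

A significant b2 is what signals that growth differs between the two periods on either side of the node.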
Effect of smoothing on robust chaos.
Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae
2010-08-01
In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where the ‘border’ is the phase-space region in which discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a smoothing parameter and find that the measure of the periodic windows in parameter space scales linearly with this parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a nominally piecewise linear dynamical system, implemented as an electronic circuit, is also provided.
Linear response formula for piecewise expanding unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Smania, Daniel
2008-04-01
The average $R(t)=\int \varphi\,d\mu_t$ of a smooth function $\varphi$ with respect to the SRB measure $\mu_t$ of a smooth one-parameter family $f_t$ of piecewise expanding interval maps is not always Lipschitz (Baladi 2007 Commun. Math. Phys. 275 839-59; Mazzolena 2007 Master's Thesis, Rome 2, Tor Vergata). We prove that if $f_t$ is tangent to the topological class of $f$, and if $\partial_t f_t|_{t=0} = X \circ f$, then $R(t)$ is differentiable at zero, and $R'(0)$ coincides with the resummation proposed (Baladi 2007) of the (a priori divergent) series $\sum_{n=0}^\infty \int X(y)\,\partial_y(\varphi \circ f^n)(y)\,d\mu_0(y)$ given by Ruelle's conjecture. In fact, we show that $t \mapsto \mu_t$ is differentiable within Radon measures. Linear response is violated if and only if $f_t$ is transversal to the topological class of $f$.
Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling
NASA Astrophysics Data System (ADS)
Dobronets, B. S.; Popova, O. A.
2018-05-01
Data aggregation for numerical modeling is reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling, and propose using numerical probabilistic analysis (NPA) to compute the aggregation. An important feature of this study is how the aggregated data are represented: the proposed approach to data aggregation can be interpreted as the frequency distribution of a variable, whose properties are studied through its density function. For this purpose, the authors propose using piecewise polynomial models; a suitable example of such an approach is the spline. The authors show that their approach to data aggregation reduces the level of data uncertainty and significantly increases the efficiency of numerical calculations. To demonstrate how well the proposed methods correspond to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
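One concrete way to aggregate raw samples into a piecewise polynomial density model, in the spirit of this abstract, is to histogram the data and interpolate the bin densities with a spline. A sketch under stated assumptions (bin count and the choice of a cubic spline are ours, not the authors'):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_density(samples, n_bins=30):
    """Aggregate raw samples into a piecewise polynomial (cubic spline)
    estimate of the density function: histogram first, then interpolate
    the bin densities at the bin centers."""
    counts, edges = np.histogram(samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # callable piecewise cubic; note it extrapolates outside the data range
    return CubicSpline(centers, counts)
```

The returned object is itself a piecewise polynomial, so downstream numerical modeling can evaluate, differentiate, or integrate it cheaply instead of revisiting the raw samples.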
A novel approach to piecewise analytic agricultural machinery path reconstruction
NASA Astrophysics Data System (ADS)
Wörz, Sascha; Mederle, Michael; Heizinger, Valentin; Bernhardt, Heinz
2017-12-01
Before analysing machinery operation in fields, one must cope with the problem that the GPS signals of receivers located on the machines contain measurement noise and are time-discrete, and that the underlying physical system describing the positions, axial and absolute velocities, angular rates, and angular orientation of the operating machines over the whole working time is unknown. This research work presents a new three-dimensional mathematical approach using kinematic relations based on control variables, such as Euler angular velocities and angles, and a discrete target control problem in which the state control function is given by the sum of squared residuals involving the state and control variables, so as to obtain such a physical system; this yields a noise-free and piecewise analytic representation of the positions, velocities, angular rates, and angular orientation. It can be used for a further detailed study and analysis of the problem of why agricultural vehicles operate in practice as they do.
Payment contracts in a preventive health care system: a perspective from operations management.
Yaesoubi, Reza; Roberts, Stephen D
2011-12-01
We consider a health care system consisting of two noncooperative parties: a health purchaser (payer) and a health provider, where the interaction between the two parties is governed by a payment contract. We determine the contracts that coordinate the health purchaser-health provider relationship; i.e. the contracts that maximize the population's welfare while allowing each entity to optimize its own objective function. We show that under certain conditions (1) when the number of customers for a preventive medical intervention is verifiable, there exists a gate-keeping contract and a set of concave piecewise linear contracts that coordinate the system, and (2) when the number of customers is not verifiable, there exists a contract of bounded linear form and a set of incentive-feasible concave piecewise linear contracts that coordinate the system. Copyright © 2011 Elsevier B.V. All rights reserved.
Sliding mode control of outbreaks of emerging infectious diseases.
Xiao, Yanni; Xu, Xiaxia; Tang, Sanyi
2012-10-01
This paper proposes and analyzes a mathematical model of an infectious disease system with a piecewise control function implementing a threshold policy for disease management. The proposed models extend the classic models by including a piecewise incidence rate to represent control or precautionary measures being triggered once the number of infected individuals exceeds a threshold level. The long-term behaviour of the proposed non-smooth system under this strategy consists of the so-called sliding motion: a very rapid switching between application and interruption of the control action. Model solutions ultimately approach either one of two endemic states for the two structures or the sliding equilibrium on the switching surface, depending on the threshold level. Our findings suggest that proper combinations of threshold densities and control intensities based on the threshold policy can either preclude outbreaks or lead the number of infected individuals to a previously chosen level.
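The threshold-triggered piecewise incidence can be illustrated with a minimal SIR-type sketch. All parameter values and the forward-Euler discretization are our assumptions; the paper's actual model and its Filippov analysis are not reproduced here.

```python
import numpy as np

def sir_threshold(beta, gamma, eps, i_c, s0, i0, dt=0.01, t_max=200.0):
    """Forward-Euler SIR sketch with a piecewise incidence rate: transmission
    proceeds at rate beta while the infected fraction I <= i_c, and at the
    reduced rate beta*(1 - eps) once I exceeds the threshold i_c, mimicking
    a control measure triggered by the threshold policy."""
    s, i = s0, i0
    infected = []
    for _ in range(int(t_max / dt)):
        b = beta * (1.0 - eps) if i > i_c else beta  # control switches on above the threshold
        s, i = s + dt * (-b * s * i), i + dt * (b * s * i - gamma * i)
        infected.append(i)
    return np.array(infected)
```

Running the sketch with and without control (eps > 0 vs eps = 0) shows the qualitative behaviour the abstract describes: with a sufficiently strong control, the infected fraction chatters along the threshold (the sliding motion) instead of producing a large outbreak peak.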
The Hindmarsh-Rose neuron model: bifurcation analysis and piecewise-linear approximations.
Storace, Marco; Linaro, Daniele; de Lange, Enno
2008-09-01
This paper provides a global picture of the bifurcation scenario of the Hindmarsh-Rose model. A combination between simulations and numerical continuations is used to unfold the complex bifurcation structure. The bifurcation analysis is carried out by varying two bifurcation parameters and evidence is given that the structure that is found is universal and appears for all combinations of bifurcation parameters. The information about the organizing principles and bifurcation diagrams are then used to compare the dynamics of the model with that of a piecewise-linear approximation, customized for circuit implementation. A good match between the dynamical behaviors of the models is found. These results can be used both to design a circuit implementation of the Hindmarsh-Rose model mimicking the diversity of neural response and as guidelines to predict the behavior of the model as well as its circuit implementation as a function of parameters. (c) 2008 American Institute of Physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
San Fabián, J.; Omar, S.; García de la Vega, J. M., E-mail: garcia.delavega@uam.es
The effect of a fraction of Hartree-Fock exchange on the calculated spin-spin coupling constants involving fluorine through a hydrogen bond is analyzed in detail. Coupling constants calculated using wavefunction methods are revisited in order to get high-level calculations using the same basis set. Accurate MCSCF results are obtained using an additive approach. These constants and their contributions are used as a reference for density functional calculations. Within density functional theory, the Hartree-Fock exchange functional is split into short- and long-range parts using a modified version of the Coulomb-attenuating method with the SLYP functional as well as with the original B3LYP. Results support the difficulties of calculating hydrogen bond coupling constants using density functional methods when fluorine nuclei are involved. Coupling constants are very sensitive to the Hartree-Fock exchange and it seems that, contrary to other properties, it is important to include this exchange for short-range interactions. The best functionals are tested in two different groups of complexes: those related to anionic clusters of type [F(HF)ₙ]⁻ and those formed by difluoroacetylene and either one or two hydrogen fluoride molecules.
Spin-lattice relaxation-rate anomaly at structural phase transitions
NASA Astrophysics Data System (ADS)
Levanyuk, A. P.; Minyukov, S. A.; Etrillard, J.; Toudic, B.
1997-12-01
The theory of the spin-lattice relaxation (SLR)-rate anomaly at structural phase transitions proposed about 30 years ago is reconsidered, taking into account that knowledge about the relevant lattice response functions has changed considerably. We use both the results of previous authors and, where necessary, original calculations of the response functions. We consider displacive systems and use perturbation theory to treat the lattice anharmonicities in a broad temperature region whenever possible. Some comments about order-disorder systems are made as well. The possibility of linear coupling of the order parameter and the resonance frequency is always assumed. It is found that in the symmetrical phase the anomaly is due to one-phonon processes, the anomalous part being proportional to either (T-Tc)^-1 or (T-Tc)^-1/2 depending on a condition on the soft-mode dispersion. In both cases the value of the SLR rate at the boundary of applicability of the theory (close to the phase transition) is estimated to be 10^2-10^3 times larger than the typical value of the SLR rate in an ideal crystal. An essential specific feature of the nonsymmetrical phase is the appearance of third-order anharmonicities, which are well known to lead to a low-frequency dispersion of the order-parameter damping constant. We have found that this constant exhibits, in addition, a strong wave-vector dispersion, so that the damping constant determining the SLR rate is quite different from that at zero wave vector. In the case of a two-component order parameter the damping constant for the component with nonzero equilibrium value is different from that for the other component; the difference is of the same order of magnitude as the damping constants themselves. In the case of the incommensurate phase a part of the mentioned third-order anharmonicity is responsible for the longitudinal-transversal interaction that is well known to influence the static longitudinal response function.
We also calculate the dynamic response function and find that, for the SLR calculations, the imaginary part is of main importance. Due to this interaction the longitudinal SLR rate acquires a dependence on the Larmor frequency. This dependence is, however, fairly weak (logarithmic). The implications of the obtained results for the interpretation of the experimental data on SLR in the incommensurate phase are discussed as well.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiable nature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise constant template from the first-pass low-quality CT image reconstructed using an analytical algorithm. The template image is applied as an initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
Spectral/hp element methods: Recent developments, applications, and perspectives
NASA Astrophysics Data System (ADS)
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
NASA Astrophysics Data System (ADS)
Zolotaryuk, A. V.
2017-06-01
Several families of one-point interactions are derived from the system consisting of two or three δ-potentials which are regularized by piecewise constant functions. In physical terms such an approximating system represents two or three extremely thin layers separated by some distance. The two-scale squeezing of this heterostructure to one point, as both the width of the δ-approximating functions and the distance between them simultaneously tend to zero, is studied using a power parameterization through a squeezing parameter ε → 0, so that the intensity of each δ-potential is c_j = a_j ε^{1-μ}, a_j ∈ ℝ, j = 1, 2, 3, the width of each layer is l = ε, and the distance between the layers is r = cε^τ, c > 0. It is shown that at some values of the intensities a_1, a_2 and a_3, the transmission across the limit point potentials is non-zero, whereas outside these (resonance) values the one-point interactions are opaque, splitting the system at the point of singularity into two independent subsystems. Within the interval 1 < μ < 2, the resonance sets consist of two curves on the (a_1, a_2)-plane and three surfaces in the (a_1, a_2, a_3)-space. As the parameter μ approaches the value μ = 2, three types of splitting of the one-point interactions into countable families are observed.
OpenMEEG: opensource software for quasistatic bioelectromagnetics.
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2010-09-06
Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run either with a constant number of mesh nodes or a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified in three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with the isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others.
As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
New perspectives on constant-roll inflation
NASA Astrophysics Data System (ADS)
Cicciarella, Francesco; Mabillard, Joel; Pieroni, Mauro
2018-01-01
We study constant-roll inflation using the β-function formalism. We show that the constant rate of the inflaton roll is translated into a first order differential equation for the β-function which can be solved easily. The solutions to this equation correspond to the usual constant-roll models. We then construct, by perturbing these exact solutions, more general classes of models that satisfy the constant-roll equation asymptotically. In the case of an asymptotic power law solution, these corrections naturally provide an end to the inflationary phase. Interestingly, while from a theoretical point of view (in particular in terms of the holographic interpretation) these models are intrinsically different from standard slow-roll inflation, they may have phenomenological predictions in good agreement with present cosmological data.
Interstellar photoelectric absorption cross sections, 0.03-10 keV
NASA Technical Reports Server (NTRS)
Morrison, R.; Mccammon, D.
1983-01-01
An effective absorption cross section per hydrogen atom has been calculated as a function of energy in the 0.03-10 keV range using the most recent atomic cross section and cosmic abundance data. Coefficients of a piecewise polynomial fit to the numerical results are given to allow convenient application in automated calculations.
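Applying such a fit amounts to locating the energy segment and evaluating a low-order polynomial expression. The sketch below assumes the commonly quoted Morrison & McCammon form σ(E) = (c0 + c1·E + c2·E²)/E³ per segment, in units of 10⁻²⁴ cm² per H atom; the segment edges and coefficients shown are illustrative placeholders, not the published table, and the names `EDGES`, `COEFFS`, and `sigma` are hypothetical.

```python
import bisect

# Piecewise polynomial cross-section fit of the assumed form
#   sigma(E) = (c0 + c1*E + c2*E**2) / E**3   [1e-24 cm^2 per H atom].
# Segment edges (keV) and coefficients below are illustrative placeholders.
EDGES = [0.030, 0.100, 0.284, 0.400]
COEFFS = [                    # (c0, c1, c2) for each segment
    (17.3, 608.1, -2150.0),
    (34.6, 267.9, -476.1),
    (78.1, 18.8, 4.3),
]

def sigma(energy_kev):
    """Effective cross section per hydrogen atom, in units of 1e-24 cm^2."""
    if not (EDGES[0] <= energy_kev < EDGES[-1]):
        raise ValueError("energy outside fitted range")
    i = bisect.bisect_right(EDGES, energy_kev) - 1   # locate the segment
    c0, c1, c2 = COEFFS[i]
    return (c0 + c1 * energy_kev + c2 * energy_kev**2) / energy_kev**3
```

This table-lookup-plus-polynomial structure is what makes the fit convenient for automated absorption calculations.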
Bifurcation from an invariant to a non-invariant attractor
NASA Astrophysics Data System (ADS)
Mandal, D.
2016-12-01
Switching dynamical systems are very common in many areas of physics and engineering. We consider a piecewise linear map that periodically switches between two or more different functional forms. We show that in such systems it is possible to have a border collision bifurcation in which the system transits from an invariant attractor to a non-invariant attractor.
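A switching piecewise linear map of this kind can be iterated in a few lines. The sketch below alternates each step between two branch maps; the functional forms and parameters are hypothetical (chosen only so that the slopes are contracting), not the map studied in the paper.

```python
# Hypothetical example of a periodically switching piecewise linear map:
# on even steps apply f0, on odd steps apply f1. Each fi is itself
# piecewise linear with a border at x = 0.5.
def f0(x):
    return 0.5 * x + 0.2 if x < 0.5 else -0.3 * x + 0.6

def f1(x):
    return -0.4 * x + 0.3 if x < 0.5 else 0.6 * x - 0.1

def orbit(x0, n):
    """Iterate the switched system n times, returning the full orbit."""
    xs = [x0]
    for k in range(n):
        f = f0 if k % 2 == 0 else f1   # periodic switching between forms
        xs.append(f(xs[-1]))
    return xs
```

With contracting slopes the switched system settles onto a period-2 cycle; varying a border or slope parameter until an orbit point hits the border x = 0.5 is how a border collision bifurcation is located numerically.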
Numerical Recovering of a Speed of Sound by the BC-Method in 3D
NASA Astrophysics Data System (ADS)
Pestov, Leonid; Bolgova, Victoria; Danilin, Alexandr
We develop a numerical algorithm for solving the inverse problem for the wave equation by the Boundary Control method. The problem, which we refer to as the forward one, is an initial boundary value problem for the wave equation with zero initial data in a bounded domain. The inverse problem is to find the speed of sound c(x) from measurements of waves induced by a set of boundary sources. The time of observation is assumed to be greater than twice the acoustic radius of the domain. The numerical algorithm for sound-speed reconstruction is based on two steps. The first is to find a (sufficiently large) number of controls {f_j} (the basic control is defined by the position of the source and some time delay) that generate the same number of known harmonic functions, i.e. Δ {u_j}(.,T) = 0, where {u_j} is the wave generated by the control {f_j}. After that, a linear integral equation with respect to the speed of sound is obtained. A piecewise constant model of the speed is used. The result of numerical testing of a 3-dimensional model is presented.
Tuning the Fano factor of graphene via Fermi velocity modulation
NASA Astrophysics Data System (ADS)
Lima, Jonas R. F.; Barbosa, Anderson L. R.; Bezerra, C. G.; Pereira, Luiz Felipe C.
2018-03-01
In this work we investigate the influence of a Fermi velocity modulation on the Fano factor of periodic and quasi-periodic graphene superlattices. We consider the continuum model and use the transfer matrix method to solve the Dirac-like equation for graphene, where the electrostatic potential, energy gap, and Fermi velocity are piecewise constant functions of the position x. We find that in the presence of an energy gap it is possible to tune the energy of the Fano factor peak, and consequently the location of the Dirac point, by a modulation of the Fermi velocity. Hence, the peak of the Fano factor can be used experimentally to identify the Dirac point. We show that for higher values of the Fermi velocity the Fano factor goes below 1/3 at the Dirac point. Furthermore, we show that in periodic superlattices the location of the Fano factor peaks is symmetric under exchange of the Fermi velocities vA and vB, whereas introducing quasi-periodicity breaks this symmetry. The Fano factor usually holds a universal value for a specific transport regime, so the possibility of controlling it in graphene is a notable result.
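The transfer-matrix bookkeeping over piecewise constant regions has a simple generic structure: the total matrix is the ordered product of per-region matrices. The sketch below uses a plain plane-wave phase matrix as the per-region block, which is an assumption for illustration only; the Dirac-equation matrices for graphene with position-dependent Fermi velocity and gap are more involved.

```python
import numpy as np

# Generic skeleton of a transfer-matrix calculation across regions in which
# the physical parameters are piecewise constant. The per-region matrix here
# is a simple plane-wave phase matrix (an assumption, not the graphene form).
def region_matrix(k, d):
    """Propagation through one region of width d with wavenumber k."""
    return np.array([[np.exp(1j * k * d), 0.0],
                     [0.0, np.exp(-1j * k * d)]])

def total_transfer(ks, ds):
    """Ordered product of per-region matrices, first region applied first."""
    M = np.eye(2, dtype=complex)
    for k, d in zip(ks, ds):
        M = region_matrix(k, d) @ M
    return M
```

Transmission (and hence shot-noise quantities like the Fano factor) is then read off from the entries of the total matrix; only the per-region matrix changes when moving to the Dirac-like problem.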
Brittle failure of rock: A review and general linear criterion
NASA Astrophysics Data System (ADS)
Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan
2018-07-01
A failure criterion typically is phenomenological since few models exist to theoretically derive the mathematical function. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence on strength, a popular failure criterion is a linear equation in major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular nature of the failure surface is dictated by the experimental data.
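Checking such a linear criterion against a stress state is a one-line evaluation, and the piecewise approximation amounts to testing several planes. In the sketch below the criterion is written generically as f = a·s1 + b·s2 + c·s3 − 1 with failure when f ≥ 0; the mapping from the PMC constants (friction angles ϕc, ϕe and isotropic tensile strength V0) to (a, b, c) follows the paper's equations and is not reproduced here, so the coefficients used are hypothetical.

```python
# Generic linear (planar) failure criterion in principal stresses:
#   f = a*s1 + b*s2 + c*s3 - 1,  failure predicted when f >= 0.
# The (a, b, c) values in any example are hypothetical placeholders.
def linear_criterion(s1, s2, s3, a, b, c):
    return a * s1 + b * s2 + c * s3 - 1.0

def fails(s1, s2, s3, a, b, c):
    return linear_criterion(s1, s2, s3, a, b, c) >= 0.0

def piecewise_fails(stresses, planes):
    """Piecewise linear surface built from planes 'hugging' a curved surface:
    failure is predicted when any bounding plane is reached."""
    return any(fails(*stresses, *p) for p in planes)
```

Fitting two planes with six material parameters, as in the abstract, corresponds to passing two (a, b, c) triples in `planes`.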
Moving-window dynamic optimization: design of stimulation profiles for walking.
Dosen, Strahinja; Popović, Dejan B
2009-05-01
The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors)-output (intensity of muscle stimulation) representation of walking that serves in synthesizing a rule base for control of electrical stimulation for restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of tracking errors from desired trajectories, with a penalty function on the total muscle effort. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual and a model with parameters characterizing the potential hemiplegic user. The outputs of the simulation are piecewise constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
Time-Dependent Behavior of Diabase and a Nonlinear Creep Model
NASA Astrophysics Data System (ADS)
Yang, Wendong; Zhang, Qiangyong; Li, Shucai; Wang, Shugang
2014-07-01
Triaxial creep tests were performed on diabase specimens from the dam foundation of the Dagangshan hydropower station, and the typical characteristics of creep curves were analyzed. Based on the test results under different stress levels, a new nonlinear visco-elasto-plastic creep model with creep threshold and long-term strength was proposed by connecting an instantaneous elastic Hooke body, a visco-elasto-plastic Schiffman body, and a nonlinear visco-plastic body in series mode. By introducing the nonlinear visco-plastic component, this creep model can describe the typical creep behavior, which includes the primary creep stage, the secondary creep stage, and the tertiary creep stage. Three-dimensional creep equations under constant stress conditions were deduced. The yield approach index (YAI) was used as the criterion for the piecewise creep function to resolve the difficulty in determining the creep threshold value and the long-term strength. The expression of the visco-plastic component was derived in detail and the three-dimensional central difference form was given. An example was used to verify the credibility of the model. The creep parameters were identified, and the calculated curves were in good agreement with the experimental curves, indicating that the model is capable of replicating the physical processes.
SU-F-T-335: Piecewise Uniform Dose Prescription and Optimization Based On PET/CT Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, G; Liu, J
Purpose: In intensity modulated radiation therapy (IMRT), the tumor target volume is given a uniform dose prescription, which does not consider heterogeneous characteristics of the tumor such as hypoxia, clonogen density, radiosensitivity, and tumor proliferation rate. Our goal is to develop a nonuniform target dose prescription method that can spare organs at risk (OARs) better without decreasing the tumor control probability (TCP). Methods: We propose a piecewise uniform dose prescription (PUDP) based on PET/CT images of the tumor. First, we delineate biological target volumes (BTV) and sub-biological target volumes (sub-BTVs) with our Hierarchical Mumford-Shah Vector Model based on PET/CT images of the tumor. Then, in order to spare OARs better, we minimize the BTV mean dose while constraining the TCP to a constant. In this way we obtain a general formula for determining an optimal dose prescription based on a linear-quadratic (LQ) model. However, this dose prescription is highly heterogeneous and very difficult to deliver by IMRT. Therefore we propose to use the equivalent uniform dose (EUD) in each sub-BTV as its final dose prescription, which yields a PUDP for the BTV. Results: We evaluated the IMRT planning of a patient with nasopharyngeal carcinoma using both PUDP and UDP. The results show that the highest and mean doses inside the brain stem are 48.425 Gy and 19.151 Gy, respectively, when the PUDP is used for IMRT planning, versus 52.975 Gy and 20.0776 Gy when the UDP is used. Both of the resulting TCPs (0.9245, 0.9674) are higher than the theoretical TCP (0.8739) when 70 Gy is delivered to the BTV. Conclusion: Compared with the UDP, the PUDP can spare the OARs better, while the resulting TCP from the PUDP is not significantly lower than from the UDP.
This work was supported in part by National Natural Science Foundation of China under grant no. 61271382 and by the foundation for construction of a scientific project platform for the cancer hospital of Hunan province.
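The per-sub-volume EUD step can be illustrated with the generalized EUD (gEUD) formula of Niemierko, gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a) over dose bins with fractional volumes vᵢ. This is only a stand-in sketch: the abstract's EUD is derived from the LQ model, which differs, and the exponent `a` here is a tissue-specific parameter chosen by the user.

```python
# Generalized equivalent uniform dose (gEUD) over a dose distribution:
#   gEUD = (sum_i v_i * d_i**a) ** (1/a),  v_i = fractional volumes.
# Illustration only; the paper's EUD is LQ-model based, which differs.
def geud(doses, volumes, a):
    total = float(sum(volumes))                      # normalize volumes
    s = sum((v / total) * d**a for d, v in zip(doses, volumes))
    return s ** (1.0 / a)
```

By construction, a uniform dose distribution returns that same dose, and a = 1 reduces to the volume-weighted mean dose.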
Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods
NASA Astrophysics Data System (ADS)
Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.
2012-03-01
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and greatly degrade CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.
Integrate and fire neural networks, piecewise contractive maps and limit cycles.
Catsigeras, Eleonora; Guiraud, Pierre
2013-09-01
We study the global dynamics of integrate and fire neural networks composed of an arbitrary number of identical neurons interacting by inhibition and excitation. We prove that if the interactions are strong enough, then the support of the stable asymptotic dynamics consists of limit cycles. We also find sufficient conditions for the synchronization of networks containing excitatory neurons. The proofs are based on the analysis of the equivalent dynamics of a piecewise continuous Poincaré map associated to the system. We show that for efficient interactions the Poincaré map is piecewise contractive. Using this contraction property, we prove that there exists a countable number of limit cycles attracting all the orbits dropping into the stable subset of the phase space. This result applies not only to the Poincaré map under study, but also to a wide class of general n-dimensional piecewise contractive maps.
NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
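The single-tone frequency determination at the core of the method above can be done by several estimators; the simplest baseline is FFT peak picking, sketched below as a stand-in for the more refined techniques compared in the paper.

```python
import numpy as np

# Baseline single-tone frequency estimator: pick the largest FFT magnitude
# bin (excluding DC) and convert the bin index to a frequency in Hz.
# Resolution is limited to fs/N; the estimators compared in the paper
# refine on this.
def tone_freq(signal, fs):
    spec = np.abs(np.fft.rfft(signal))
    k = int(np.argmax(spec[1:])) + 1    # skip the DC bin
    return k * fs / len(signal)
```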
An analysis of value function learning with piecewise linear control
NASA Astrophysics Data System (ADS)
Tutsoy, Onder; Brown, Martin
2016-05-01
Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
Mean dyadic Green's function for a two layer random medium
NASA Technical Reports Server (NTRS)
Zuniga, M. A.
1981-01-01
The mean dyadic Green's function for a two-layer random medium with arbitrary three-dimensional correlation functions has been obtained with the zeroth-order solution to the Dyson equation by applying the nonlinear approximation. The propagation of the coherent wave in the random medium is similar to that in an anisotropic medium with different propagation constants for the characteristic transverse electric and transverse magnetic polarizations. In the limit of a laminar structure, two propagation constants for each polarization are found to exist.
Seidu, Issaka; Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2012-03-08
The performance of the second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) for the calculation of the exchange coupling constant (J) is assessed by application to a series of triply bridged Cu(II) dinuclear complexes. A comparison of the J values based on SF-CV(2)-DFT with those obtained by the broken symmetry (BS) DFT method and experiment is provided. It is demonstrated that our methodology constitutes a viable alternative to the BS-DFT method. The strong dependence of the calculated exchange coupling constants on the applied functionals is demonstrated. Both SF-CV(2)-DFT and BS-DFT afford the best agreement with experiment for hybrid functionals.
Trajectory fitting in function space with application to analytic modeling of surfaces
NASA Technical Reports Server (NTRS)
Barger, Raymond L.
1992-01-01
A theory for representing a parameter-dependent function as a function trajectory is described. Additionally, a theory for determining a piecewise analytic fit to the trajectory is described. An example is given that illustrates the application of the theory to generating a smooth surface through a discrete set of input cross-section shapes. A simple procedure for smoothing in the parameter direction is discussed, and a computed example is given. Application of the theory to aerodynamic surface modeling is demonstrated by applying it to a blended wing-fuselage surface.
CAFE: A New Relativistic MHD Code
NASA Astrophysics Data System (ADS)
Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.
2015-06-01
We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.
NASA Technical Reports Server (NTRS)
Dame, L. T.; Stouffer, D. C.
1986-01-01
A tool for the mechanical analysis of nickel-base single crystal superalloys, specifically Rene N4, used in gas turbine engine components is developed. This is achieved by a rate-dependent anisotropic constitutive model implemented in a nonlinear three-dimensional finite element code. The constitutive model is developed from metallurgical concepts utilizing a crystallographic approach. A non-Schmid's-law formulation is used to model the tension/compression asymmetry and orientation dependence in octahedral slip. Schmid's law is a good approximation to the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response, and strain rate sensitivity of these alloys. Methods for deriving the material constants from standard tests are presented. The finite element implementation utilizes an initial strain method and twenty-noded isoparametric solid elements. The ability to model piecewise linear load histories is included in the finite element code. The constitutive equations are accurately and economically integrated using a second-order Adams-Moulton predictor-corrector method with a dynamic time-incrementing procedure. Computed results from the finite element code are compared with experimental data for tensile, creep, and cyclic tests at 760 deg C. The strain rate sensitivity and stress relaxation capabilities of the model are evaluated.
NASA Astrophysics Data System (ADS)
Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan
2016-12-01
For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous piecewise linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has recently been proposed in the design of a fourth-order time accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function. However, a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations, due to the change of governing equations from hyperbolic to parabolic type and the initial interface discontinuity. This problem remains distinctively for hypersonic viscous and heat conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. 
The time-dependent GKS flux function provides a dynamic process of evolution from kinetic-scale particle free transport to hydrodynamic-scale wave propagation, which provides the physics connecting the non-equilibrium numerical shock structure to the near-equilibrium NS solution. As a result, with the implementation of the fifth-order WENO initial reconstruction, in the smooth region the current two-stage GKS provides an accuracy of O((Δx)^5, (Δt)^4) for the Euler equations, and O((Δx)^5, τ^2 Δt) for the NS equations, where τ is the time between particle collisions. Many numerical tests, including difficult ones for Navier-Stokes solvers, have been used to validate the current method. Accurate numerical solutions can be obtained from the high Reynolds number boundary layer to hypersonic viscous heat conducting flow. Following the two-stage time-stepping framework, the third-order GKS flux function can be used as well to construct a fifth-order method with the usage of both first-order and second-order time derivatives of the flux function. The use of a time-accurate flux function may have great advantages for the development of higher-order CFD methods.
Kepner, Gordon R
2014-08-27
This study uses dimensional analysis to derive the general second-order differential equation that underlies numerous physical and natural phenomena described by common mathematical functions. It eschews assumptions about empirical constants and mechanisms. It relies only on the data plot's mathematical properties to provide the conditions and constraints needed to specify a second-order differential equation that is free of empirical constants for each phenomenon. A practical example of each function is analyzed using the general form of the underlying differential equation and the observable unique mathematical properties of each data plot, including boundary conditions. This yields a differential equation that describes the relationship among the physical variables governing the phenomenon's behavior. Complex phenomena such as the Standard Normal Distribution, the Logistic Growth Function, and Hill Ligand binding, which are characterized by data plots of distinctly different sigmoidal character, are readily analyzed by this approach. It provides an alternative, simple, unifying basis for analyzing each of these varied phenomena from a common perspective that ties them together and offers new insights into the appropriate empirical constants for describing each phenomenon.
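The idea that common functions satisfy constant-free differential equations can be checked numerically. The sketch below verifies that the standard logistic function y(x) = 1/(1 + e^(-x)) satisfies y' = y(1 - y); the paper works with the general second-order form, but the first-order relation is shown here for brevity.

```python
import math

# Numerical check that the logistic function satisfies a differential
# equation free of empirical constants: y' = y * (1 - y).
def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def dlogistic(x, h=1e-6):
    # central finite difference approximation of y'(x)
    return (logistic(x + h) - logistic(x - h)) / (2.0 * h)
```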
Nonperturbative Quantum Physics from Low-Order Perturbation Theory.
Mera, Héctor; Pedersen, Thomas G; Nikolić, Branislav K
2015-10-02
The Stark effect in hydrogen and the cubic anharmonic oscillator furnish examples of quantum systems where the perturbation results in a certain ionization probability by tunneling processes. Accordingly, the perturbed ground-state energy is shifted and broadened, thus acquiring an imaginary part which is considered to be a paradigm of nonperturbative behavior. Here we demonstrate how the low order coefficients of a divergent perturbation series can be used to obtain excellent approximations to both real and imaginary parts of the perturbed ground state eigenenergy. The key is to use analytic continuation functions with a built-in singularity structure within the complex plane of the coupling constant, which is tailored by means of Bender-Wu dispersion relations. In the examples discussed the analytic continuation functions are Gauss hypergeometric functions, which take as input fourth order perturbation theory and return excellent approximations to the complex perturbed eigenvalue. These functions are Borel consistent and dramatically outperform widely used Padé and Borel-Padé approaches, even for rather large values of the coupling constant.
Cauchy problem with general discontinuous initial data along a smooth curve for 2-d Euler system
NASA Astrophysics Data System (ADS)
Chen, Shuxing; Li, Dening
2014-09-01
We study the Cauchy problems for the isentropic 2-d Euler system with discontinuous initial data along a smooth curve. All three singularities are present in the solution: shock wave, rarefaction wave and contact discontinuity. We show that the usual restrictive high order compatibility conditions for the initial data are automatically satisfied. The local existence of piecewise smooth solution containing all three waves is established.
Uniformly high-order accurate non-oscillatory schemes, 1
NASA Technical Reports Server (NTRS)
Harten, A.; Osher, S.
1985-01-01
The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws were begun. These schemes share many desirable properties with total variation diminishing (TVD) schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. A uniformly second-order approximation was constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
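A minimal example of a nonoscillatory piecewise linear reconstruction from cell averages uses the minmod-limited slope, a standard choice shown here for illustration; the paper's construction, which preserves uniform second-order accuracy at extrema, is more elaborate.

```python
# Minmod-limited piecewise linear reconstruction from cell averages:
# in each interior cell the solution is avgs[i] + slope * (x - x_i),
# with the slope limited so no new extrema are created.
def minmod(a, b):
    if a * b <= 0.0:
        return 0.0                      # opposite signs: flatten (extremum)
    return a if abs(a) < abs(b) else b  # same sign: take the smaller slope

def reconstruct(avgs, dx):
    """Return (value, slope) pairs for the interior cells."""
    out = []
    for i in range(1, len(avgs) - 1):
        slope = minmod((avgs[i] - avgs[i - 1]) / dx,
                       (avgs[i + 1] - avgs[i]) / dx)
        out.append((avgs[i], slope))
    return out
```

At a local extremum the two one-sided slopes disagree in sign, so minmod returns zero and the reconstruction degenerates to first order there, which is exactly the accuracy loss the uniformly second-order schemes of the paper avoid.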
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
Swimming like algae: biomimetic soft artificial cilia
Sareh, Sina; Rossiter, Jonathan; Conn, Andrew; Drescher, Knut; Goldstein, Raymond E.
2013-01-01
Cilia are used effectively in a wide variety of biological systems from fluid transport to thrust generation. Here, we present the design and implementation of artificial cilia, based on a biomimetic planar actuator using soft-smart materials. This actuator is modelled on the cilia movement of the alga Volvox, and represents the cilium as a piecewise constant-curvature robotic actuator that enables the subsequent direct translation of natural articulation into a multi-segment ionic polymer metal composite actuator. It is demonstrated how the combination of an optimal segmentation pattern and biologically derived per-segment driving signals reproduces natural ciliary motion. The amenability of the artificial cilia to scaling is also demonstrated through comparison of the Reynolds number achieved with that of natural cilia. PMID:23097503
The structure and statistics of interstellar turbulence
NASA Astrophysics Data System (ADS)
Kritsuk, A. G.; Ustyugov, S. D.; Norman, M. L.
2017-06-01
We explore the structure and statistics of multiphase, magnetized ISM turbulence in the local Milky Way by means of driven periodic box numerical MHD simulations. Using the higher-order-accurate piecewise-parabolic method on a local stencil (PPML), we carry out a small parameter survey varying the mean magnetic field strength and density while fixing the rms velocity to observed values. We quantify numerous characteristics of the transient and steady-state turbulence, including its thermodynamics and phase structure, kinetic and magnetic energy power spectra, structure functions, and distribution functions of density, column density, pressure, and magnetic field strength. The simulations reproduce many observables of the local ISM, including molecular clouds, such as the ratio of turbulent to mean magnetic field at 100 pc scale, the mass and volume fractions of thermally stable H I, the lognormal distribution of column densities, the mass-weighted distribution of thermal pressure, and the linewidth-size relationship for molecular clouds. Our models predict the shape of magnetic field probability density functions (PDFs), which are strongly non-Gaussian, and the relative alignment of magnetic field and density structures. Finally, our models show how the observed low rates of star formation per free-fall time are controlled by the multiphase thermodynamics and large-scale turbulence.
NASA Astrophysics Data System (ADS)
Filik, Tansu; Başaran Filik, Ümmühan; Nezih Gerek, Ömer
2017-11-01
In this study, new analytic models are proposed for mapping on-site global solar radiation values to electrical power output values in solar photovoltaic (PV) panels. The model extraction is achieved by simultaneously recording solar radiation and generated power from fixed and tracking panels, each with a capacity of 3 kW, in the Eskisehir (Turkey) region. It is shown that the relation between the solar radiation and the corresponding electric power is not only nonlinear, but also exhibits an interesting time-varying characteristic in the form of a hysteresis function. This observed radiation-to-power relation is then analytically modelled with three piece-wise function parts (corresponding to morning, noon and evening times), which is another novel contribution of this work. The model is determined for both fixed panels and panels with a tracking system. The panel system with a dynamic tracker in particular produces a harmonically richer (generally higher-valued) characteristic, so higher order polynomial models are necessary for the construction of analytical solar radiation models. The presented models, the characteristics of the hysteresis functions, and the differences between the fixed and solar-tracking panels are expected to provide valuable insight for further model-based research.
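The three-piece modelling idea can be sketched as fitting a separate polynomial per time segment. The breakpoints, polynomial degree, and synthetic data below are illustrative assumptions (the paper's actual segments involve hysteresis, which this sketch ignores):

```python
import numpy as np

def fit_three_piece(hours, radiation, power, breaks=(11.0, 15.0), deg=2):
    """Fit one polynomial per time segment (morning/noon/evening); returns a
    list of coefficient arrays. Breakpoints 'breaks' are assumed, not fitted."""
    edges = (-np.inf, *breaks, np.inf)
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (hours >= lo) & (hours < hi)
        models.append(np.polyfit(radiation[m], power[m], deg))
    return models
```

Each segment's model can then be evaluated with `np.polyval` to map a radiation value to a predicted power for that time of day.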
Constant-roll tachyon inflation and observational constraints
NASA Astrophysics Data System (ADS)
Gao, Qing; Gong, Yungui; Fei, Qin
2018-05-01
For constant-roll tachyon inflation, we derive analytical expressions for the scalar and tensor power spectra, the scalar and tensor spectral tilts, and the tensor-to-scalar ratio to first order in epsilon1 using the method of Bessel function approximation. Comparing the derived ns-r results with the observations, we find that only the constant-roll inflation with ηH being a constant is consistent with the observations, and the observations constrain the constant-roll inflation to be slow-roll inflation. The tachyon potential is also reconstructed for the constant-roll inflation consistent with the observations.
The NonConforming Virtual Element Method for the Stokes Equations
Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco
2016-01-01
In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Numerical solution of the Navier-Stokes equations by discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Krasnov, M. M.; Kuchugov, P. A.; E Ladonkina, M.; E Lutsky, A.; Tishkin, V. F.
2017-02-01
Detailed unstructured grids and numerical methods of high accuracy are frequently used in the numerical simulation of gasdynamic flows in areas with complex geometry. The Galerkin method with discontinuous basis functions, or Discontinuous Galerkin Method (DGM), works well in dealing with such problems. This approach offers a number of advantages inherent to both finite-element and finite-difference approximations. Moreover, the present paper shows that DGM schemes can be viewed as an extension of Godunov's method to piecewise-polynomial functions. As is known, DGM involves significant computational complexity, and this brings up the question of ensuring the most effective use of all the computational capacity available. In order to speed up the calculations, an operator programming method has been applied while creating the computational module. This approach makes possible compact encoding of mathematical formulas and facilitates the porting of programs to parallel architectures, such as NVidia CUDA and Intel Xeon Phi. With the software package based on DGM, numerical simulations of supersonic flow past solid bodies have been carried out. The numerical results are in good agreement with the experimental ones.
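The remark that DGM extends Godunov's method can be illustrated in the lowest-order case: with piecewise-constant (p = 0) basis functions and an upwind numerical flux, DG for linear advection reduces exactly to the first-order Godunov/upwind finite-volume update. A minimal sketch on a periodic grid (an assumed toy setup, not the paper's solver):

```python
import numpy as np

def dg_p0_step(u, a, dt, dx):
    """One p=0 DG (equivalently, first-order upwind/Godunov) step for
    u_t + a u_x = 0 on a periodic grid of cell averages u."""
    assert a > 0  # upwind direction assumed positive
    return u - a * dt / dx * (u - np.roll(u, 1))
```

With CFL number a*dt/dx = 1 the update shifts the cell averages by exactly one cell, the hallmark of the exact upwind solution.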
Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2011-11-14
We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave function based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional consistently yields J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics
Stock market context of the Lévy walks with varying velocity
NASA Astrophysics Data System (ADS)
Kutner, Ryszard
2002-11-01
We developed the most general Lévy walks with varying velocity, called the Weierstrass walks (WW) model for short, by which one can describe both stationary and non-stationary stochastic time series. We considered a non-Brownian random walk where the walker moves, in general, with a velocity that assumes a different constant value between successive turning points, i.e., the velocity is a piecewise constant function. This model is a kind of Lévy walk where we assume a hierarchical, self-similar in a stochastic sense, spatio-temporal representation of the main quantities such as the waiting-time distribution and the sojourn probability density (which are principal quantities in the continuous-time random walk formalism). The WW model makes it possible to analyze both the structure of the Hurst exponent and the power-law behavior of kurtosis. This structure results from the hierarchical, spatio-temporal coupling between the walker displacement and the corresponding time of the walks. The analysis uses both the fractional diffusion and the super Burnett coefficients. We constructed a diffusion phase diagram which distinguishes regions occupied by classes of different universality. We study only those classes which are characteristic of stationary situations. We thus have a model ready for describing data presented, e.g., in the form of moving averages; this operation is often used for stochastic time series, especially financial ones. The model was inspired by properties of financial time series and tested on empirical data extracted from the Warsaw stock exchange, since it offers an opportunity to study in an unbiased way several features of a stock exchange in its early stage.
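The piecewise-constant-velocity mechanism can be sketched as follows: at each turning point the walker draws a sojourn time from a hierarchical distribution and a constant velocity of random sign, then moves ballistically until the next turning point. The geometric hierarchy below is an illustrative stand-in for the paper's exact Weierstrass-type law:

```python
import numpy as np

def piecewise_velocity_walk(n_steps, rng=None, beta=1.5, b=2.0):
    """Walk with velocity held constant between successive turning points.
    Hierarchy level j is drawn geometrically; sojourn time tau = b**j.
    The distribution is illustrative, not the WW model's exact spectral law."""
    rng = np.random.default_rng(rng)
    t, x = [0.0], [0.0]
    for _ in range(n_steps):
        j = rng.geometric(1.0 - 1.0 / b**beta) - 1  # hierarchy level (0, 1, 2, ...)
        tau = b**j                                   # sojourn time at that level
        v = rng.choice([-1.0, 1.0])                  # piecewise constant velocity
        t.append(t[-1] + tau)
        x.append(x[-1] + v * tau)
    return np.array(t), np.array(x)
```

Because speed is unity between turning points, each spatial increment has exactly the magnitude of its time increment, the coupling responsible for the model's anomalous diffusion.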
Piecewise adiabatic following in non-Hermitian cycling
NASA Astrophysics Data System (ADS)
Gong, Jiangbin; Wang, Qing-hai
2018-05-01
The time evolution of periodically driven non-Hermitian systems is in general nonunitary but can be stable. It is hence of considerable interest to examine the adiabatic following dynamics in periodically driven non-Hermitian systems. We show in this work the possibility of piecewise adiabatic following interrupted by hopping between instantaneous system eigenstates. This phenomenon is first observed in a computational model and then theoretically explained, using an exactly solvable model, in terms of the Stokes phenomenon. In the latter case, the piecewise adiabatic following is shown to be a genuine critical behavior and the precise phase boundary in the parameter space is located. Interestingly, the critical boundary for piecewise adiabatic following is found to be unrelated to the domain for exceptional points. To characterize the adiabatic following dynamics, we also advocate a simple definition of the Aharonov-Anandan (AA) phase for nonunitary cyclic dynamics, which always yields real AA phases. In the slow driving limit, the AA phase reduces to the Berry phase if adiabatic following persists throughout the driving without hopping, but oscillates violently and does not approach any limit in cases of piecewise adiabatic following. This work exposes the rich features of nonunitary dynamics in cases of slow cycling and should stimulate future applications of nonunitary dynamics.
Reaction kinetics of resveratrol with tert-butoxyl radicals
NASA Astrophysics Data System (ADS)
Džeba, Iva; Pedzinski, Tomasz; Mihaljević, Branka
2012-09-01
The rate constant for the reaction of t-butoxyl radicals with resveratrol was studied under pseudo-first order conditions. It was determined by measuring the phenoxyl radical formation rate at 390 nm as a function of resveratrol concentration in acetonitrile, and was found to be 6.5×10⁸ M⁻¹ s⁻¹. This high value indicates high reactivity, consistent with the strong antioxidant activity of resveratrol.
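The pseudo-first-order procedure reduces to a linear fit: the observed rate constant grows linearly with the varied concentration, and the second-order rate constant is the slope. A minimal sketch (the concentrations and intercept below are synthetic, not the paper's measurements):

```python
import numpy as np

def second_order_k(concs, k_obs):
    """Second-order rate constant as the slope of observed pseudo-first-order
    rate constants k_obs versus substrate concentration."""
    slope, _intercept = np.polyfit(concs, k_obs, 1)
    return slope
```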
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model best captured the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
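The fixed-effects part of a piecewise linear growth model can be sketched with a linear-spline design matrix: a single knot splits the pre- and post-knot slopes. The knot location and data below are illustrative, and the per-student random effects of the full mixed-effects model are omitted:

```python
import numpy as np

def fit_piecewise_linear(t, y, knot):
    """OLS fit of y = b0 + b1*t + b2*(t - knot)_+ ; b2 is the change in slope
    after the knot. A mixed-effects version adds random effects on these terms."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # intercept, pre-knot slope, post-knot slope change
```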
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse and a fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
Evaluation of the kinetic oxidation of aqueous volatile organic compounds by permanganate.
Mahmoodlu, Mojtaba G; Hassanizadeh, S Majid; Hartog, Niels
2014-07-01
The use of permanganate solutions for in-situ chemical oxidation (ISCO) is a well-established groundwater remediation technology, particularly for targeting chlorinated ethenes. The kinetics of oxidation reactions is an important ISCO remediation design aspect that affects the efficiency and oxidant persistence. The overall rate of the ISCO reaction between oxidant and contaminant is typically described using a second-order kinetic model, while the second-order rate constant is determined experimentally by means of a pseudo-first-order approach. However, earlier studies of chlorinated hydrocarbons have yielded a wide range of values for the second-order rate constants. Also, there is limited insight into the kinetics of permanganate reactions with fuel-derived groundwater contaminants such as toluene and ethanol. In this study, batch experiments were carried out to investigate and compare the oxidation kinetics of aqueous trichloroethylene (TCE), ethanol, and toluene in an aqueous potassium permanganate solution. The overall second-order rate constants were determined directly by fitting a second-order model to the data, instead of the typically used pseudo-first-order approach. The second-order reaction rate constants (M⁻¹ s⁻¹) for TCE, toluene, and ethanol were 8.0×10⁻¹, 2.5×10⁻⁴, and 6.5×10⁻⁴, respectively. Results showed that the inappropriate use of the pseudo-first-order approach in several previous studies produced biased estimates of the second-order rate constants. In our study, this error was expressed as a function of the extent (P/N) to which the reactant concentrations deviated from the stoichiometric ratio of each oxidation reaction. The error associated with the inappropriate use of the pseudo-first-order approach is negatively correlated with the P/N ratio and reached up to 25% of the estimated second-order rate constant in some previous studies of TCE oxidation. Based on our results, a similar relation is valid for the other volatile organic compounds studied. Copyright © 2013 Elsevier B.V. All rights reserved.
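Why the pseudo-first-order shortcut biases estimates can be seen from the integrated second-order rate law: the closed form below (for unequal initial concentrations, stoichiometry n) only decays as a single exponential when the oxidant is in large excess. All symbols and values here are illustrative, not the study's data:

```python
import numpy as np

def second_order_conc(t, k, ox0, c0, n=1.0):
    """Closed-form contaminant concentration [C](t) for d[C]/dt = -k*[Ox]*[C]
    with [Ox] = n*[C] + d and d = ox0 - n*c0 (assumes d != 0)."""
    d = ox0 - n * c0
    return c0 * d / (ox0 * np.exp(k * d * t) - n * c0)
```

When ox0 >> c0 this reduces to c0*exp(-k*ox0*t), the pseudo-first-order limit; away from that limit, fitting ln[C] versus t yields a biased rate constant.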
Sardi, Florencia; Manta, Bruno; Portillo-Ledesma, Stephanie; Knoops, Bernard; Comini, Marcelo A; Ferrer-Sueta, Gerardo
2013-04-01
A method based on the differential reactivity of thiol and thiolate with monobromobimane (mBBr) has been developed to measure nucleophilicity and acidity of protein and low-molecular-weight thiols. Nucleophilicity of the thiolate is measured as the pH-independent second-order rate constant of its reaction with mBBr. The ionization constants of the thiols are obtained through the pH dependence of either the second-order rate constant or the initial rate of reaction. For readily available thiols, the apparent second-order rate constant is measured at different pHs and then plotted and fitted to an appropriate pH function describing the observed number of ionization equilibria. For less available thiols, such as protein thiols, the initial rate of reaction is determined over a wide range of pHs and fitted to the appropriate pH function. The method presented here shows excellent sensitivity, allowing the use of nanomolar concentrations of reagents. The method is suitable for scaling and high-throughput screening. Example determinations of nucleophilicity and pKa are presented for captopril and cysteine as low-molecular-weight thiols and for human peroxiredoxin 5 and Trypanosoma brucei monothiol glutaredoxin 1 as protein thiols. Copyright © 2013 Elsevier Inc. All rights reserved.
An unsteady lifting surface method for single rotation propellers
NASA Technical Reports Server (NTRS)
Williams, Marc H.
1990-01-01
The mathematical formulation of a lifting surface method for evaluating the steady and unsteady loads induced on single rotation propellers by blade vibration and inflow distortion is described. The scheme is based on 3-D linearized compressible aerodynamics and presumes that all disturbances are simple harmonic in time. This approximation leads to a direct linear integral relation between the normal velocity on the blade (which is determined from the blade geometry and motion) and the distribution of pressure difference across the blade. This linear relation is discretized by breaking the blade up into subareas (panels) on which the pressure difference is treated as approximately constant, and constraining the normal velocity at one (control) point on each panel. The piece-wise constant loads can then be determined by Gaussian elimination. The resulting blade loads can be used in performance, stability and forced response predictions for the rotor. Mathematical and numerical aspects of the method are examined. A selection of results obtained from the method is presented. The appendices include various details of the derivation that were felt to be secondary to the main development in Section 1.
Metamaterial devices for molding the flow of diffuse light (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wegener, Martin
2016-09-01
Much of optics in the ballistic regime is about designing devices to mold the flow of light. This task is accomplished via specific spatial distributions of the refractive index or the refractive-index tensor. For light propagating in turbid media, a corresponding design approach has not been applied previously. Here, we review our corresponding recent work in which we design spatial distributions of the light diffusivity or the light-diffusivity tensor to accomplish specific tasks. As an application, we realize cloaking of metal contacts on large-area OLEDs, eliminating the contacts' shadows, thereby homogenizing the diffuse light emission. In more detail, metal contacts on large-area organic light-emitting diodes (OLEDs) are mandatory electrically, but they cast optical shadows, leading to unwanted spatially inhomogeneous diffuse light emission. We show that the contacts can be made invisible either by (i) laminate metamaterials designed by coordinate transformations of the diffusion equation or by (ii) triangular-shaped regions with piecewise constant diffusivity, hence constant concentration of scattering centers. These structures are post-optimized in regard to light throughput by Monte-Carlo ray-tracing simulations and successfully validated by model experiments.
NASA Astrophysics Data System (ADS)
Cervellati, R.; Degli Esposti, A.; Lister, D. G.; Lopez, J. C.; Alonso, J. L.
1986-10-01
The microwave spectrum of 2,3-dihydrofuran has been reinvestigated and measurements for the ground and first five excited states of the ring puckering vibration have been extended to higher frequencies and rotational quantum numbers in order to study the vibrational dependence of the rotational and centrifugal distortion constants. The ring puckering potential function derived by Green from the far infrared spectrum does not reproduce the vibrational dependence of the rotational constants well. A slightly different potential function is derived which gives a reasonable fit both to the far infrared spectrum and the rotational constants. This changes the barrier to ring inversion from 1.00 kJ mol⁻¹ to 1.12 kJ mol⁻¹. The vibrational dependence of the centrifugal distortion constants is accounted for satisfactorily by the theory developed by Creswell and Mills. An attempt to reproduce the vibrational dependence of the rotational and centrifugal distortion constants using the ring puckering potential function and a simple model for this vibration has very limited success.
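A one-dimensional ring-puckering model of this kind can be sketched numerically: a quartic-quadratic double-well potential V(x) = a·x⁴ − b·x² has inversion barrier b²/(4a), and its vibrational levels follow from diagonalizing a finite-difference Hamiltonian. Dimensionless units; the coefficients are illustrative, not the fitted 2,3-dihydrofuran values:

```python
import numpy as np

def barrier_height(a, b):
    """Inversion barrier of V(x) = a*x**4 - b*x**2 (a, b > 0): V(0) - V(x_min)."""
    return b * b / (4.0 * a)

def puckering_levels(a, b, n=800, xmax=5.0):
    """Lowest eigenvalues of H = -d^2/dx^2 + a*x^4 - b*x^2 on a uniform grid
    via a second-order finite-difference Hamiltonian."""
    x = np.linspace(-xmax, xmax, n)
    h = x[1] - x[0]
    diag = 2.0 / h**2 + a * x**4 - b * x**2
    off = -np.ones(n - 1) / h**2
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:6]
```

For b = 0 this reduces to the pure quartic oscillator, whose ground-state energy in these units is approximately 1.0604, a convenient check on the grid.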
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
Structure Function Scaling Exponent and Intermittency in the Wake of a Wind Turbine Array
NASA Astrophysics Data System (ADS)
Aseyev, Aleksandr; Ali, Naseem; Cal, Raul
2015-11-01
Hot-wire measurements obtained in a 3 × 3 wind turbine array boundary layer are utilized to analyze high order structure functions, intermittency effects, as well as the probability density functions of velocity increments at different scales within the energy cascade. The intermittency exponent is found to be greater in the far wake region in comparison to the near wake. At hub height, the intermittency exponent is found to be null. ESS scaling exponents of the second, fourth, and fifth order structure functions remain relatively constant as a function of height in the far wake, whereas in the near wake these are highly affected by the passage of the rotor, thus showing a dependence on physical location. Proposed models generally overpredict the structure functions in the far wake region. The pdf distributions in the far wake region display wider tails compared to the near wake region, and the constant skewness hypothesis based on local isotropy is verified in the wake. CBET-1034581.
Computation of free oscillations of the earth
Buland, Raymond P.; Gilbert, F.
1984-01-01
Although free oscillations of the Earth may be computed by many different methods, numerous practical considerations have led us to use a Rayleigh-Ritz formulation with piecewise cubic Hermite spline basis functions. By treating the resulting banded matrix equation as a generalized algebraic eigenvalue problem, we are able to achieve great accuracy and generality and a high degree of automation at a reasonable cost. ?? 1984.
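The banded matrix equation of a Rayleigh-Ritz formulation is a generalized algebraic eigenvalue problem K v = λ M v. For brevity, the sketch below assembles linear "hat" elements for the model problem −u″ = λu on [0, π] (exact eigenvalues 1, 4, 9, …); the paper uses piecewise cubic Hermite splines on the Earth model instead:

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh_ritz_modes(n=200):
    """Generalized eigenproblem K v = lam M v from a Ritz discretization of
    -u'' = lam*u on [0, pi] with homogeneous Dirichlet boundary conditions."""
    h = np.pi / n
    m = n - 1  # interior nodes
    K = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h                      # stiffness
    M = (np.diag(4.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) * h / 6.0                # consistent mass
    return eigh(K, M, eigvals_only=True)
```

Treating the pair (K, M) together, rather than inverting M, is what preserves the banded structure and accuracy the abstract alludes to.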
NASA Astrophysics Data System (ADS)
Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; Prestridge, Katherine; Adrian, Ronald J.
2018-07-01
We introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and the drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and to the function used to describe C_D, creating high levels of relative error when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. We apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on the relaxation kinematics and drag coefficient of micron-sized particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com; Institut Matematik Kejuruteraan; Saaban, Azizan, E-mail: azizan.s@uum.edu.my
Solar radiation values were recorded by an automatic weather station using a device called a pyranometer. The device records all dispersed radiation values, and these data are very useful for experimental work and solar device development. In addition, complete data observation is needed for modeling and designing solar radiation system applications. Unfortunately, incomplete solar radiation records frequently occur due to several technical problems, mainly attributable to the monitoring device. To address this, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, such as linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes extended work investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimator tools. As a result, the cubic Bezier and Said-Ball methods perform the best compared to the other piecewise imputation techniques.
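Gap imputation by piecewise interpolation over the observed points can be sketched with SciPy, where `kind` selects the linear, nearest-neighbor, or cubic variants compared above (the Bezier and Said-Ball estimators are not in SciPy and are omitted here):

```python
import numpy as np
from scipy.interpolate import interp1d

def impute(hours, values, kind="linear"):
    """Fill NaN gaps in an hourly series by piecewise interpolation through
    the observed samples; gaps must lie inside the observed range."""
    obs = ~np.isnan(values)
    f = interp1d(hours[obs], values[obs], kind=kind)
    out = values.copy()
    out[~obs] = f(hours[~obs])
    return out
```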
Sullivan, Amanda L; Kohli, Nidhi; Farnsworth, Elyse M; Sadeh, Shanna; Jones, Leila
2017-09-01
Accurate estimation of developmental trajectories can inform instruction and intervention. We compared the fit of linear, quadratic, and piecewise mixed-effects models of reading development among students with learning disabilities relative to their typically developing peers. We drew an analytic sample of 1,990 students from the nationally representative Early Childhood Longitudinal Study-Kindergarten Cohort of 1998, using reading achievement scores from kindergarten through eighth grade to estimate three models of students' reading growth. The piecewise mixed-effects models provided the best functional form of the students' reading trajectories as indicated by model fit indices. Results showed slightly different trajectories between students with learning disabilities and without disabilities, with varying but divergent rates of growth throughout elementary grades, as well as an increasing gap over time. These results highlight the need for additional research on appropriate methods for modeling reading trajectories and the implications for students' response to instruction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Comparison of methods for estimating the attributable risk in the context of survival analysis.
Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M
2017-01-23
The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up for a sample size of 1,000 especially. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. 
In practice, our study suggests using the semiparametric or parametric approach to estimate AR as a function of time in cohort studies when the proportional hazards assumption appears appropriate.
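As a sketch of the parametric ingredients above, the survival function under a piecewise constant hazard, combined with a hazard ratio and a baseline exposure prevalence, yields an AR(t) of the form (F(t) - F0(t))/F(t). This minimal stand-in assumes proportional hazards; the cut points, rates, and prevalence are invented, and it is not the authors' estimator:

```python
import math

def surv_pch(t, cuts, rates):
    """Survival S(t) = exp(-H(t)) under a piecewise constant hazard.
    cuts: interval start points [0, t1, ...] (list); rates: hazard per interval."""
    H = 0.0
    for start, end, lam in zip(cuts, cuts[1:] + [float("inf")], rates):
        if t <= start:
            break
        H += lam * (min(t, end) - start)
    return math.exp(-H)

def attributable_risk(t, cuts, rates, hr, prev):
    """AR(t) = (F(t) - F0(t)) / F(t): share of cases by time t attributable to a
    binary exposure with prevalence `prev` and hazard ratio `hr`, assuming
    proportional hazards on a piecewise constant baseline (illustrative only)."""
    F0 = 1.0 - surv_pch(t, cuts, rates)                      # unexposed
    F1 = 1.0 - surv_pch(t, cuts, [r * hr for r in rates])    # exposed
    F = prev * F1 + (1.0 - prev) * F0                        # whole population
    return (F - F0) / F
```

A hazard ratio of 1 makes the exposed and unexposed cumulative incidences coincide, so AR(t) is 0, as expected.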
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lee, Hong-Tao
1989-01-01
A new approach for determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb the piece-wise linear functions of transmission errors caused by gear misalignment, thereby reducing gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing and bearing contact and for determination of transmission errors for misaligned gears has been developed.
Class Identification Efficacy in Piecewise GMM with Unknown Turning Points
ERIC Educational Resources Information Center
Ning, Ling; Luo, Wen
2018-01-01
Piecewise GMM with unknown turning points is a new procedure to investigate heterogeneous subpopulations' growth trajectories consisting of distinct developmental phases. Unlike the conventional PGMM, which relies on theory or experiment design to specify turning points a priori, the new procedure allows for an optimal location of turning points…
Forward Field Computation with OpenMEEG
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2011-01-01
To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0. PMID:21437231
Experiments on Maxwell's fish-eye dynamics in elastic plates
NASA Astrophysics Data System (ADS)
Lefebvre, Gautier; Dubois, Marc; Beauvais, Romain; Achaoui, Younes; Ing, Ros Kiri; Guenneau, Sébastien; Sebbah, Patrick
2015-01-01
We experimentally demonstrate that a Duraluminium thin plate with a thickness profile varying radially in a piecewise constant fashion as h(r) = h(0)(1 + (r/Rmax)^2)^2, with h(0) = 0.5 mm, h(Rmax) = 2 mm, and Rmax = 10 cm, behaves in many ways as Maxwell's fish-eye lens in optics. Its imaging properties for a Gaussian pulse with central frequencies 30 kHz and 60 kHz are very similar to those predicted by ray trajectories (great circles) on a virtual sphere (rays emanating from the North pole meet at the South pole). However, the refocusing time depends on the carrier frequency as a direct consequence of the dispersive nature of flexural waves in thin plates. Importantly, experimental results are in good agreement with finite-difference-time-domain simulations.
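The quoted thickness profile can be checked directly against the stated end values; this snippet only evaluates h(r) with the parameters given in the abstract:

```python
def thickness(r, h0=0.5e-3, r_max=0.10):
    """Radial thickness profile h(r) = h(0) * (1 + (r/Rmax)^2)^2 of the plate,
    in metres; h0 and r_max default to the values quoted in the abstract."""
    return h0 * (1.0 + (r / r_max) ** 2) ** 2
```

At the rim the profile gives h(Rmax) = 0.5 mm x (1 + 1)^2 = 2 mm, matching the stated boundary value.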
A model of the wall boundary layer for ducted propellers
NASA Technical Reports Server (NTRS)
Eversman, Walter; Moehring, Willi
1987-01-01
The objective of the present study is to include a representation of a wall boundary layer in an existing finite element model of the propeller in the wind tunnel environment. The major consideration is that the new formulation should introduce only modest alterations in the numerical model and should still be capable of producing economical predictions of the radiated acoustic field. This is accomplished by using a stepped approximation in which the velocity profile is piecewise constant in layers. In the limit of infinitesimally thin layers, the velocity profile of the stepped approximation coincides with that of the continuous profile. The approach described here could also be useful in modeling the boundary layer in other duct applications, particularly in the computation of the radiated acoustic field for sources contained in a duct.
Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei
2016-09-01
For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piece-wise constant signal. At first, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena including polarized motion orientations and periodic response death are discussed.
Synthetic optimization of air turbine for dental handpieces.
Shi, Z Y; Dong, T
2014-01-01
A synthetic optimization of the Pelton air turbine in dental handpieces, addressing power output, compressed air consumption, and rotation speed simultaneously, is implemented by employing a standard design procedure and variable limits from practical dentistry. The Pareto optimal solution sets acquired by using the Normalized Normal Constraint method mainly comprise two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stalls at the lower boundary of the design space; the rotation speed is a constant value within the recommended range from the literature; and the blade tip clearance is insensitive to, while the nozzle radius increases with, the power output and the mass flow rate of compressed air, to which the remaining geometric dimensions show an opposite trend within their respective "pieces" compared to the nozzle radius.
NASA Astrophysics Data System (ADS)
Demissie, Taye B.
2017-11-01
The NMR chemical shifts and indirect spin-spin coupling constants of 12 molecules containing 29Si, 73Ge, 119Sn, and 207Pb [X(CCMe)4, Me2X(CCMe)2, and Me3XCCH] are presented. The results are obtained from non-relativistic as well as two- and four-component relativistic density functional theory (DFT) calculations. The scalar and spin-orbit relativistic contributions as well as the total relativistic corrections are determined. The main relativistic effect in these molecules is not due to spin-orbit coupling but rather to the scalar relativistic contraction of the s-shells. The correlation between the calculated and experimental indirect spin-spin coupling constants showed that the four-component relativistic density functional theory (DFT) approach using the Perdew's hybrid scheme exchange-correlation functional (PBE0; using the Perdew-Burke-Ernzerhof exchange and correlation functionals) gives results in good agreement with experimental values. The indirect spin-spin coupling constants calculated using the spin-orbit zeroth order regular approximation together with the hybrid PBE0 functional and the specially designed J-coupling (JCPL) basis sets are in good agreement with the results obtained from the four-component relativistic calculations. For the coupling constants involving the heavy atoms, the relativistic corrections are of the same order of magnitude compared to the non-relativistically calculated results. Based on the comparisons of the calculated results with available experimental values, the best results for all the chemical shifts and non-existing indirect spin-spin coupling constants for all the molecules are reported, hoping that these accurate results will be used to benchmark future DFT calculations. 
The present study also demonstrates that the four-component relativistic DFT method has reached a level of maturity that makes it a convenient and accurate tool to calculate indirect spin-spin coupling constants of "large" molecular systems involving heavy atoms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aleshin, S. S., E-mail: aless2001@mail.ru; Lobanov, A. E., E-mail: lobanov@phys.msu.ru; Kharlanov, O. G., E-mail: okharl@mail.ru
The effect of flavor day-night asymmetry is considered for solar neutrinos of energy about 1 MeV under the assumption that the electron-density distribution within the Earth is approximately piecewise continuous on the scale of the neutrino-oscillation length. In this approximation, the resulting asymmetry factor for beryllium neutrinos does not depend on the structure of the inner Earth's layers or on the properties of the detector used. Its numerical estimate is on the order of -4 × 10^-4, which is far beyond the reach of present-day experiments.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
1986-05-01
The neighborhood of corners of the domain, places where the type of the boundary condition changes, etc., is studied using the program PROBE of Noetic Technologies, St. Louis. ... Manual. Noetic Technologies Corp., St. Louis, Missouri (1985). [318] Szabó, B. A.: Implementation of a Finite Element Software System with h and p ...
Lorenzo, C F; Hartley, T T; Malti, R
2013-05-13
A new and simplified method for the solution of linear constant coefficient fractional differential equations of any commensurate order is presented. The solutions are based on the R-function and on specialized Laplace transform pairs derived from the principal fractional meta-trigonometric functions. The new method simplifies the solution of such fractional differential equations and presents the solutions in the form of real functions as opposed to fractional complex exponential functions, and thus is directly applicable to real-world physics.
Analysis of periodically excited non-linear systems by a parametric continuation technique
NASA Astrophysics Data System (ADS)
Padmanabhan, C.; Singh, R.
1995-07-01
The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems has been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations, and with some modifications obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair, which were missed previously in the literature.
Finally, one main limitation associated with the proposed procedure is discussed.
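The core idea of parametric continuation, warm-starting a solver as a parameter is stepped along a branch, can be reduced to a toy algebraic example. This natural-parameter sketch with Newton's method is far simpler than the shooting-based scheme of the paper, and the test function and step sizes are invented for illustration:

```python
def continuation(f, df, x0, lams, newton_iters=50):
    """Natural-parameter continuation: track a solution branch of f(x, lam) = 0
    with Newton's method, warm-started from the previous solution."""
    branch = []
    x = x0
    for lam in lams:
        for _ in range(newton_iters):     # Newton refinement at this parameter
            x -= f(x, lam) / df(x, lam)
        branch.append(x)
    return branch

# Toy branch: roots of x^3 - x + lam, followed from x = 1 at lam = 0.
f = lambda x, lam: x**3 - x + lam
df = lambda x, lam: 3.0 * x**2 - 1.0
lams = [i * 0.01 for i in range(30)]      # stays below the fold at lam ~ 0.385
branch = continuation(f, df, 1.0, lams)
```

Each point on the branch satisfies the equation to machine precision; near the fold, natural continuation fails and a shooting/arclength scheme like the paper's is needed instead.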
Chen, Kaisheng; Hou, Jie; Huang, Zhuyang; Cao, Tong; Zhang, Jihua; Yu, Yuan; Zhang, Xinliang
2015-02-09
We experimentally demonstrate an all-optical temporal computation scheme for solving 1st- and 2nd-order linear ordinary differential equations (ODEs) with tunable constant coefficients by using Fabry-Pérot semiconductor optical amplifiers (FP-SOAs). By changing the injection currents of the FP-SOAs, the constant coefficients of the differential equations are practically tuned. A quite large constant coefficient tunable range from 0.0026/ps to 0.085/ps is achieved for the 1st-order differential equation. Moreover, the constant coefficient p of the 2nd-order ODE solver can be continuously tuned from 0.0216/ps to 0.158/ps, correspondingly with the constant coefficient q varying from 0.0000494/ps(2) to 0.006205/ps(2). Additionally, a theoretical model that combines the carrier density rate equation of the semiconductor optical amplifier (SOA) with the transfer function of the Fabry-Pérot (FP) cavity is exploited to analyze the solving processes. For both 1st- and 2nd-order solvers, excellent agreement between the numerical simulations and the experimental results is obtained. The FP-SOA based all-optical differential-equation solvers can be easily integrated with other optical components based on InP/InGaAsP materials, such as lasers, modulators, photodetectors and waveguides, which can motivate the realization of complicated optical computing on a single integrated chip.
Interaction function of oscillating coupled neurons
Dodla, Ramana; Wilson, Charles J.
2013-01-01
Large scale simulations of electrically coupled neuronal oscillators often employ the phase coupled oscillator paradigm to understand and predict network behavior. We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that a reasonably good approximation is achieved with four Fourier modes comprising both sine and cosine terms. PMID:24229210
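A minimal numerical sketch of this computation: with invented piecewise linear stand-ins for the phase response curve Z and the voltage time course V, the interaction function is a circular cross-correlation over one period, and its discrete Fourier modes follow from an FFT. The triangular shapes below are illustrative, not the parameterization used in the paper:

```python
import numpy as np

N = 1024
t = np.linspace(0.0, 1.0, N, endpoint=False)   # one oscillation period, T = 1

# Hypothetical piecewise linear stand-ins for the phase response curve Z(t)
# and the voltage time course V(t): simple triangular waves.
Z = np.interp(t, [0.0, 0.25, 0.75, 1.0], [0.0, 1.0, -1.0, 0.0])
V = np.interp(t, [0.0, 0.5, 1.0], [-1.0, 1.0, -1.0])

def interaction_function(Z, V, n_phases=N):
    """H(phi) = (1/T) * integral of Z(t) * V(t + phi) dt, approximated by
    circular shifts of the sampled waveforms."""
    return np.array([np.mean(Z * np.roll(V, -k)) for k in range(n_phases)])

H = interaction_function(Z, V)
modes = np.fft.rfft(H) / len(H)   # discrete Fourier modes of H
```

Truncating `modes` after a few terms gives the low-order Fourier representation discussed in the abstract; for these zero-mean waveforms the DC mode vanishes.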
Bi-cubic interpolation for shift-free pan-sharpening
NASA Astrophysics Data System (ADS)
Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano
2013-12-01
Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels with odd lengths that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing zero- and two-degree polynomials, respectively, introduce shifts in the magnified images, which are sub-pixel in the case of interpolation by an even factor, as is most often the case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even lengths may be exploited to compensate the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even lengths. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
NASA Astrophysics Data System (ADS)
Cinal, M.
2010-01-01
It is found that for closed-l-shell atoms, the exact local exchange potential vx(r) calculated in the exchange-only Kohn-Sham (KS) scheme of the density functional theory (DFT) is very well represented within the region of every atomic shell by each of the suitably shifted potentials obtained with the nonlocal Fock exchange operator for the individual Hartree-Fock (HF) orbitals belonging to this shell. This newly revealed property is not related to the well-known steplike shell structure in the response part of vx(r), but it results from specific relations satisfied by the HF orbital exchange potentials. These relations explain the outstanding proximity of the occupied HF and exchange-only KS orbitals as well as the high quality of the Krieger-Li-Iafrate and localized HF (or, equivalently, common-energy-denominator) approximations to the DFT exchange potential vx(r). Another highly accurate representation of vx(r) is given by the continuous piecewise function built of shell-specific exchange potentials, each defined as the weighted average of the shifted orbital exchange potentials corresponding to a given shell. The constant shifts added to the HF orbital exchange potentials, to map them onto vx(r), are nearly equal to the differences between the energies of the corresponding KS and HF orbitals. It is discussed why these differences are positive and grow when the respective orbital energies become lower for inner orbitals.
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2016-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…
A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory
NASA Astrophysics Data System (ADS)
Stolk, Christiaan C.
2016-06-01
We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
Revealing Relationships among Relevant Climate Variables with Information Theory
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Golera, Anthony; Curry, Charles T.; Huyser, Karen A.; Wheeler, Kevin R.; Rossow, William B.
2005-01-01
The primary objective of the NASA Earth-Sun Exploration Technology Office is to understand the observed Earth climate variability, thus enabling the determination and prediction of the climate's response to both natural and human-induced forcing. We are currently developing a suite of computational tools that will allow researchers to calculate, from data, a variety of information-theoretic quantities such as mutual information, which can be used to identify relationships among climate variables, and transfer entropy, which indicates the possibility of causal interactions. Our tools estimate these quantities along with their associated error bars, the latter of which is critical for describing the degree of uncertainty in the estimates. This work is based upon optimal binning techniques that we have developed for piecewise-constant, histogram-style models of the underlying density functions. Two useful side benefits have already been discovered. The first allows a researcher to determine whether there exist sufficient data to estimate the underlying probability density. The second permits one to determine an acceptable degree of round-off when compressing data for efficient transfer and storage. We also demonstrate how mutual information and transfer entropy can be applied so as to allow researchers not only to identify relations among climate variables, but also to characterize and quantify their possible causal interactions.
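The histogram-based (piecewise-constant) estimate of mutual information described above can be sketched as follows. The bin count, sample size, and plain plug-in estimator (without the error bars the described tools provide) are illustrative choices, not the authors' optimal-binning method:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Mutual information I(X;Y) in bits from a 2-D histogram, i.e. a
    piecewise-constant model of the joint density (plug-in estimate)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                            # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)         # marginal of X
    py = pxy.sum(axis=0, keepdims=True)         # marginal of Y
    nz = pxy > 0                                # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=20000)
noise = rng.normal(size=20000)                  # independent of x
```

For a variable paired with itself the estimate equals the binned entropy (large), while for independent samples it is close to zero up to the small positive plug-in bias, which is why careful binning and error bars matter.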
NASA Astrophysics Data System (ADS)
Silva, Hector O.; Yunes, Nicolás
2018-01-01
Certain bulk properties of neutron stars, in particular their moment of inertia, rotational quadrupole moment and tidal Love number, when properly normalized, are related to one another in a nearly equation of state independent way. The goal of this paper is to test these relations with extreme equations of state at supranuclear densities constrained to satisfy only a handful of generic, physically sensible conditions. By requiring that the equation of state be (i) barotropic and (ii) its associated speed of sound be real, we construct a piecewise function that matches a tabulated equation of state at low densities, while matching a stiff equation of state parametrized by its sound speed in the high-density region. We show that the I-Love-Q relations hold to 1 percent with this class of equations of state, even in the extreme case where the speed of sound becomes superluminal and independently of the transition density. We also find further support for the interpretation of the I-Love-Q relations as an emergent symmetry due to the nearly constant eccentricity of isodensity contours inside the star. These results reinforce the robustness of the I-Love-Q relations against our current incomplete picture of physics at supranuclear densities, while strengthening our confidence in the applicability of these relations in neutron star astrophysics.
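A sketch of such a piecewise equation of state: below a matching density the pressure follows a simple low-density law (a quadratic polytrope stands in here for the tabulated EOS), and above it a stiff branch with constant squared sound speed cs2 = dp/drho. All numbers and the low-density form are illustrative, not the EOS family of the paper:

```python
def eos_pressure(rho, rho_m, p_m, cs2):
    """Piecewise barotropic EOS: a low-density stand-in law below the matching
    density rho_m, continued by a stiff constant-sound-speed branch above it."""
    K = p_m / rho_m**2                  # chosen so the branches match at rho_m
    if rho <= rho_m:
        return K * rho**2               # hypothetical low-density law
    return p_m + cs2 * (rho - rho_m)    # stiff branch with dp/drho = cs2
```

The construction enforces continuity of p at rho_m (a kink in dp/drho is allowed); taking cs2 = 1 in geometric units corresponds to the causal limit, and cs2 > 1 to the superluminal case the paper also explores.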
NASA Astrophysics Data System (ADS)
Samaille, T.; Colliot, O.; Cuingnet, R.; Jouvent, E.; Chabriat, H.; Dormont, D.; Chupin, M.
2012-02-01
White matter hyperintensities (WMH), commonly seen on FLAIR images in elderly people, are a risk factor for dementia onset and have been associated with motor and cognitive deficits. We present here a method to fully automatically segment WMH from T1 and FLAIR images. Iterative steps of nonlinear diffusion followed by watershed segmentation were applied on FLAIR images until convergence. The diffusivity function and associated contrast parameter were carefully designed to adapt to WMH segmentation. This resulted in piecewise constant images with enhanced contrast between lesions and surrounding tissues. Selection of WMH areas was based on two characteristics: (1) a threshold automatically computed for intensity selection, and (2) the main location of the areas in white matter. False positive areas were finally removed based on their proximity to the cerebrospinal fluid/grey matter interface. Evaluation was performed on 67 patients: 24 with amnestic mild cognitive impairment (MCI), from five different centres, and 43 with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoaraiosis (CADASIL) acquired in a single centre. Results showed excellent volume agreement with manual delineation (Pearson coefficient: r=0.97, p<0.001) and substantial spatial correspondence (Similarity Index: 72%+/-16%). Our method appeared robust to acquisition differences across the centres as well as to pathological variability.
Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data
Clark, Darin P.; Badea, Cristian T.
2014-01-01
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173
Applications of compressed sensing image reconstruction to sparse view phase tomography
NASA Astrophysics Data System (ADS)
Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian
2017-10-01
X-ray phase CT has the potential to give higher contrast in soft tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss of smooth intensity changes due to the piecewise constant nature of the image model. The second CS method is a relatively new approach which uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifact of the TV regularization. Both cost functions can be minimized by a very fast iterative reconstruction method. However, past research has not clearly demonstrated how much image quality difference occurs between the TV regularization and the nonlinear filter based CS in x-ray phase CT applications. We clarify this issue by applying the two CS approaches to the case of x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear filter based CS outperforms the TV regularization in terms of textures and smooth intensity changes.
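The piecewise constant bias of TV can be seen in a toy 1-D version. This gradient-descent sketch uses a smoothed TV penalty, not the fast solvers referenced above, and the signal and parameters are invented for illustration:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.3, eps=1e-2, iters=1000, step=0.05):
    """Smoothed 1-D TV denoising: minimize 0.5*||u - f||^2 + lam * sum of
    sqrt((Du)^2 + eps) by plain gradient descent (a toy stand-in solver)."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed TV term
        g = u - f                      # gradient of the data-fidelity term
        g[:-1] -= lam * w              # scatter -D^T w onto the grid
        g[1:] += lam * w
        u -= step * g
    return u

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0], 50)      # piecewise constant ground truth
noisy = clean + 0.2 * rng.normal(size=100)
denoised = tv_denoise_1d(noisy)
```

On a truly piecewise constant signal TV performs well, as here; on smooth ramps the same penalty produces the staircase artifact the abstract describes.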
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demissie, Taye B.
2015-12-31
This presentation demonstrates the relativistic effects on the spin-rotation constants, absolute nuclear magnetic resonance (NMR) shielding constants and shielding spans of 175LuX (X = 19F, 35Cl, 79Br, 127I) molecules. The results are obtained from calculations performed using density functional theory (non-relativistic and four-component relativistic) and coupled-cluster calculations. The spin-rotation constants are compared with available experimental values. In most of the molecules studied, relativistic effects make an order of magnitude difference in the NMR absolute shielding constants.
On a perturbed Sparre Andersen risk model with multi-layer dividend strategy
NASA Astrophysics Data System (ADS)
Yang, Hu; Zhang, Zhimin
2009-10-01
In this paper, we consider a perturbed Sparre Andersen risk model, in which the inter-claim times are generalized Erlang(n) distributed. Under the multi-layer dividend strategy, piece-wise integro-differential equations for the discounted penalty functions are derived, and a recursive approach is applied to express the solutions. A numerical example to calculate the ruin probabilities is given to illustrate the solution procedure.
Imaging Freeform Optical Systems Designed with NURBS Surfaces
2015-12-01
reflective, anastigmat 1 Introduction The imaging freeform optical systems described here are designed using non-uniform rational basis-spline (NURBS...from piecewise splines. Figure 1 shows a third-degree NURBS surface which is formed from cubic basis splines. The surface is defined by the set of...with mathematical details covered by Piegl and Tiller [7]. Compare this with Gaussian basis functions [8], where it is challenging to provide smooth
Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting
NASA Astrophysics Data System (ADS)
Palenichka, Roman M.; Zaremba, Marek B.
2003-03-01
Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is quite a rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. This consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, if necessary, adding new vertices in between when a given accuracy is not yet satisfied. Vertices of the initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maxima at the centers of the objects of interest.
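Orthogonal (total least squares) fitting of a single skeleton-segment direction can be sketched with an SVD; the multi-vertex refinement loop of the paper is omitted, and the 2-D points below are invented:

```python
import numpy as np

def orthogonal_fit(points):
    """Total least squares line fit: returns a point on the line (centroid) and
    a unit direction minimizing the mean squared orthogonal distance, via SVD."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]                    # principal direction of the point cloud

def mean_orthogonal_distance(points, c, d):
    """Mean perpendicular distance of 2-D points to the line through c along d."""
    r = points - c
    return float(np.mean(np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])))

# Invented test points lying exactly on the line y = 2x + 1.
pts = np.array([[t, 2.0 * t + 1.0] for t in range(10)])
c, d = orthogonal_fit(pts)
```

Unlike ordinary regression, which minimizes vertical residuals, this minimizes perpendicular distances, which is the criterion the vertex-adjustment step uses.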
Elastic constants and pressure derivative of elastic constants of Si1-xGex solid solution
NASA Astrophysics Data System (ADS)
Jivani, A. R.; Baria, J. K.; Vyas, P. S.; Jani, A. R.
2013-02-01
Elastic properties of the Si1-xGex solid solution with arbitrary (atomic) concentration x are studied using the pseudo-alloy atom model based on pseudopotential theory and on a higher-order perturbation scheme with the application of our own proposed model potential. We have used the local-field correction function proposed by Sarkar et al to study the Si-Ge system. The elastic constants and pressure derivatives of the elastic constants of the solid solution are investigated for different concentrations x of Ge. It is found in the present study that the calculated numerical values of the aforesaid physical properties of the Si-Ge system are functions of x. The elastic constants (C11, C12 and C44) decrease linearly with increasing concentration x, and the pressure derivatives of the elastic constants (C11, C12 and C44) increase with the concentration x of Ge. This study provides a better set of theoretical results for such solid solutions for further comparison with either theoretical or experimental results.
A green vehicle routing problem with customer satisfaction criteria
NASA Astrophysics Data System (ADS)
Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.
2016-12-01
This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. Meanwhile, the introduced model provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greenness of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into the other linear objectives. We present a mixed integer linear programming formulation for the S-GVRP. The model enriches managerial insights by providing trade-offs between customers' satisfaction, total costs, and emission levels. Finally, we provide a numerical study showing the applicability of the model.
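The PLF device used above to fold a soft time-window satisfaction measure into a linear model can be illustrated outside any solver. A minimal sketch; the breakpoints, values, and clamping behaviour below are hypothetical, not taken from the paper:

```python
from bisect import bisect_right

def plf(breakpoints, values, x):
    """Evaluate a piecewise linear function given sorted breakpoints.

    Outside the breakpoint range the function is clamped to the end values.
    """
    if x <= breakpoints[0]:
        return values[0]
    if x >= breakpoints[-1]:
        return values[-1]
    i = bisect_right(breakpoints, x) - 1
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return values[i] + t * (values[i + 1] - values[i])

# Hypothetical satisfaction curve over arrival time within a soft window:
# full satisfaction inside [10, 14], dropping linearly to 0 at 8 and 16.
bp, val = [8.0, 10.0, 14.0, 16.0], [0.0, 1.0, 1.0, 0.0]
```

In an MILP, each linear piece of such a curve becomes a constraint (e.g. via SOS2 variables over the breakpoints), which is what makes the fuzzy satisfaction term compatible with the other linear objectives.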
NASA Technical Reports Server (NTRS)
Horvath, P.; Latham, G. V.; Nakamura, Y.; Dorman, H. J.
1980-01-01
The horizontal-to-vertical amplitude ratios of the long-period seismograms are reexamined to determine the shear wave velocity distributions at the Apollo 12, 14, 15, and 16 lunar landing sites. Average spectral ratios, computed from a number of impact signals, were compared with spectral ratios calculated for the fundamental mode Rayleigh waves in media consisting of homogeneous, isotropic, horizontal layers. The shear velocities of the best fitting models at the different sites resemble each other and differ from the average for all sites by not more than 20% except for the bottom layer at station 14. The shear velocities increase from 40 m/s at the surface to about 400 m/s at depths between 95 and 160 m at the various sites. Within this depth range the velocity-depth functions are well represented by two piecewise linear segments, although the presence of first-order discontinuities cannot be ruled out.
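A two-segment velocity-depth description like the one above can be recovered from data by a simple breakpoint grid search. An illustrative pure-Python sketch, using independent per-segment ordinary least squares with no continuity constraint, on synthetic data rather than Apollo measurements:

```python
def ols(pts):
    """Ordinary least-squares intercept, slope, and SSE for (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    b = sum((x - mx) * (y - my) for x, y in pts) / sxx if sxx else 0.0
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in pts)
    return a, b, sse

def two_segment_fit(pts, candidates):
    """Grid-search a breakpoint, fitting each side by OLS independently."""
    best = None
    for zb in candidates:
        left = [p for p in pts if p[0] <= zb]
        right = [p for p in pts if p[0] > zb]
        if len(left) < 2 or len(right) < 2:
            continue
        a1, b1, s1 = ols(left)
        a2, b2, s2 = ols(right)
        if best is None or s1 + s2 < best[0]:
            best = (s1 + s2, zb, (a1, b1), (a2, b2))
    return best
```

A continuity constraint (shared value at the breakpoint) would turn this into a single three-parameter regression per candidate breakpoint; the grid-search outer loop stays the same.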
A Novel Approach to Model the Air-Side Heat Transfer in Microchannel Condensers
NASA Astrophysics Data System (ADS)
Martínez-Ballester, S.; Corberán, José-M.; Gonzálvez-Maciá, J.
2012-11-01
This work presents a model (Fin1D×3) for microchannel condensers and gas coolers. The paper focuses on the description of the novel approach employed to model the air-side heat transfer. The model applies a segment-by-segment discretization to the heat exchanger, adding in each segment a specific two-dimensional grid for the air flow and fin wall. Given this discretization, fin theory is applied by using a continuous piecewise function for the fin wall temperature. This implicitly takes into account the heat conduction between tubes along the fin and the influence of unmixed air on the heat capacity. The model has been validated against experimental data, with predicted capacity errors within ±5%. Differences in prediction results and computational cost were studied and compared with the authors' previous model (Fin2D) and with another simplified model. The simulation time of the proposed model was reduced by one order of magnitude with respect to Fin2D's while retaining the same accuracy.
Refined Zigzag Theory for Laminated Composite and Sandwich Plates
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2009-01-01
A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in the previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of these theories. This new theory requires only C(sup 0)-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.
ERIC Educational Resources Information Center
Hall, Peter M.; Spencer-Hall, Dee Ann
A study of two small-to-middle-sized midwestern school districts, each observed for over a year, shows that the negotiated order concept can provide a useful framework for viewing schools' organizational functions. According to the negotiated order concept, organizational relationships require constant negotiations concerning values, goals, rules,…
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolation with respect to the velocity of the control rods. Parallelism across time is achieved by applying the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
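The predictor-corrector structure of parareal is easy to show on a toy problem. A sketch for the scalar ODE y' = λy, with a one-step explicit-Euler coarse propagator and a finer Euler sweep standing in for the high-order fine solver; all names and step counts are illustrative, not the reactor code:

```python
def parareal(lam, y0, T, N, K, fine_substeps=100):
    """Parareal for y' = lam*y on [0, T]: N coarse windows, K iterations.

    G: one explicit-Euler step per window (coarse propagator).
    F: 'fine_substeps' Euler steps per window (fine propagator).
    """
    dt = T / N
    def G(y):
        return y * (1 + lam * dt)
    def F(y):
        h = dt / fine_substeps
        for _ in range(fine_substeps):
            y = y * (1 + lam * h)
        return y
    # Predictor: serial coarse sweep.
    U = [y0]
    for _ in range(N):
        U.append(G(U[-1]))
    # Corrector iterations; the F(...) solves are independent per window,
    # which is where the parallelism across time comes from.
    for _ in range(K):
        Fu = [F(u) for u in U[:-1]]
        Gu = [G(u) for u in U[:-1]]
        V = [y0]
        for n in range(N):
            V.append(G(V[-1]) + Fu[n] - Gu[n])
        U = V
    return U
```

After k iterations the first k windows match the serial fine solution exactly; with K = N the whole trajectory does, so the interest in practice is reaching acceptable accuracy with K much smaller than N.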
What Can Tobit-Piecewise Regression Tell Us about the Determinants of Household Educational Debt?
ERIC Educational Resources Information Center
Thipbharos, Titirut
2014-01-01
Educational debt as part of household debt remains a problem for Thailand. The significant factors of household characteristics with regard to educational debt are shown by constructing a Tobit-piecewise regression for three different clusters, namely poor, middle and affluent households in Thailand. It was found that household debt is likely to…
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2015-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
ERIC Educational Resources Information Center
Hindman, Annemarie H.; Cromley, Jennifer G.; Skibbe, Lori E.; Miller, Alison L.
2011-01-01
This article reviews the mechanics of conventional and piecewise growth models to demonstrate the unique affordances of each technique for examining the nature and predictors of children's early literacy learning during the transition from preschool through first grade. Using the nationally representative Family and Child Experiences Survey…
Quasi-conformal mapping with genetic algorithms applied to coordinate transformations
NASA Astrophysics Data System (ADS)
González-Matesanz, F. J.; Malpica, J. A.
2006-11-01
In this paper, piecewise conformal mapping for the transformation of geodetic coordinates is studied. An algorithm, which is an improved version of a previous algorithm published by Lippus [2004a. On some properties of piecewise conformal mappings. Eesti NSV Teaduste Akademmia Toimetised Füüsika-Matemaakika 53, 92-98; 2004b. Transformation of coordinates using piecewise conformal mapping. Journal of Geodesy 78 (1-2), 40] is presented; the improvement comes from using a genetic algorithm to partition the complex plane into convex polygons, whereas the original one did so manually. As a case study, the method is applied to the transformation of the Spanish datum ED50 and ETRS89, and both its advantages and disadvantages are discussed herein.
NASA Astrophysics Data System (ADS)
Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.
2017-04-01
The concept of the soil line can be used to describe the temporal distribution of spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create the TSL for 8000 regular lattice points over the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014 after an appropriate transformation. As Landsat images are matrices of spectral brightness values, this transformation is a normalization of the matrices. There are several methods of normalization that shift, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of normalization mathematically. This approach allowed us to rank normalization methods according to their quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).
Numerical solution of the unsteady Navier-Stokes equation
NASA Technical Reports Server (NTRS)
Osher, Stanley J.; Engquist, Bjoern
1985-01-01
The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws are discussed. These schemes share many desirable properties with total variation diminishing schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. In this paper a uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
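The piecewise linear reconstruction step mentioned above can be sketched with a minmod-limited slope per cell, as in MUSCL-type schemes. This is an illustrative sketch of the limiting idea, not the authors' uniformly second-order construction:

```python
def minmod(a, b):
    """Zero at extrema (sign change); otherwise the smaller-magnitude slope."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct(cell_avgs, dx):
    """Limited piecewise linear reconstruction from cell averages.

    Returns, for each interior cell, its (left-face, right-face) values.
    The limiter flattens the slope at local extrema, so the reconstruction
    introduces no new extrema (the nonoscillatory property).
    """
    out = []
    for i in range(1, len(cell_avgs) - 1):
        slope = minmod(cell_avgs[i] - cell_avgs[i - 1],
                       cell_avgs[i + 1] - cell_avgs[i]) / dx
        out.append((cell_avgs[i] - 0.5 * dx * slope,
                    cell_avgs[i] + 0.5 * dx * slope))
    return out
```

Minmod limiting degrades to first order at extrema, which is exactly the shortcoming the uniformly second-order scheme of the abstract is designed to remove.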
LETTER TO THE EDITOR: Fractal diffusion coefficient from dynamical zeta functions
NASA Astrophysics Data System (ADS)
Cristadoro, Giampaolo
2006-03-01
Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero.
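While the paper computes the diffusion coefficient exactly from the smallest zero of the zeta function, the same quantity can be estimated by brute force for a lifted piecewise linear map. A Monte Carlo sketch for a slope-3 map of this family; the specific map, ensemble sizes, and seed are our own choices:

```python
import math
import random

def step(x):
    """One iterate of a lifted piecewise linear map of slope 3.

    On the unit cell: f -> 3f on [0, 1/2), f -> 3f - 2 on [1/2, 1),
    extended to the real line by M(x + 1) = M(x) + 1.
    """
    n = math.floor(x)
    f = x - n
    return n + (3 * f if f < 0.5 else 3 * f - 2)

def diffusion_estimate(ntraj=4000, nsteps=200, seed=1):
    """Monte Carlo estimate of D = <(x_t - x_0)^2> / (2 t)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(ntraj):
        x0 = rng.random()
        x = x0
        for _ in range(nsteps):
            x = step(x)
        msd += (x - x0) ** 2
    return msd / ntraj / (2 * nsteps)
```

Such a sampled estimate is smooth in the slope parameter, so it cannot resolve the fractal parameter dependence that the exact zeta-function computation reveals; it only gives the value of D at one parameter.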
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four-dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four-dimensional probability distribution function was validated by comparing it to the two-dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four-dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling and the mapping and returns the results in JavaScript Object Notation (JSON) format.
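The per-triangle building block of a piecewise affine transform is an affine map fixed by three point correspondences: the outlines are triangulated, and each triangle carries its own map. A self-contained sketch of that building block via Cramer's rule; illustrative only, not the web-service implementation:

```python
def affine_from_triangle(src, dst):
    """Affine map sending three src points to three dst points.

    Solves the 3x3 system [x, y, 1] . (a, b, c) = x' (and likewise for y')
    by Cramer's rule; returns the two coefficient rows.
    """
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    def solve(v1, v2, v3):
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c
    row_x = solve(dst[0][0], dst[1][0], dst[2][0])
    row_y = solve(dst[0][1], dst[1][1], dst[2][1])
    return row_x, row_y

def apply_affine(A, p):
    """Apply the 2x3 affine map A to point p."""
    (a, b, c), (d, e, f) = A
    return a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f
```

A full piecewise affine warp then amounts to locating which triangle a point falls in and applying that triangle's map; libraries such as scikit-image package this as a single transform object.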
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirman, Christopher R., E-mail: ckirman@summittoxi
To extend previous models of hexavalent chromium [Cr(VI)] reduction by gastric fluid (GF), ex vivo experiments were conducted to address data gaps and limitations identified with respect to (1) GF dilution in the model; (2) reduction of Cr(VI) in fed human GF samples; (3) the number of Cr(VI) reduction pools present in human GF under fed, fasted, and proton pump inhibitor (PPI)-use conditions; and (4) an appropriate form for the pH-dependence of Cr(VI) reduction rate constants. Rates and capacities of Cr(VI) reduction were characterized in gastric contents from fed and fasted volunteers, and from fasted pre-operative patients treated with PPIs. Reduction capacities were first estimated over a 4-h reduction period. Once reduction capacity was established, a dual-spike approach was used in speciated isotope dilution mass spectrometry analyses to characterize the concentration-dependence of the 2nd order reduction rate constants. These data, when combined with previously collected data, were well described by a three-pool model (pool 1 = fast reaction with low capacity; pool 2 = slow reaction with higher capacity; pool 3 = very slow reaction with higher capacity) using pH-dependent rate constants characterized by a piecewise, log-linear relationship. These data indicate that human gastric samples, like those collected from rats and mice, contain multiple pools of reducing agents, and low concentrations of Cr(VI) (< 0.7 mg/L) are reduced more rapidly than high concentrations. The data and revised modeling results herein provide improved characterization of Cr(VI) gastric reduction kinetics, critical for Cr(VI) pharmacokinetic modeling and human health risk assessment.
Highlights: • SIDMS allows for measurement of the Cr(VI) reduction rate in gastric fluid ex vivo. • Human gastric fluid has three reducing pools. • Cr(VI) in drinking water at < 0.7 mg/L is rapidly reduced in human gastric fluid. • Reduction rate is concentration- and pH-dependent. • A refined PK model is used to characterize inter-individual variability in Cr(VI) gastric reduction capacity.
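The multi-pool, second-order reduction kinetics described above can be sketched as a small ODE system integrated by explicit Euler. All rate constants, capacities, and units below are hypothetical placeholders, not the fitted values from the study:

```python
def reduce_cr(cr0, pools, t_end, dt=0.001):
    """Explicit-Euler integration of multi-pool second-order reduction.

    pools: list of (rate_constant, capacity) pairs (hypothetical units).
    d[Cr]/dt = -[Cr] * sum_i k_i * R_i,   dR_i/dt = -k_i * [Cr] * R_i
    so each unit of Cr(VI) reduced consumes one unit of pool capacity.
    """
    cr = cr0
    ks = [k for k, _ in pools]
    caps = [c for _, c in pools]
    t = 0.0
    while t < t_end:
        flux = [k * cr * r for k, r in zip(ks, caps)]
        cr -= dt * sum(flux)
        caps = [r - dt * f for r, f in zip(caps, flux)]
        t += dt
    return cr, caps
```

With three pools of decreasing rate constant and increasing capacity (fast/low, slow/higher, very slow/higher), the same loop reproduces the qualitative behaviour reported: low Cr(VI) concentrations are consumed almost entirely by the fast pool.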
NASA Astrophysics Data System (ADS)
Kompany-Zareh, Mohsen; Khoshkam, Maryam
2013-02-01
This paper describes estimation of the reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA did not absorb in the visible region of interest and thus the closure rank-deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. In that sense, three types of model-based procedures were applied to estimate the rate constants of the kinetic system, according to the Levenberg-Marquardt (NGL/M) algorithm. Original data-based, score-based, and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when applying appropriate constraints and adjustable initial concentrations of the reagents.
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of their complexity they have never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, such that not only can problems of layered media and soil-structure interaction be analyzed, but large problems can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program.
Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
Separation of Gadolinium (Gd) using Synergic Solvent Mixed Topo-D2EHPA with Extraction Method.
NASA Astrophysics Data System (ADS)
Effendy, N.; Basuki, K. T.; Biyantoro, D.; Perwira, N. K.
2018-04-01
The main problem in obtaining Gd of high purity is the similarity of its chemical and physical properties to those of other rare earth elements (REE) such as Y and Dy, so separation by an extraction process is necessary. The purpose of this research is to determine the best solvent type, amount of solvent, and feed-to-solvent ratio in the Gd extraction process; to determine the rate order and the rate constant of the decrease in Gd concentration from experimental aqueous-phase concentration data as a function of time; and to determine the effect of temperature on the rate constant. Based on the calculation results, the best feed-to-solvent ratio for separating the rare earth element Gd in the extraction process is 1 : 4, with 15% TOPO and 10% D2EHPA. The separation of Gd by extraction with the 2 : 1 TOPO-D2EHPA solvent mixture is better than with single-solvent D2EHPA or TOPO because of the synergistic effect. The separation process of Gd follows first-order kinetics, and the Arrhenius equation for Gd becomes k = 1.46 × 10⁻⁷ exp(−6.96 kcal/mol / RT).
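The first-order rate analysis and Arrhenius temperature dependence reported above can be sketched directly. The A and Ea values used in the usage note are the paper's reported fit; the helper names and the two-point slope estimate are our own simplifications:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant from the Arrhenius law k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

def first_order_k(times, concs):
    """First-order rate constant from ln C vs t (two-point slope).

    For C(t) = C0 * exp(-k*t), the slope of ln C against t is -k.
    """
    return -math.log(concs[-1] / concs[0]) / (times[-1] - times[0])
```

For example, arrhenius(1.46e-7, 6.96, T) evaluates the paper's fitted k at temperature T, and first_order_k applied to measured aqueous-phase Gd concentrations recovers k at a fixed temperature.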
Doubly stochastic radial basis function methods
NASA Astrophysics Data System (ADS)
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant-shape-parameter formulation (in terms of accuracy at comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
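The LOOCV criterion that drives the shape-parameter choice can be sketched for a 1D Gaussian RBF interpolant. This is a naive refit-per-left-out-point illustration (practical implementations use Rippa-type shortcuts to avoid the repeated solves); all names are our own:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def loocv_error(xs, ys, eps):
    """Sum of squared leave-one-out errors for a Gaussian RBF interpolant
    with shape parameter eps (kernel exp(-(eps*r)^2))."""
    err = 0.0
    for i in range(len(xs)):
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [y for j, y in enumerate(ys) if j != i]
        A = [[math.exp(-(eps * (a - b)) ** 2) for b in xt] for a in xt]
        c = solve(A, yt)
        pred = sum(cj * math.exp(-(eps * (xs[i] - bj)) ** 2)
                   for cj, bj in zip(c, xt))
        err += (pred - ys[i]) ** 2
    return err
```

Minimizing this error over eps gives the optimal-LOOCV baseline; the DSRBF idea replaces the single optimal eps with draws from a distribution shaped by the same criterion.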
Harthcock, Colin; Jahanbekam, Abdolreza; Eskelsen, Jeremy R; Lee, David Y
2016-11-01
We describe an example of a piecewise gas chamber that can be customized to incorporate a low flux of gas-phase radicals with an existing surface analysis chamber for in situ and stepwise gas-surface interaction experiments without any constraint in orientation. The piecewise nature of this gas chamber provides complete angular freedom and easy alignment and does not require any modification of the existing surface analysis chamber. In addition, the entire gas-surface system is readily differentially pumped, with the surface chamber kept under ultra-high vacuum during the gas-surface measurements. This new design also allows not only straightforward reconstruction to accommodate the orientation of different surface chambers but also the addition of other desired features, such as an additional pump, to the current configuration. Stepwise interaction between atomic oxygen and a highly ordered pyrolytic graphite surface was chosen to test the effectiveness of this design, and the site-dependent O-atom chemisorption and clustering on the graphite surface were resolved at the nm scale by a scanning tunneling microscope. X-ray photoelectron spectroscopy was used to further confirm the identity of the chemisorbed species on the graphite surface as oxygen.
NASA Astrophysics Data System (ADS)
Bajaj, Nikhil; Chiu, George T.-C.; Rhoads, Jeffrey F.
2018-07-01
Vibration-based sensing modalities traditionally have relied upon monitoring small shifts in natural frequency in order to detect structural changes (such as those in mass or stiffness). In contrast, bifurcation-based sensing schemes rely on the detection of a qualitative change in the behavior of a system as a parameter is varied. This can produce easy-to-detect changes in response amplitude with high sensitivity to structural change, but requires resonant devices with specific dynamic behavior which is not always easily reproduced. Desirable behavior for such devices can be produced reliably via nonlinear feedback circuitry, but past efforts have been largely limited to sub-MHz operation, partially due to the time delay limitations present in certain nonlinear feedback circuits, such as multipliers. This work demonstrates the design and implementation of a piecewise-linear resonator realized via diode- and integrated-circuit-based feedback electronics and a quartz crystal resonator. The proposed system is fabricated and characterized, and the creation and selective placement of the bifurcation points of the overall electromechanical system is demonstrated by tuning the circuit gains. The demonstrated circuit operates at 16 MHz. Preliminary modeling and analysis are presented that qualitatively agree with the experimentally observed behavior.
The piecewise parabolic method for Riemann problems in nonlinear elasticity.
Zhang, Wei; Wang, Tao; Bai, Jing-Song; Li, Ping; Wan, Zhen-Hua; Sun, De-Jun
2017-10-18
We present the application of Harten-Lax-van Leer (HLL)-type solvers to Riemann problems in nonlinear elasticity under high-load conditions. In particular, the HLLD ("D" denotes Discontinuities) Riemann solver is shown to have better robustness and efficiency for resolving complex nonlinear wave structures compared with the HLL and HLLC ("C" denotes Contact) solvers, especially in the shock-tube problem with more than five waves. Also, the Godunov finite volume scheme is extended to higher order of accuracy by means of the piecewise parabolic method (PPM), which can be used with HLL-type solvers and employed to construct the fluxes. Moreover, in the case of multiple material components, a level set algorithm is applied to track the interface between different materials, while the interaction of interfaces is realized through the HLLD Riemann solver combined with a modified ghost method. As seen from the results of both the solid/solid "stick" problem, with the same material on the two sides of the contact interface, and the solid/solid "slip" problem, with different materials on the two sides, this scheme composed of the HLLD solver, PPM, and level set algorithm can capture the material interface effectively and significantly suppress spurious oscillations.
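The HLL flux at the heart of this solver family has a compact closed form. A scalar sketch (the paper works with systems of nonlinear elasticity; here the wave-speed estimates are simply supplied by the caller):

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL numerical flux for a scalar conservation law u_t + f(u)_x = 0.

    sL, sR are lower/upper wave-speed estimates at the interface.
    If all waves move one way, the flux is pure upwind; otherwise it is
    the single-intermediate-state average.
    """
    if sL >= 0:
        return f(uL)
    if sR <= 0:
        return f(uR)
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)
```

HLLC and HLLD refine the single intermediate state of HLL into several, restoring contact and other discontinuities that HLL smears; that refinement is what the abstract credits for the improved resolution.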
Filter for third order phase locked loops
NASA Technical Reports Server (NTRS)
Crow, R. B.; Tausworthe, R. C. (Inventor)
1973-01-01
Filters for third-order phase-locked loops are used in receivers to acquire and track carrier signals, particularly signals subject to high doppler-rate changes in frequency. A loop filter with an open-loop transfer function and a set of loop constants setting the damping factor equal to unity is provided.
BLUES function method in computational physics
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
Energy scale of Lorentz violation in Rainbow Gravity
NASA Astrophysics Data System (ADS)
Nilsson, Nils A.; Dąbrowski, Mariusz P.
2017-12-01
We modify the standard relativistic dispersion relation in a way which breaks Lorentz symmetry-the effect is predicted in a high-energy regime of some modern theories of quantum gravity. We show that it is possible to realise this scenario within the framework of Rainbow Gravity which introduces two new energy-dependent functions f1(E) and f2(E) into the dispersion relation. Additionally, we assume that the gravitational constant G and the cosmological constant Λ also depend on energy E and introduce the scaling function h(E) in order to express this dependence. For cosmological applications we specify the functions f1 and f2 in order to fit massless particles which allows us to derive modified cosmological equations. Finally, by using Hubble+SNIa+BAO(BOSS+Lyman α)+CMB data, we constrain the energy scale ELV to be at least of the order of 1016 GeV at 1 σ which is the GUT scale or even higher 1017 GeV at 3 σ. Our claim is that this energy can be interpreted as the decoupling scale of massless particles from spacetime Lorentz violating effects.
Dynamics and stability of a 2D ideal vortex under external strain
NASA Astrophysics Data System (ADS)
Hurst, N. C.; Danielson, J. R.; Dubin, D. H. E.; Surko, C. M.
2017-11-01
The behavior of an initially axisymmetric 2D ideal vortex under an externally imposed strain flow is studied experimentally. The experiments are carried out using electron plasmas confined in a Penning-Malmberg trap; here, the dynamics of the plasma density transverse to the field are directly analogous to the dynamics of vorticity in a 2D ideal fluid. An external strain flow is applied using boundary conditions in a way that is consistent with 2D fluid dynamics. Data are compared to predictions from a theory assuming a piecewise constant elliptical vorticity distribution. Excellent agreement is found for quasi-flat profiles, whereas the dynamics of smooth profiles feature modified stability limits and inviscid damping of periodic elliptical distortions. This work supported by U.S. DOE Grants DE-SC0002451 and DE-SC0016532, and NSF Grant PHY-1414570.
Gradient Optimization for Analytic conTrols - GOAT
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
Quantum optimal control becomes a necessary step in a number of studies in the quantum realm. Recent experimental advances showed that superconducting qubits can be controlled with an impressive accuracy. However, most of the standard optimal control algorithms are not designed to manage such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: the Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise constant approximation of the control pulse used by standard algorithms. This allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation or the natural route to optimize the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
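The core GOAT idea, propagating the derivative of the state alongside the state itself so that no backpropagation is needed, can be sketched for a single qubit with one analytic control parameter. The Hamiltonian, target state, and simple Euler integrator below are illustrative choices, not the algorithm as published:

```python
def propagate_with_gradient(theta, T=1.0, steps=2000):
    """Jointly propagate a qubit state psi and its parameter derivative
    dpsi/dtheta under the toy Hamiltonian H(theta) = theta * sigma_x / 2,
    following the GOAT idea of forward-propagating the gradient."""
    dt = T / steps
    psi = [1.0 + 0j, 0.0 + 0j]    # start in |0>
    dpsi = [0.0 + 0j, 0.0 + 0j]   # d psi / d theta, initially zero

    def h_apply(a, state):        # apply (a * sigma_x / 2) to a 2-vector
        return [0.5 * a * state[1], 0.5 * a * state[0]]

    for _ in range(steps):
        # Euler step for d psi/dt = -i H psi and
        # d(dpsi)/dt = -i (dH/dtheta psi + H dpsi), with dH/dtheta = sigma_x/2
        hp = h_apply(theta, psi)
        hdp = h_apply(theta, dpsi)
        dh_p = h_apply(1.0, psi)
        psi = [psi[k] - 1j * dt * hp[k] for k in range(2)]
        dpsi = [dpsi[k] - 1j * dt * (dh_p[k] + hdp[k]) for k in range(2)]
    return psi, dpsi

def fidelity_and_gradient(theta, target=(0.0 + 0j, 1.0 + 0j)):
    """Fidelity |<target|psi(T)>|^2 and its exact derivative w.r.t. theta."""
    psi, dpsi = propagate_with_gradient(theta)
    o = sum(target[k].conjugate() * psi[k] for k in range(2))
    do = sum(target[k].conjugate() * dpsi[k] for k in range(2))
    return abs(o) ** 2, 2.0 * (o.conjugate() * do).real
```

Because the derivative equations are obtained by differentiating the same discretized propagation, the returned gradient is exactly the gradient of the discrete fidelity, which is what a high-accuracy quasi-Newton loop needs.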
Evaluation of trends in wheat yield models
NASA Technical Reports Server (NTRS)
Ferguson, M. C.
1982-01-01
Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R(2) is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
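The "dependent" model above, a trend line that is piecewise linear but continuous at the year of the slope change, can be fitted by ordinary least squares with a hinge basis. A minimal sketch (plain least squares with a known knot year, not the latent-root/ridge machinery of the study):

```python
def fit_piecewise_trend(years, y, knot):
    """Least-squares fit of a continuous two-piece linear trend
    y ~ a + b*t + c*max(0, t - knot), where c is the slope change at the
    knot. Solves the 3x3 normal equations by Gaussian elimination."""
    X = [[1.0, t, max(0.0, t - knot)] for t in years]
    n = 3
    A = [[sum(row[r] * row[c] for row in X) for c in range(n)] for r in range(n)]
    rhs = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c] for c in range(r + 1, n))) / A[r][r]
    return beta  # (intercept, pre-knot slope, slope change after knot)
```

The "independent" (discontinuous) variant would simply add a fourth column, a step indicator for t > knot.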
The observational constraint on constant-roll inflation
NASA Astrophysics Data System (ADS)
Gao, Qing
2018-07-01
We discuss the constant-roll inflation with constant ɛ2 and constant η̄. By using the method of Bessel function approximation, the analytical expressions for the scalar and tensor power spectra, the scalar and tensor spectral tilts, and the tensor-to-scalar ratio are derived up to the first order of ɛ1. The model with constant ɛ2 is ruled out by the observations at the 3σ confidence level, and the model with constant η̄ is consistent with the observations at the 1σ confidence level. The potential for the model with constant η̄ is also obtained from the Hamilton-Jacobi equation. Although the observations constrain the constant-roll inflation to be the slow-roll inflation, the n_s-r results from the constant-roll inflation are not the same as those from the slow-roll inflation even when η̄ ~ 0.01.
Long Periodic Terms in the Solar System
NASA Technical Reports Server (NTRS)
Bretagnon, P.
1982-01-01
The long period variations of the first eight planets in the solar system are studied. First, the Lagrangian solution is calculated and then the long period terms with fourth order eccentricities and inclinations are introduced into the perturbation function. A second approximation was made taking into account the short period terms' contribution, namely the perturbations of first order with respect to the masses. Special attention was paid to the determination of the integration constants. The relative importance of the different contributions is shown. It is useless, for example, to introduce the long period terms of fifth order if no account has been taken of the short period terms. Meanwhile, the terms that have been neglected would not introduce large changes in the integration constants. Even so, the calculation should be repeated with higher order short period terms and fifth order long periods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.
In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe C_D, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.
2018-04-26
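The traditional PPF baseline criticized above amounts to fitting low-order polynomials to short windows of the measured trajectory and differentiating them to get acceleration. A minimal sketch (window half-width and quadratic order are illustrative choices; the drag-coefficient step is omitted):

```python
def quadratic_window_accel(times, xs, i, half=2):
    """Estimate acceleration at sample i by least-squares fitting
    x ~ c0 + c1*t + c2*t^2 over a window around i; a = 2*c2.
    This is the essence of piecewise polynomial fitting (PPF)."""
    lo, hi = max(0, i - half), min(len(times), i + half + 1)
    t0 = times[i]
    ts = [t - t0 for t in times[lo:hi]]   # center times for conditioning
    ys = xs[lo:hi]
    # normal equations for [c0, c1, c2]
    S = [sum(t ** k for t in ts) for k in range(5)]
    b = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Cramer's rule for c2: replace the third column with b
    Ac2 = [row[:2] + [b[r]] for r, row in enumerate(A)]
    return 2.0 * det3(Ac2) / det3(A)
```

The abstract's point is precisely that differentiating such local fits amplifies measurement noise, which motivates integrating the dynamics equation instead (PIDEF).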
Resonant activation in piecewise linear asymmetric potentials.
Fiasconaro, Alessandro; Spagnolo, Bernardo
2011-04-01
This work analyzes numerically the role played by the asymmetry of a piecewise linear potential, in the presence of both a Gaussian white noise and a dichotomous noise, on the resonant activation phenomenon. The features of the asymmetry of the potential barrier arise by investigating the stochastic transitions far behind the potential maximum, from the initial well to the bottom of the adjacent potential well. Because of the asymmetry of the potential profile together with the random external force uniform in space, we find, for the different asymmetries: (1) an inversion of the curves of the mean first passage time in the resonant region of the correlation time τ of the dichotomous noise, for low thermal noise intensities; (2) a maximum of the mean velocity of the Brownian particle as a function of τ; and (3) an inversion of the curves of the mean velocity and a very weak current reversal in the miniratchet system obtained with the asymmetrical potential profiles investigated. An inversion of the mean first passage time curves is also observed by varying the amplitude of the dichotomous noise, behavior confirmed by recent experiments. ©2011 American Physical Society
A study of different modeling choices for simulating platelets within the immersed boundary method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations – radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations – for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost and provide an engineering trade-off strategy for when and why one might select to employ these different representations. PMID:23585704
Affine connection form of Regge calculus
NASA Astrophysics Data System (ADS)
Khatsymovsky, V. M.
2016-12-01
The Regge action is represented analogously to how the Palatini action, a functional of the metric and of a general connection as independent variables, represents the Einstein-Hilbert action of general relativity (GR). The piecewise flat (or simplicial) spacetime of Regge calculus is equipped with some world coordinates and a piecewise affine metric which is completely defined by the set of edge lengths and the world coordinates of the vertices. The conjugate variables are the general nondegenerate matrices on the three-simplices which play the role of a general discrete connection. Our previous result on a representation of the Regge calculus action in terms of the local Euclidean (Minkowski) frame vectors and orthogonal connection matrices as independent variables is somewhat modified for the considered case of the general linear group GL(4, R) of the connection matrices. As a result, we have an action invariant w.r.t. arbitrary changes of coordinates of the vertices (and related GL(4, R) transformations in the four-simplices). Excluding the GL(4, R) connection from this action via the equations of motion, we recover exactly the Regge action for the considered spacetime.
Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Steinwolf, Alexander
2005-01-01
The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well both for the smooth and stepped conditions. The piecewise-Gaussian approximation can be additionally regarded for convenience in usage after the model is constructed.
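The skewness and kurtosis used above to quantify non-Gaussian behavior can be computed directly from pressure samples. A minimal sketch (biased moment estimators; non-excess kurtosis, so a Gaussian gives skewness 0 and kurtosis 3):

```python
import math

def skewness_kurtosis(samples):
    """Sample skewness m3/m2^(3/2) and kurtosis m4/m2^2 from central
    moments; departures from (0, 3) flag non-Gaussian pressure PDFs."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    sd = math.sqrt(m2)
    return m3 / sd ** 3, m4 / m2 ** 2
```

Asymmetric PDFs (as found upstream of the steps) show up as nonzero skewness; the wider, longer tails show up as kurtosis above 3.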
NASA Astrophysics Data System (ADS)
Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.
2018-01-01
In this paper the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered thanks to the k-ε model. Regarding the transport-diffusion stage, new schemes of high order of accuracy are developed. The CVC Kolgan-type scheme and the ADER methodology are extended to 3D. The latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method is carried out for the advection-diffusion-reaction equation. Within the projection stage the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
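The starting point of such reconstructions, an interpolant through values at Gauss-Lobatto collocation points, can be evaluated stably with the barycentric formula. A sketch for the Chebyshev case (this is the plain spectral interpolant, not the exponentially convergent reprojection procedure of the paper):

```python
import math

def chebyshev_lobatto_interpolant(values):
    """Return a callable evaluating the polynomial interpolant through
    values given at the Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/N),
    using the barycentric formula with the known weights
    w_j = (-1)^j (halved at the endpoints)."""
    N = len(values) - 1
    nodes = [math.cos(j * math.pi / N) for j in range(N + 1)]
    w = [((-1) ** j) * (0.5 if j in (0, N) else 1.0) for j in range(N + 1)]

    def p(x):
        num = den = 0.0
        for xj, wj, fj in zip(nodes, w, values):
            if x == xj:          # exactly at a node: return the datum
                return fj
            t = wj / (x - xj)
            num += t * fj
            den += t
        return num / den
    return p
```

For an analytic function the error of this interpolant decays exponentially in N, which is the accuracy the reconstruction above recovers even in the presence of discontinuities elsewhere.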
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baikov, P. A.; Chetyrkin, K. G.; Kuehn, J. H.
2010-04-02
We compute, for the first time, the order α_s^4 contributions to the Bjorken sum rule for polarized electron-nucleon scattering and to the (nonsinglet) Adler function for the case of a generic color gauge group. We confirm at the same order a (generalized) Crewther relation which provides a strong test of the correctness of our previously obtained results: the QCD Adler function and the five-loop β function in quenched QED. In particular, the appearance of an irrational contribution proportional to ζ_3 in the latter quantity is confirmed. We obtain the commensurate scale equation relating the effective strong coupling constants as inferred from the Bjorken sum rule and from the Adler function at order α_s^4.
Asymptotically Vanishing Cosmological Constant in the Multiverse
NASA Astrophysics Data System (ADS)
Kawai, Hikaru; Okada, Takashi
We study the problem of the cosmological constant in the context of the multiverse in Lorentzian space-time, and show that the cosmological constant will vanish in the future. This sort of argument was started by Sidney Coleman in 1989, who argued that Euclidean wormholes make the multiverse partition function a superposition of various values of the cosmological constant Λ, with a sharp peak at Λ = 0. However, the implication of the Euclidean analysis for our Lorentzian space-time is unclear. With this motivation, we analyze the quantum state of the multiverse in Lorentzian space-time by the WKB method, and calculate the density matrix of our universe by tracing out the other universes. Our result predicts a vanishing cosmological constant. While Coleman obtained the enhancement at Λ = 0 through the action itself, in our Lorentzian analysis a similar enhancement arises from the prefactor of e^{iS} in the universe wave function, which appears at the next-to-leading order in the WKB approximation.
An approach to the determination of aircraft handling qualities using pilot transfer functions
NASA Technical Reports Server (NTRS)
Adams, J. J.; Hatch, H. G., Jr.
1978-01-01
It was shown that a correlation exists between pilot-aircraft system closed-loop characteristics, determined by using analytical expressions for pilot response along with the analytical expression for the aircraft response, and pilot ratings obtained in many previous flight and simulation studies. Two different levels of preferred pilot response were used. These levels were: (1) a static gain and a second-order lag function with a lag time constant of 0.2 second; and (2) a static gain, a lead time constant of 1 second, and a 0.2-second lag time constant. If a system response with a pitch-angle time constant of 2.6 seconds and a stable oscillatory mode of motion with a period of 2.5 seconds could be achieved with the first-level pilot model, it was shown that the pilot rating will be satisfactory for that vehicle.
ERIC Educational Resources Information Center
Zvoch, Keith
2016-01-01
Piecewise growth models (PGMs) were used to estimate and model changes in the preliteracy skill development of kindergartners in a moderately sized school district in the Pacific Northwest. PGMs were applied to interrupted time-series (ITS) data that arose within the context of a response-to-intervention (RtI) instructional framework. During the…
Theoretical study of orbital ordering induced structural phase transition in iron pnictides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jena, Sushree Sangita, E-mail: sushree@iopb.res.in; Rout, G. C., E-mail: gcr@iopb.res.in; Panda, S. K., E-mail: skp@iopb.res.in
2016-05-06
We attribute the structural phase transition (SPT) in the parent compounds of the iron pnictides to orbital ordering. Due to the anisotropy of the d_xz and d_yz orbitals in the xy plane, orbital ordering makes the orthorhombic structure more favorable and thus induces the SPT. We consider a one-band model Hamiltonian consisting of first- and second-nearest-neighbor hopping of the electrons. We introduce Jahn-Teller (JT) distortion in the system arising due to the orbital ordering present in this system. We calculate the electron Green's function by using Zubarev's Green's function technique and hence obtain an expression for the temperature-dependent lattice strain, which is computed numerically and self-consistently. The temperature-dependent electron specific heat is calculated by minimizing the free energy of the system. The lattice strain is studied by varying the JT coupling and elastic constant of the system. The structural anomaly is studied through the electron occupation number and the specific heat by varying physical parameters such as the JT coupling, lattice constant, chemical potential, and hopping integrals of the system.
A hybrid approach to near-optimal launch vehicle guidance
NASA Technical Reports Server (NTRS)
Leung, Martin S. K.; Calise, Anthony J.
1992-01-01
This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability are carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.
Zero-lag synchronization in coupled time-delayed piecewise linear electronic circuits
NASA Astrophysics Data System (ADS)
Suresh, R.; Srinivasan, K.; Senthilkumar, D. V.; Raja Mohamed, I.; Murali, K.; Lakshmanan, M.; Kurths, J.
2013-07-01
We investigate and report an experimental confirmation of zero-lag synchronization (ZLS) in a system of three coupled time-delayed piecewise linear electronic circuits via dynamical relaying with different coupling configurations, namely mutual and subsystem coupling configurations. We have observed that when there is a feedback between the central unit (relay unit) and at least one of the outer units, ZLS occurs in the two outer units whereas the central and outer units exhibit inverse phase synchronization (IPS). We find that in the case of mutual coupling configuration ZLS occurs both in periodic and hyperchaotic regimes, while in the subsystem coupling configuration it occurs only in the hyperchaotic regime. Snapshots of the time evolution of outer circuits as observed from the oscilloscope confirm the occurrence of ZLS experimentally. The quality of ZLS is numerically verified by correlation coefficient and similarity function measures. Further, the transition to ZLS is verified from the changes in the largest Lyapunov exponents and the correlation coefficient as a function of the coupling strength. IPS is experimentally confirmed using time series plots and also can be visualized using the concept of localized sets which are also corroborated by numerical simulations. In addition, we have calculated the correlation of probability of recurrence to quantify the phase coherence. We have also analytically derived a sufficient condition for the stability of ZLS using the Krasovskii-Lyapunov theory.
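The similarity function used above to verify ZLS compares one signal against a time-shifted copy of the other and vanishes at zero lag for zero-lag-synchronized signals. A minimal sketch (means removed, normalization by the signal powers; a simplified version of the standard measure, not the circuit analysis itself):

```python
import math

def similarity(x1, x2, tau):
    """Similarity function S^2(tau) = <(x2(t+tau) - x1(t))^2> /
    sqrt(<x1^2><x2^2>) for mean-removed discrete signals.
    S^2(0) -> 0 indicates zero-lag synchronization (ZLS)."""
    m1 = sum(x1) / len(x1)
    m2 = sum(x2) / len(x2)
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    lo = max(0, -tau)                 # keep indices in range for any lag
    hi = min(len(a), len(b) - tau)
    n = hi - lo
    num = sum((b[t + tau] - a[t]) ** 2 for t in range(lo, hi)) / n
    den = math.sqrt((sum(v * v for v in a) / len(a)) *
                    (sum(v * v for v in b) / len(b)))
    return num / den
```

Scanning tau and locating the minimum of S^2 distinguishes true ZLS (minimum at tau = 0) from lag synchronization (minimum at a nonzero delay).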
Provasi, Patricio F; Sauer, Stephan P A
2006-07-01
The angular dependence of the vicinal fluorine-fluorine coupling constant, ³J_FF, for 1,2-difluoroethane has been investigated with several polarization propagator methods. ³J_FF and its four Ramsey contributions were calculated using the random phase approximation (RPA), its multiconfigurational generalization, and both second-order polarization propagator approximations (SOPPA and SOPPA(CCSD)), using locally dense basis sets. The geometries were optimized for each dihedral angle at the level of density functional theory using the B3LYP functional and fourth-order Møller-Plesset perturbation theory. The resulting coupling-constant curves were fitted to a cosine series with 8 coefficients. Our results are compared with those obtained previously and with values estimated from experiment. It is found that the inclusion of electron correlation in the calculation of ³J_FF reduces the absolute values. This is mainly due to changes in the FC contribution, which for dihedral angles around the trans conformation even changes its sign. This sign change is responsible for the breakdown of the Karplus-like curve.
Scalar self-force for highly eccentric equatorial orbits in Kerr spacetime
NASA Astrophysics Data System (ADS)
Thornburg, Jonathan; Wardell, Barry
2017-04-01
If a small "particle" of mass μM (with μ ≪ 1) orbits a black hole of mass M, the leading-order radiation-reaction effect is an O(μ²) "self-force" acting on the particle, with a corresponding O(μ) "self-acceleration" of the particle away from a geodesic. Such "extreme-mass-ratio inspiral" systems are likely to be important gravitational-wave sources for future space-based gravitational-wave detectors. Here we consider the "toy model" problem of computing the self-force for a scalar-field particle on a bound eccentric orbit in Kerr spacetime. We use the Barack-Golbourn-Vega-Detweiler effective-source regularization with a 4th-order puncture field, followed by an e^{imφ} ("m-mode") Fourier decomposition and a separate time-domain numerical evolution in 2+1 dimensions for each m. We introduce a finite worldtube that surrounds the particle worldline and define our evolution equations in a piecewise manner so that the effective source is only used within the worldtube. Viewed as a spatial region, the worldtube moves to follow the particle's orbital motion. We use slices of constant Boyer-Lindquist time in the region of the particle's motion, deformed to be asymptotically hyperboloidal and compactified near the horizon and future null infinity. Our numerical evolution uses Berger-Oliger mesh refinement with 4th-order finite differencing in space and time. Our computational scheme allows computation for highly eccentric orbits and should be generalizable to orbital evolution in the future. Our present implementation is restricted to equatorial geodesic orbits, but this restriction is not fundamental. We present numerical results for a number of test cases with orbital eccentricities as high as 0.98. In some cases we find large oscillations ("wiggles") in the self-force on the outgoing leg of the orbit shortly after periastron passage; these appear to be caused by the passage of the orbit through the strong-field region close to the background Kerr black hole.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
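A representative single-tone frequency estimator of the interpolated-Fourier family compared above: locate the DFT magnitude peak and refine it by interpolating neighboring coefficients. The Hann window and log-parabolic refinement below are common illustrative choices, not the IFEIF algorithm itself:

```python
import cmath
import math

def estimate_tone_frequency(signal):
    """Estimate the frequency (cycles/sample) of a single real tone:
    Hann-window the signal, find the DFT magnitude peak, then refine
    the peak bin by parabolic interpolation of the log-magnitudes."""
    n = len(signal)
    w = [0.5 - 0.5 * math.cos(2 * math.pi * t / (n - 1)) for t in range(n)]
    x = [s * wt for s, wt in zip(signal, w)]
    # direct DFT over the positive-frequency half (O(n^2), fine for a sketch)
    spec = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    k = max(range(1, n // 2 - 1), key=lambda i: spec[i])
    la, lb, lc = math.log(spec[k - 1]), math.log(spec[k]), math.log(spec[k + 1])
    delta = 0.5 * (la - lc) / (la - 2.0 * lb + lc)  # sub-bin peak offset
    return (k + delta) / n
```

In the HIM-based phase estimation above, an estimator of this kind is applied per segment to read off one polynomial coefficient at a time, so its accuracy and cost directly drive the quality of the reconstructed phase.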
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jian Hua; Gooding, R.J.
1994-06-01
We propose an algorithm to solve a system of partial differential equations of the type u_t(x,t) = F(x, t, u, u_x, u_xx, u_xxx, u_xxxx) in 1+1 dimensions using the method of lines with piecewise ninth-order Hermite polynomials, where u and F are N-dimensional vectors. Nonlinear boundary conditions are easily incorporated with this method. We demonstrate the accuracy of this method through comparisons of numerically determined solutions to analytical ones. Then, we apply this algorithm to a complicated physical system involving nonlinear and nonlocal strain forces coupled to a thermal field. 4 refs., 5 figs., 1 tab.
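The method of lines semi-discretizes space and integrates the resulting ODE system in time. A deliberately simple sketch for the easiest member of the class, the heat equation u_t = u_xx (second-order central differences and forward Euler, far below the ninth-order Hermite elements of the paper, but the same semi-discretization idea):

```python
def method_of_lines_heat(u0, dt, dx, steps):
    """Method-of-lines solution of u_t = u_xx with fixed (Dirichlet)
    endpoint values: central differences in space turn the PDE into a
    system of ODEs, here advanced with explicit Euler steps."""
    u = list(u0)
    n = len(u)
    r = dt / dx ** 2
    assert r <= 0.5, "explicit Euler stability limit for the heat equation"
    for _ in range(steps):
        u = ([u[0]] +
             [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
              for i in range(1, n - 1)] +
             [u[-1]])
    return u
```

High-order spatial elements (like the piecewise Hermite polynomials of the paper) and implicit or higher-order time integrators slot into the same structure; only the right-hand-side evaluation changes.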
Low Dose Radiation Cancer Risks: Epidemiological and Toxicological Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
David G. Hoel, PhD
2012-04-19
The basic purpose of this one-year research grant was to extend the two-stage clonal expansion model (TSCE) of carcinogenesis to exposures other than the usual single acute exposure. The two-stage clonal expansion model of carcinogenesis incorporates the biological process of carcinogenesis, which involves two mutations and the clonal proliferation of the intermediate cells, in a stochastic, mathematical way. The current TSCE model serves a general purpose of acute exposure models but requires numerical computation of both the survival and hazard functions. The primary objective of this research project was to develop the analytical expressions for the survival function and the hazard function of the occurrence of the first cancer cell for acute, continuous, and multiple exposure cases within the framework of the piece-wise constant parameter two-stage clonal expansion model of carcinogenesis. For acute exposure and multiple exposures of acute series, either only the first mutation rate is allowed to vary with the dose, or all the parameters are allowed to be dose dependent; for multiple continuous exposures, all the parameters are allowed to vary with the dose. With these analytical functions, it becomes easy to evaluate the risks of cancer, and one can deal with the various exposure patterns in cancer risk assessment. A second objective was to apply the TSCE model with varying continuous exposures to the cancer studies of inhaled plutonium in beagle dogs. Using step functions to estimate the retention functions of the pulmonary exposure of plutonium, the multiple-exposure version of the TSCE model was to be used to estimate the beagle dog lung cancer risks. The mathematical equations of the multiple-exposure versions of the TSCE model were developed. A draft manuscript which is attached provides the results of this mathematical work.
The application work using the beagle dog data from plutonium exposure was not completed because the research project did not continue beyond its first year.
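The appeal of piecewise-constant parameters is that the integrated hazard, and hence the survival function S(t) = exp(-∫₀ᵗ h), has a closed form on each interval. A generic illustration for an arbitrary piecewise-constant hazard (not the TSCE model's own survival function, which solves the two-stage dynamics):

```python
import math

def survival(boundaries, hazards, t):
    """Survival S(t) = exp(-integral of h) for a piecewise-constant
    hazard: hazards[i] applies on [boundaries[i-1], boundaries[i]), and
    the last hazard extends beyond the final boundary.
    Requires len(hazards) == len(boundaries) + 1."""
    acc, prev = 0.0, 0.0
    for b, h in zip(boundaries, hazards):
        if t <= b:
            return math.exp(-(acc + h * (t - prev)))
        acc += h * (b - prev)   # full contribution of this segment
        prev = b
    return math.exp(-(acc + hazards[-1] * (t - prev)))
```

In the same spirit, piecewise-constant dose-dependent parameters let the TSCE survival and hazard be composed interval by interval from closed-form pieces, which is what makes the multiple-exposure cases tractable.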
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter as a time-step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms using not only the Euler method but also lower-order Runge-Kutta methods for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
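The discrete iteration in question belongs to the multiplicative ART family. A minimal sequential MART sketch for a consistent non-negative system (the paper's BI-MART applies the update blockwise, with the scaling parameter, lam below, acting as the time step of the geometric Euler discretization):

```python
def mart(A, b, iters=500, lam=0.5):
    """Sequential multiplicative ART for A x = b with A, b, x >= 0:
    for each row i, multiply each x_j by (b_i / (A x)_i)^(lam * a_ij).
    For a consistent system the iterates converge to a non-negative
    solution; multiplicative updates keep x positive automatically."""
    n = len(A[0])
    x = [1.0] * n                       # positive initial image
    for _ in range(iters):
        for ai, bi in zip(A, b):
            dot = sum(a * xj for a, xj in zip(ai, x))
            ratio = bi / dot
            x = [xj * ratio ** (lam * a) for xj, a in zip(x, ai)]
    return x
```

Shrinking lam makes the iteration track the continuous-time flow more closely, which is exactly the discretization viewpoint the paper develops.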
Liu, Yan; Ma, Jianhua; Zhang, Hao; Wang, Jing; Liang, Zhengrong
2014-01-01
Background: The negative effects of X-ray exposure, such as the induction of genetic and cancerous diseases, have attracted increasing attention. Objective: This paper aims to investigate a penalized re-weighted least-square (PRWLS) strategy for low-mAs X-ray computed tomography image reconstruction by incorporating an adaptive weighted total variation (AwTV) penalty term and a noise variance model of the projection data. Methods: An AwTV penalty is introduced in the objective function by considering both the piecewise constant property and the local nearby intensity similarity of the desired image. Furthermore, the weight of the data fidelity term in the objective function is determined by our recent study on modeling variance estimation of projection data in the presence of electronic background noise. Results: The presented AwTV-PRWLS algorithm can achieve the highest full-width-at-half-maximum (FWHM) measurement for data conditions of (1) full-view 10 mA acquisition and (2) sparse-view 80 mA acquisition. In comparison between the AwTV/TV-PRWLS strategies and the previously reported AwTV/TV-projection onto convex sets (AwTV/TV-POCS) approaches, the former gains in terms of FWHM for data condition (1), but not for data condition (2). Conclusions: In the case of full-view 10 mA projection data, the presented AwTV-PRWLS shows potential improvement. However, in the case of sparse-view 80 mA projection data, the AwTV/TV-POCS shows advantage over the PRWLS strategies. PMID:25080113
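The AwTV penalty weights each finite difference so that large jumps (edges) are penalized less than small fluctuations (noise). A 1D sketch of the penalty term alone, with the Gaussian weight form used in the AwTV literature; delta is the edge-scale parameter, and the data-fidelity term and 2D extension are left out:

```python
import math

def awtv_1d(u, delta):
    """Adaptive-weighted TV penalty in 1D:
    AwTV(u) = sum_i w_i * |u_i - u_{i-1}|, with adaptive weights
    w_i = exp(-((u_i - u_{i-1}) / delta)^2), so genuine edges are
    down-weighted while smooth regions are penalized as in plain TV."""
    total = 0.0
    for a, b in zip(u, u[1:]):
        d = b - a
        total += math.exp(-(d / delta) ** 2) * abs(d)
    return total
```

With a small delta a sharp step contributes almost nothing to the penalty (the edge is preserved), whereas with a large delta the penalty reduces to ordinary TV.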
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eising, R.; Gerhardt, B.
1987-06-01
First-order rate constants for the degradation (degradation constants) of catalase in the cotyledons of sunflower (Helianthus annuus L.) were determined by measuring the loss of catalase containing ¹⁴C-labeled heme. During greening of the cotyledons, a period when peroxisomes change from glyoxysomal to leaf-peroxisomal function, the degradation of glyoxysomal catalase is significantly slower than during all other stages of cotyledon development in light or darkness. The degradation constant during the transition stage of peroxisome function amounts to 0.205 day⁻¹, in contrast to the constants ranging from 0.304 day⁻¹ to 0.515 day⁻¹ during the other developmental stages. Density-labeling experiments comprising labeling of catalase with ²H₂O and its isopycnic centrifugation on CsCl gradients demonstrated that the determinations of the degradation constants were not substantially affected by reutilization of ¹⁴C-labeled compounds for catalase synthesis. The degradation constants for both glyoxysomal catalase and catalase synthesized during the transition of peroxisome function do not differ. This was shown by labeling the catalases with different isotopes and measuring the isotope ratio during the development of the cotyledons. The results are inconsistent with the concept that an accelerated and selective degradation of glyoxysomes underlies the change in peroxisome function. The data suggest that catalase degradation is at least partially due to an individual turnover of catalase and does not only result from a turnover of whole peroxisomes.
NASA Technical Reports Server (NTRS)
Cantrell, C. A.; Davidson, J. A.; Mcdaniel, A. H.; Shetter, R. E.; Calvert, J. G.
1988-01-01
Direct determinations of the equilibrium constant for the reaction N2O5 = NO2 + NO3 were carried out by measuring NO2, NO3, and N2O5 using long-path visible and infrared absorption spectroscopy as a function of temperature from 243 to 397 K. The first-order decay rate constant of N2O5 was experimentally measured as a function of temperature. These results are in turn used to derive a value for the rate coefficient for the NO-forming channel in the reaction of NO3 with NO2. The implications of the results for atmospheric chemistry, the thermodynamics of NO3, and for laboratory kinetics studies are discussed.
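The basic reduction from concentration measurements to an equilibrium constant, and a van 't Hoff fit over a temperature range, can be sketched as below; the enthalpy and entropy values in the example are made up solely to exercise the fit and are not the paper's results:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_constant(no2, no3, n2o5):
    """Keq for N2O5 = NO2 + NO3 from measured concentrations."""
    return no2 * no3 / n2o5

def vant_hoff_fit(T, K):
    """Fit ln K = -dH/(R*T) + dS/R over a temperature range and
    return (dH, dS); an illustrative reduction, not the paper's
    full analysis."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(K), 1)
    return -slope * R, intercept * R

# Synthetic K(T) over the measured 243-397 K range (placeholder dH, dS)
T = np.linspace(243.0, 397.0, 20)
K_true = np.exp(-85e3 / (R * T) + 140.0 / R)
dH, dS = vant_hoff_fit(T, K_true)
```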
Oxidation of octylphenol by ferrate(VI).
Anquandah, George A K; Sharma, Virender K
2009-01-01
The rates of oxidation of octylphenol (OP) by potassium ferrate(VI) (K2FeO4) in water were determined as a function of pH (8.0-10.9) at 25 degrees C. The rate law for the oxidation of OP by Fe(VI) was found to be first order in each reactant. The observed second-order rate constants, k(obs), for the oxidation of alkylphenols decreased with an increase in pH. The speciation of the Fe(VI) (HFeO4(-) and FeO4(2-)) and OP (OP-OH and OP-O(-)) species was used to determine the individual rate constants of the reactions. Comparison of the rate constants and half-lives of the oxidation of OP by Fe(VI) with those of nonylphenol (NP) and bisphenol-A (BPA) demonstrates that Fe(VI) efficiently oxidizes environmentally relevant alkylphenols in water.
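The speciation-weighted construction of k(obs) described above can be sketched as follows. The pKa values and species-specific rate constants below are placeholders (rough literature-style magnitudes, not the paper's fitted numbers); the point is only that a more reactive protonated HFeO4- species reproduces the observed decrease of k(obs) with pH:

```python
def fraction_protonated(pH, pKa):
    """Fraction of the acid form of a monoprotic species at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def k_obs(pH, k_species, pKa_fe=7.3, pKa_op=10.3):
    """Observed second-order rate constant as a speciation-weighted
    sum of species-specific constants k_species[i][j], with i over
    (HFeO4-, FeO4 2-) and j over (OP-OH, OP-O-). All parameter
    values here are illustrative assumptions."""
    f_fe = fraction_protonated(pH, pKa_fe)
    f_op = fraction_protonated(pH, pKa_op)
    a_fe = (f_fe, 1.0 - f_fe)
    a_op = (f_op, 1.0 - f_op)
    return sum(k_species[i][j] * a_fe[i] * a_op[j]
               for i in range(2) for j in range(2))

# Protonated ferrate assumed much more reactive (hypothetical values)
k_sp = [[730.0, 20.0], [5.0, 1.0]]
```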
NASA Astrophysics Data System (ADS)
Peng, Q.; Liang, Chao; Ji, Wei; de, Suvranu
2013-03-01
We investigated the mechanical properties of graphene and graphane using first-principles calculations based on density-functional theory. A conventional unit cell containing a hexagonal ring made of carbon atoms was chosen to capture the finite wave vector "soft modes", which affect the fourth and fifth elastic constants considerably. Graphane has about 2/3 the ultimate strength of graphene in all three tested deformation modes (armchair, zigzag, and biaxial). However, graphane has a larger ultimate strain in zigzag deformation, and a smaller one in armchair deformation. We obtained the second-, third-, fourth-, and fifth-order elastic constants for a rigorous continuum description of the elastic response. Graphane has a relatively low in-plane stiffness of 240 N/m, which is about 2/3 of that of graphene, and a very small Poisson ratio of 0.078, 44% of that of graphene. The pressure dependence of the second-order elastic constants was predicted from the third-order elastic constants. The Poisson's ratio monotonically decreases with increasing pressure. We acknowledge financial support from DTRA Grant # BRBAA08-C-2-0130, the U.S. NRC FDP # NRC-38-08-950, and U.S. DOE NEUP Grant #DE-NE0000325.
Traversable wormholes satisfying the weak energy condition in third-order Lovelock gravity
NASA Astrophysics Data System (ADS)
Zangeneh, Mahdi Kord; Lobo, Francisco S. N.; Dehghani, Mohammad Hossein
2015-12-01
In this paper, we consider third-order Lovelock gravity with a cosmological constant term in an n-dimensional spacetime M4 × K^(n-4), where K^(n-4) is a constant-curvature space. We decompose the equations of motion into four- and higher-dimensional ones and find wormhole solutions by considering a vacuum K^(n-4) space. Applying the latter constraint, we determine the second- and third-order Lovelock coefficients and the cosmological constant in terms of specific parameters of the model, such as the size of the extra dimensions. Using the obtained Lovelock coefficients and Λ, we obtain the four-dimensional matter distribution threading the wormhole. Furthermore, by considering the zero tidal force case and a specific equation of state, given by ρ = (γp - τ)/[ω(1 + γ)], we find the exact solution for the shape function, which represents both asymptotically flat and nonflat wormhole solutions. We show explicitly that these wormhole solutions, in addition to traversability, satisfy the energy conditions for suitable choices of parameters, and that the existence of a limited spherically symmetric traversable wormhole with normal matter in a four-dimensional spacetime implies a negative effective cosmological constant.
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
In-flight alignment using H∞ filter for strapdown INS on aircraft.
Pei, Fu-Jun; Liu, Xuan; Zhu, Li
2014-01-01
In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed by piece-wise constant system (PWCS) theory, and the observable degree is computed by singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. Then an H∞ filter is designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm can reach better accuracy under dynamic disturbance conditions.
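The PWCS observability analysis via SVD mentioned above reduces, for each constant segment, to building the observability matrix and examining its singular values. A toy sketch (a two-state system standing in for the full SINS error model, which is an assumption for illustration):

```python
import numpy as np

def observable_degree(A, C):
    """Rank and singular values of the observability matrix for one
    constant segment of a piece-wise constant system (PWCS); the
    singular values quantify the observable degree of each state
    direction. Sketch only, not the paper's alignment model."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    s = np.linalg.svd(O, compute_uv=False)
    rank = int(np.sum(s > 1e-10 * s[0]))
    return rank, s

# Toy discrete-time system: position measured, velocity inferred
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
rank, s = observable_degree(A, C)
```

A small singular value relative to the largest one flags a weakly observable state direction even when the rank test passes.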
Maxwell’s demon in the quantum-Zeno regime and beyond
NASA Astrophysics Data System (ADS)
Engelhardt, G.; Schaller, G.
2018-02-01
The long-standing paradigm of Maxwell’s demon remains a frequently investigated issue that still provides interesting insights into basic physical questions. Considering a single-electron transistor, where we implement a Maxwell demon by a piecewise-constant feedback protocol, we investigate the quantum implications of the Maxwell demon. To this end, we harness a dynamical coarse-graining method, which provides a convenient and accurate description of the system dynamics even for high measurement rates. In doing so, we are able to investigate the Maxwell demon in a quantum-Zeno regime leading to transport blockade. We argue that there is a measurement rate providing optimal performance. Moreover, we find that besides building up a chemical gradient, there can also be a regime where the feedback loop additionally extracts energy, which results from the energy non-conserving character of the projective measurement.
The crack problem in bonded nonhomogeneous materials
NASA Technical Reports Server (NTRS)
Erdogan, Fazil; Kaya, A. C.; Joseph, P. F.
1988-01-01
The plane elasticity problem for two bonded half planes containing a crack perpendicular to the interface was considered. The effect of very steep variations in the material properties near the diffusion plane on the singular behavior of the stresses and the stress intensity factors was studied. The two materials were thus assumed to have the shear moduli mu_0 and mu_0 exp(beta x), x = 0 being the diffusion plane. Of particular interest was the examination of the nature of the stress singularity near a crack tip terminating at the interface, where the shear modulus has a discontinuous derivative. The results show that, unlike the crack problem in piecewise homogeneous materials, for which the singularity is of the form r^(-alpha), 0 < alpha < 1, in this problem the stresses have a standard square-root singularity regardless of the location of the crack tip. The nonhomogeneity constant beta has, however, considerable influence on the stress intensity factors.
The crack problem in bonded nonhomogeneous materials
NASA Technical Reports Server (NTRS)
Erdogan, F.; Joseph, P. F.; Kaya, A. C.
1991-01-01
The plane elasticity problem for two bonded half planes containing a crack perpendicular to the interface was considered. The effect of very steep variations in the material properties near the diffusion plane on the singular behavior of the stresses and the stress intensity factors was studied. The two materials were thus assumed to have the shear moduli mu_0 and mu_0 exp(beta x), x = 0 being the diffusion plane. Of particular interest was the examination of the nature of the stress singularity near a crack tip terminating at the interface, where the shear modulus has a discontinuous derivative. The results show that, unlike the crack problem in piecewise homogeneous materials, for which the singularity is of the form r^(-alpha), 0 < alpha < 1, in this problem the stresses have a standard square-root singularity regardless of the location of the crack tip. The nonhomogeneity constant beta has, however, considerable influence on the stress intensity factors.
NASA Technical Reports Server (NTRS)
Mirels, Harold
1959-01-01
A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
Equilibrium Conformations of Concentric-tube Continuum Robots
Rucker, D. Caleb; Webster, Robert J.; Chirikjian, Gregory S.; Cowan, Noah J.
2013-01-01
Robots consisting of several concentric, preshaped, elastic tubes can work dexterously in narrow, constrained, and/or winding spaces, as are commonly found in minimally invasive surgery. Previous models of these “active cannulas” assume piecewise constant precurvature of component tubes and neglect torsion in curved sections of the device. In this paper we develop a new coordinate-free energy formulation that accounts for general preshaping of an arbitrary number of component tubes, and which explicitly includes both bending and torsion throughout the device. We show that previously reported models are special cases of our formulation, and then explore in detail the implications of torsional flexibility for the special case of two tubes. Experiments demonstrate that this framework is more descriptive of physical prototype behavior than previous models; it reduces model prediction error by 82% over the calibrated bending-only model, and 17% over the calibrated transmissional torsion model in a set of experiments. PMID:25125773
Adaptive control and noise suppression by a variable-gain gradient algorithm
NASA Technical Reports Server (NTRS)
Merhav, S. J.; Mehta, R. S.
1987-01-01
An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed-loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced, and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
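The variable-gain normalized LMS idea can be sketched as follows, for system identification rather than the full closed-loop controller. The gain schedule here (step size proportional to a running error-power estimate) is an assumed illustration of "variable gain", not the paper's exact algorithm:

```python
import numpy as np

def vg_nlms(x, d, taps=8, mu0=1.0, eps=1e-6):
    """Normalized LMS with a simple variable gain: the step size
    shrinks as the running error power decays, slowing adaptation
    once the filter has converged."""
    w = np.zeros(taps)
    errors = []
    p = 1.0  # running estimate of error power
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # newest sample first
        e = d[n] - w @ u
        p = 0.99 * p + 0.01 * e * e
        mu = mu0 * p / (p + 1.0)            # variable gain in (0, mu0)
        w += mu * e * u / (u @ u + eps)
        errors.append(e)
    return w, np.array(errors)

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])  # unknown "plant"
d = np.convolve(x, h)[:len(x)]
w, e = vg_nlms(x, d)
```

After convergence the error energy is far below its initial level, and the small residual gain keeps the filter from reacting to noise while still allowing slow re-adaptation after a parameter jump.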
Total Variation Denoising and Support Localization of the Gradient
NASA Astrophysics Data System (ADS)
Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.
2016-10-01
This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges but, at the same time, might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piece-wise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes, and in some cases an upper bound on the convergence rate is given.
Shear waves in inhomogeneous, compressible fluids in a gravity field.
Godin, Oleg A
2014-03-01
While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves also can be supported by moving fluids as well as quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.
Nolte, Guido
2003-11-21
The equation for the magnetic lead field for a given magnetoencephalography (MEG) channel is well known for arbitrary frequencies omega but is not directly applicable to MEG in the quasi-static approximation. In this paper we derive an equation for omega = 0 starting from the very definition of the lead field instead of using Helmholtz's reciprocity theorems. The results are (a) the transpose of the conductivity times the lead field is divergence-free, and (b) the lead field differs from the one in any other volume conductor by a gradient of a scalar function. Consequently, for a piecewise homogeneous and isotropic volume conductor, the lead field is always tangential at the outermost surface. Based on this theoretical result, we formulated a simple and fast method for the MEG forward calculation for one shell of arbitrary shape: we correct the corresponding lead field for a spherical volume conductor by a superposition of basis functions, gradients of harmonic functions constructed here from spherical harmonics, with coefficients fitted to the boundary conditions. The algorithm was tested for a prolate spheroid of realistic shape for which the analytical solution is known. For high order in the expansion, we found the solutions to be essentially exact and for reasonable accuracies much fewer multiplications are needed than in typical implementations of the boundary element methods. The generalization to more shells is straightforward.
A new approach to simulating collisionless dark matter fluids
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom; Kaehler, Ralf
2013-09-01
Recently, we have shown how current cosmological N-body codes already follow the fine grained phase-space information of the dark matter fluid. Using a tetrahedral tessellation of the three-dimensional manifold that describes perfectly cold fluids in six-dimensional phase space, the phase-space distribution function can be followed throughout the simulation. This allows one to project the distribution function into configuration space to obtain highly accurate densities, velocities and velocity dispersions. Here, we exploit this technique to show first steps on how to devise an improved particle-mesh technique. At its heart, the new method thus relies on a piecewise linear approximation of the phase-space distribution function rather than the usual particle discretization. We use pseudo-particles that approximate the masses of the tetrahedral cells up to quadrupolar order as the locations for cloud-in-cell (CIC) deposit instead of the particle locations themselves as in standard CIC deposit. We demonstrate that this modification already gives much improved stability and more accurate dynamics of the collisionless dark matter fluid at high force and low mass resolution. We demonstrate the validity and advantages of this method with various test problems as well as hot/warm dark matter simulations which have been known to exhibit artificial fragmentation. This completely unphysical behaviour is much reduced in the new approach. The current limitations of our approach are discussed in detail and future improvements are outlined.
GPS-PWV Estimation and Analysis for CGPS Sites Operating in Mexico
NASA Astrophysics Data System (ADS)
Gutierrez, O.; Vazquez, G. E.; Bennett, R. A.; Adams, D. K.
2014-12-01
Eighty permanent Global Positioning System (GPS) tracking stations that belong to several networks spanning Mexico, intended for diverse purposes and applications, were used to estimate precipitable water vapor (PWV) from measurement series covering the period 2000-2014. We extracted the GPS-PWV from the ionosphere-free double-difference carrier phase observations, processed using the GAMIT software. The GPS data were processed with a 30 s sampling rate, a 15-degree cutoff angle, and precise GPS orbits disseminated by the IGS. The time-varying part of the zenith wet delay was estimated using the Global Mapping Function (GMF), while the constant part was evaluated using the Niell tropospheric model. The data reduction to compute the zenith wet delay follows a piecewise linear strategy, and the delay is subsequently transformed to PWV estimates every 2 h. Although there exist previous isolated studies estimating PWV in Mexico, this study attempts a more complete and comprehensive analysis of PWV estimation throughout the Mexican territory. Our resulting GPS-based PWV values were compared to available PWV values for 30 stations that operate in Mexico and report PWV to Suominet. This comparison revealed differences of 1 to 2 mm between the GPS-PWV solution and the PWV reported by Suominet. Accurate values of GPS-PWV will help enhance Mexico's ability to investigate water vapor advection, convective and frontal rainfall, and long-term climate variability.
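The final step, converting the estimated zenith wet delay to PWV, uses a dimensionless factor that depends on the weighted mean temperature of the atmosphere. A minimal sketch with common Bevis-style literature constants (assumed here, not taken from this study):

```python
def pwv_from_zwd(zwd, Tm):
    """Convert zenith wet delay to precipitable water vapor (same
    length units), using the standard dimensionless factor Pi and
    common literature refractivity constants."""
    rho_w = 1000.0   # density of liquid water, kg m^-3
    Rv = 461.5       # specific gas constant of water vapor, J kg^-1 K^-1
    k2p = 0.221      # K Pa^-1
    k3 = 3.739e3     # K^2 Pa^-1
    Pi = 1.0e6 / (rho_w * Rv * (k3 / Tm + k2p))
    return Pi * zwd

# e.g. a 100 mm zenith wet delay at Tm = 273 K maps to about 15-16 mm PWV
pwv = pwv_from_zwd(100.0, 273.0)
```

Since Pi grows with Tm, the same delay corresponds to slightly more water vapor in a warmer atmosphere, which is why Tm must be modeled per site and season.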
NASA Astrophysics Data System (ADS)
Arce, J. C.; Perdomo-Ortiz, A.; Zambrano, M. L.; Mujica-Martínez, C.
2011-03-01
A conceptually appealing and computationally economical coarse-grained molecular-orbital (MO) theory for extended quasilinear molecular heterostructures is presented. The formalism, based on a straightforward adaptation (by explicitly including the vacuum) of the envelope-function approximation widely employed in solid-state physics, leads to a mapping of the three-dimensional single-particle eigenvalue equations into simple one-dimensional hole and electron Schrödinger-like equations with piecewise-constant effective potentials and masses. The eigenfunctions of these equations are envelope MO's in which the short-wavelength oscillations present in the full MO's, associated with the atomistic details of the molecular potential, are smoothed out automatically. The approach is illustrated by calculating the envelope MO's of high-lying occupied and low-lying virtual π states in prototypical nanometric heterostructures constituted by oligomers of polyacetylene and polydiacetylene. Comparison with atomistic electronic-structure calculations reveals that the envelope-MO energies agree very well with the energies of the π MO's and that the envelope MO's describe precisely the long-wavelength variations of the π MO's. This envelope MO theory, which is generalizable to extended systems of any dimensionality, provides a useful tool for the qualitative interpretation and quantitative prediction of the single-particle quantum states in mesoscopic molecular structures and for the design of nanometric molecular devices with tailored energy levels and wavefunctions.
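The resulting one-dimensional envelope equations can be solved by elementary finite differences. The sketch below assumes a constant effective mass for brevity (the paper uses piecewise-constant masses as well) and hard-wall boundaries, in atomic-style units:

```python
import numpy as np

def envelope_levels(V, dx, nlevels=3):
    """Lowest eigenvalues of the 1D Schroedinger-like envelope
    equation -(1/2) f'' + V f = E f on a uniform grid with hard-wall
    ends, for a piecewise-constant potential array V."""
    n = len(V)
    off = -0.5 / dx ** 2 * np.ones(n - 1)
    H = np.diag(V + 1.0 / dx ** 2) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nlevels]

# Sanity check: an empty well of width L = 1 reproduces the
# particle-in-a-box ladder E_k = (k * pi)^2 / 2.
n = 400
dx = 1.0 / (n + 1)
E = envelope_levels(np.zeros(n), dx)
```

Replacing the zero potential with a piecewise-constant profile (well/barrier segments for each oligomer block) yields the confined envelope levels directly from the same routine.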
NASA Astrophysics Data System (ADS)
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-01
Photon counting x-ray detectors (PCXDs) offer several advantages compared to standard energy-integrating x-ray detectors, but also face significant challenges. One key challenge is the high count rates required in CT. At high count rates, PCXDs exhibit count rate loss and show reduced detective quantum efficiency in signal-rich (or high flux) measurements. In order to reduce count rate requirements, a dynamic beam-shaping filter can be used to redistribute flux incident on the patient. We study the piecewise-linear attenuator in conjunction with PCXDs without energy discrimination capabilities. We examined three detector models: the classic nonparalyzable and paralyzable detector models, and a ‘hybrid’ detector model which is a weighted average of the two which approximates an existing, real detector (Taguchi et al 2011 Med. Phys. 38 1089-102 ). We derive analytic expressions for the variance of the CT measurements for these detectors. These expressions are used with raw data estimated from DICOM image files of an abdomen and a thorax to estimate variance in reconstructed images for both the dynamic attenuator and a static beam-shaping (‘bowtie’) filter. By redistributing flux, the dynamic attenuator reduces dose by 40% without increasing peak variance for the ideal detector. For non-ideal PCXDs, the impact of count rate loss is also reduced. The nonparalyzable detector shows little impact from count rate loss, but with the paralyzable model, count rate loss leads to noise streaks that can be controlled with the dynamic attenuator. With the hybrid model, the characteristic count rates required before noise streaks dominate the reconstruction are reduced by a factor of 2 to 3. We conclude that the piecewise-linear attenuator can reduce the count rate requirements of the PCXD in addition to improving dose efficiency. The magnitude of this reduction depends on the detector, with paralyzable detectors showing much greater benefit than nonparalyzable detectors.
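The three detector models compared above have simple closed forms for the observed count rate at true rate n with dead time tau; the hybrid is the weighted average described in the abstract. A minimal sketch (the specific tau and weights below would come from detector characterization):

```python
import numpy as np

def observed_rate(n, tau, alpha):
    """Observed count rate at true rate n for a detector with dead
    time tau: nonparalyzable (alpha = 0), paralyzable (alpha = 1),
    or a hybrid weighted average (0 < alpha < 1)."""
    m_nonpar = n / (1.0 + n * tau)
    m_par = n * np.exp(-n * tau)
    return (1.0 - alpha) * m_nonpar + alpha * m_par
```

The paralyzable term peaks at n = 1/tau and then collapses, which is why paralyzable behavior produces the noise streaks at high flux while the nonparalyzable term merely saturates.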
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
Geometric constrained variational calculus. II: The second variation (Part I)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
2012-12-01
acoustics. One begins with the Eikonal equation for the acoustic phase function S(t,x) as derived from the geometric acoustics (high frequency) approximation to... zb(x) is smooth and reasonably approximated as piecewise linear. The time domain ray (characteristic) equations for the Eikonal equation are ẋ(t) = c... travel time is affected, which is more physically relevant than global error in φ since it provides the phase information for the Eikonal equation (2.1
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order and L0 regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization, and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.
Investigation to advance prediction techniques of the low-speed aerodynamics of V/STOL aircraft
NASA Technical Reports Server (NTRS)
Maskew, B.; Strash, D.; Nathman, J.; Dvorak, F. A.
1985-01-01
A computer program, VSAERO, has been applied to a number of V/STOL configurations with a view to advancing prediction techniques for low-speed aerodynamic characteristics. The program couples a low-order panel method with surface streamline calculation and integral boundary layer procedures. The panel method, which uses piecewise constant source and doublet panels, includes an iterative procedure for wake shape and models the boundary layer displacement effect using the source transpiration technique. Certain improvements to a basic vortex tube jet model were installed in the code prior to evaluation. Very promising results were obtained for surface pressures near a jet issuing at 90 deg from a flat plate. A solid core model was used in the initial part of the jet with a simple entrainment model. Preliminary representation of the downstream separation zone significantly improved the correlation. The program accurately predicted the pressure distribution inside the inlet on the Grumman 698-411 design over a range of flight conditions. Furthermore, coupled viscous/potential flow calculations gave very close correlation with experimentally determined operational boundaries dictated by the onset of separation inside the inlet. Experimentally observed degradation of these operational boundaries between nacelle-alone tests and tests on the full configuration was also indicated by the calculation. Application of the program to the General Dynamics STOL fighter design was equally encouraging. Very close agreement was observed between experiment and calculation for the effects of power on pressure distribution, lift, and lift curve slope.
Slowly-rotating neutron stars in massive bigravity
NASA Astrophysics Data System (ADS)
Sullivan, A.; Yunes, N.
2018-02-01
We study slowly-rotating neutron stars in ghost-free massive bigravity. This theory modifies general relativity by introducing a second, auxiliary but dynamical tensor field that couples to the physical metric tensor, and through it to matter, via non-linear interactions. We expand the field equations to linear order in slow rotation and numerically construct solutions in the interior and exterior of the star with a set of realistic equations of state. We calculate the physical mass function with respect to observer radius and find that, unlike in general relativity, this function does not remain constant outside the star; rather, it asymptotes to a constant some distance from the surface, whose magnitude is controlled by the ratio of gravitational constants. The Vainshtein-like radius at which the physical and auxiliary mass functions asymptote to a constant is controlled by the graviton mass scaling parameter, and outside this radius bigravity modifications are suppressed. We also calculate the frame-dragging metric function and find that bigravity modifications are typically small over the entire range of coupling parameters explored. We finally calculate both the mass-radius and the moment of inertia-mass relations for a wide range of coupling parameters and find that both the graviton mass scaling parameter and the ratio of the gravitational constants introduce large modifications to both relations. These results could be used to place future constraints on bigravity with electromagnetic and gravitational-wave observations of isolated and binary neutron stars.
A pitfall of piecewise-polytropic equation of state inference
NASA Astrophysics Data System (ADS)
Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.
2018-05-01
The only messenger radiation in the Universe that one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining the gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to discuss prior definition carefully; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between the spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice of defining the prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.
Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei
2013-08-01
Fiber-Optic Gyroscope (FOG) scale factor nonlinearity introduces errors into a Strapdown Inertial Navigation System (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method based on piecewise curve fitting of the FOG output is proposed in this paper. Firstly, the causes of FOG scale factor error are introduced and a definition of the degree of nonlinearity is provided. Then the output range of the FOG is divided into several small pieces, and curve fitting is performed over each piece to obtain its scale factor parameters. Different scale factor parameters are then applied in different pieces to improve the FOG output precision. These parameters are identified using a three-axis turntable, so that the nonlinear error of the FOG scale factor can be reduced. Finally, a three-axis swing experiment of the SINS verifies that the proposed method reduces the attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor, and thereby improves the navigation precision. The experiments also demonstrate that the compensation scheme is easy to implement and effectively compensates the nonlinear error of the FOG scale factor with only a slight increase in computational complexity. The method can be used in FOG-based inertial technology to improve precision.
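The piecewise fitting and compensation scheme described in this abstract can be sketched as follows. The breakpoints, polynomial order, and simulated turntable data are illustrative assumptions, not the paper's identified parameters:

```python
import numpy as np

def fit_piecewise_scale_factor(rate, output, breakpoints, order=2):
    """Fit a separate polynomial in each piece of the FOG output range.

    rate:        reference angular rates from a turntable (illustrative)
    output:      corresponding raw FOG outputs
    breakpoints: edges dividing the FOG output range into pieces
    Returns a list of (lo, hi, coeffs) mapping raw output back to rate.
    """
    pieces = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (output >= lo) & (output <= hi)
        coeffs = np.polyfit(output[mask], rate[mask], order)
        pieces.append((lo, hi, coeffs))
    return pieces

def compensate(pieces, reading):
    """Apply the fitted polynomial of the piece containing the reading."""
    for lo, hi, coeffs in pieces:
        if lo <= reading <= hi:
            return np.polyval(coeffs, reading)
    # outside the calibrated range, fall back to the nearest end piece
    lo, hi, coeffs = pieces[0] if reading < pieces[0][0] else pieces[-1]
    return np.polyval(coeffs, reading)
```

Using low-order fits per piece, rather than one high-order fit over the whole range, keeps the correction cheap to evaluate online, which matches the abstract's claim of only slightly increased computational complexity.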
Frustration in protein elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Bahar, Ivet
2010-03-01
Elastic network models (ENMs) are widely used for studying the equilibrium dynamics of proteins. The most common approach in ENM analysis is to adopt a uniform force constant or a non-specific distance-dependent function to represent the force constant strength. Here we discuss the influence of sequence and structure in determining the effective force constants between residues in ENMs. Using a novel method based on entropy maximization, we optimize the force constants such that they exactly reproduce a subset of experimentally determined pair covariances for a set of proteins. We analyze the optimized force constants in terms of amino acid types, distances, contact order and secondary structure, and we demonstrate that including frustrated interactions in the ENM is essential for accurately reproducing the global modes in the middle of the frequency spectrum.
Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source
NASA Astrophysics Data System (ADS)
Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.
2017-11-01
Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single-source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach that fits the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows higher accuracy in attenuation modelling, demonstrate its superior quantitative imaging with numerical chest and metal-implant data, and validate it with real cone-beam CT measurements.
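A continuous piecewise-linear least-squares fit of the kind described can be sketched with a hat-function (linear B-spline) basis. The knot placement, variable names, and NumPy formulation below are illustrative assumptions, not the paper's calibration procedure:

```python
import numpy as np

def fit_piecewise_linear(x, y, knots):
    """Least-squares fit of a continuous piecewise-linear map y ~ f(x).

    Each basis column is the hat function centred on one knot, obtained
    by linearly interpolating a unit vector over the knot sequence, so
    continuity at the knots is built into the representation.
    """
    knots = np.asarray(knots, dtype=float)
    A = np.column_stack([np.interp(x, knots, np.eye(len(knots))[j])
                         for j in range(len(knots))])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    # the fitted map is itself piecewise linear through (knots, coeffs)
    return lambda xq: np.interp(xq, knots, coeffs)
```

Because the fitted values at the knots are the only free parameters, evaluating the map afterwards reduces to a single `np.interp` call, which is the kind of cheap per-voxel operation an iterative reconstruction loop can afford.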
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be determined empirically by several factors, including the model resolution and the degree of non-smoothness in the dataset; it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications is made available.
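The bilinear interpolant used within each grid cell can be sketched as follows; this is a minimal illustration of the cell-level building block only, and does not reproduce the paper's recursion relations or integration strategy:

```python
def bilinear(f00, f01, f10, f11, s, t):
    """Bilinear interpolation inside one equiangular grid cell.

    (s, t) are fractional coordinates in [0, 1] along the two grid
    directions; f00..f11 are the values at the four cell corners.
    The interpolant is continuous across cells, but its slope may jump
    at cell boundaries, which is exactly the piecewise-continuous
    behaviour the method is designed to integrate exactly.
    """
    return ((1 - s) * (1 - t) * f00 + s * (1 - t) * f10
            + (1 - s) * t * f01 + s * t * f11)
```

Because the interpolant is a low-order polynomial in `s` and `t`, its product with the spherical harmonic kernels can be integrated cell by cell in closed form, which is what makes an exact recursion-based quadrature possible.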
Research on transient thermal process of a friction brake during repetitive cycles of operation
NASA Astrophysics Data System (ADS)
Slavchev, Yanko; Dimitrov, Lubomir; Dimitrov, Yavor
2017-12-01
Simplified models are used in classical engineering analyses of friction brake heating temperature during repetitive cycles of operation, mainly to determine the maximum and minimum brake temperatures. The objective of the present work is to broaden and complement these analyses through a model that is based on the classical scheme of Newton's law of cooling and improves on it by adding a disturbance function for the corresponding braking process. A general case of braking in a non-periodic repetitive mode is considered, for which a piecewise function is defined to apply pulsed thermal loads to the system. Cases with rectangular and triangular waveforms are presented. A periodic repetitive braking process is also studied, using a periodic rectangular waveform until a steady thermal state is achieved. Different numerical methods, such as Euler's method, the classical fourth-order Runge-Kutta (RK4), and Runge-Kutta-Fehlberg 4-5 (RKF45), are used to solve the non-linear differential equation of the model. The constructed model allows the time for reaching the steady thermal state of the brake to be determined efficiently during preliminary engineering calculations, actual braking modes in vehicles and material handling machines to be simulated, and the thermal impact to be accounted for when performing fatigue calculations.
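The model class described here, Newton's law of cooling driven by a pulsed disturbance function and integrated with classical RK4, can be sketched as follows. The specific equation form, parameter names, and rectangular pulse train are illustrative assumptions; only the RK4 scheme is the standard textbook formula:

```python
def rect_pulse_train(t, q_peak, t_brake, t_cycle):
    """Rectangular disturbance: heat input q_peak during each braking
    phase of length t_brake, repeated every t_cycle seconds."""
    return q_peak if (t % t_cycle) < t_brake else 0.0

def rk4_temperature(t_end, dt, T0, T_amb, mc, hA, q):
    """Integrate m*c*dT/dt = q(t) - h*A*(T - T_amb) with classical RK4.

    mc is the brake's thermal capacity, hA its convective loss
    coefficient, and q(t) the braking heat-input disturbance. Choose dt
    so pulse edges fall on step boundaries, since RK4 assumes a smooth
    right-hand side within each step.
    """
    def f(t, T):
        return (q(t) - hA * (T - T_amb)) / mc
    t, T = 0.0, T0
    history = [(t, T)]
    while t < t_end - 1e-12:
        k1 = f(t, T)
        k2 = f(t + dt / 2, T + dt * k1 / 2)
        k3 = f(t + dt / 2, T + dt * k2 / 2)
        k4 = f(t + dt, T + dt * k3)
        T += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
        history.append((t, T))
    return history
```

Scanning `history` for the point where successive cycle maxima stop growing gives the time to reach the steady thermal state, which is the quantity the abstract highlights for preliminary engineering calculations.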