A discrete fractional random transform
NASA Astrophysics Data System (ADS)
Liu, Zhengjun; Zhao, Haifa; Liu, Shutian
2005-11-01
We propose a discrete fractional random transform based on a generalization of the discrete fractional Fourier transform with intrinsic randomness. This discrete fractional random transform inherits the excellent mathematical properties of the fractional Fourier transform along with some remarkable features of its own. As a primary application, the discrete fractional random transform has been used for image encryption and decryption.
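The idea can be illustrated with a small numerical sketch. The construction below is a generic stand-in, not the authors' exact kernel: a random symmetric matrix supplies an orthonormal eigenvector basis, and fractional orders come from scaling the eigenphases. This already gives the unitarity and index additivity that an encryption/decryption pair relies on.

```python
import numpy as np

def dfrnt_matrix(n, alpha, seed=0):
    # Random symmetric matrix -> real orthonormal eigenvector basis V.
    rng = np.random.default_rng(seed)
    s = rng.random((n, n))
    _, V = np.linalg.eigh(s + s.T)
    # Fractional-order kernel: unit-modulus phases scale linearly with alpha,
    # so R^a R^b = R^(a+b) (index additivity) and R is unitary.
    phases = np.exp(-1j * np.pi * alpha * np.arange(n))
    return V @ np.diag(phases) @ V.T

n = 8
R1 = dfrnt_matrix(n, 0.3)
R2 = dfrnt_matrix(n, 0.7)
assert np.allclose(R1 @ R1.conj().T, np.eye(n))       # unitarity
assert np.allclose(R1 @ R2, dfrnt_matrix(n, 1.0))     # additivity

# Encryption/decryption: applying order -alpha inverts order alpha.
sig = np.arange(n, dtype=float)
enc = dfrnt_matrix(n, 0.3) @ sig
dec = dfrnt_matrix(n, -0.3) @ enc
print(np.allclose(dec.real, sig))
```

With a secret seed, the eigenvector basis itself acts as the key: a receiver without the same random matrix cannot invert the transform.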
Time to the Doctorate: Multilevel Discrete-Time Hazard Analysis
ERIC Educational Resources Information Center
Wao, Hesborn O.
2010-01-01
Secondary data on 1,028 graduate students nested within 24 programs and admitted into either a Ph.D. or Ed.D. program between 1990 and 2006 at an American public university were used to illustrate the benefits of employing multilevel discrete-time hazard analysis in understanding the timing of doctorate completion in Education and the factors…
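The discrete-time hazard idea can be illustrated without the multilevel machinery. In the sketch below the data are invented: durations are expanded year by year, and the hazard in each year is estimated as completions divided by students still at risk (the life-table form of the model).

```python
import numpy as np

# Invented toy data: year of completion, or last year observed
# (event = 0 marks a censored, still-enrolled student).
years = np.array([4, 5, 5, 6, 7, 7, 8, 8])
event = np.array([1, 1, 0, 1, 1, 0, 1, 1])

# Discrete-time hazard h(t) = P(completion in year t | at risk in year t)
hazard = {}
for t in range(4, 9):
    at_risk = int(np.sum(years >= t))
    done = int(np.sum((years == t) & (event == 1)))
    hazard[t] = done / at_risk
print(hazard)
```

In the multilevel version these per-period probabilities become logistic regressions on person-period records, with random effects for the 24 programs capturing between-program variation in completion timing.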
Discrete fluorescent saturation regimes in multilevel systems
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1988-01-01
Using models of multilevel atoms, the fluorescent process was examined through the ratio of the photoexcitation rate, Pij, to the collisional excitation rate, Cij, in the pumped resonance transition i-j. It is shown that, over the full range of the parameter Pij/Cij, there exist three distinct regimes (I, II, and III) which may be usefully exploited. These regimes are defined, respectively, by the following conditions: Pij/Cij smaller than about 1; Pij/Cij much greater than 1 and Pij much lower than Cki; and Pij/Cij much greater than 1 and Pij much higher than Cki, where Cki is the collisional rate populating the source level i. The only regime in which fluorescent-fluorescent line intensity ratios are sensitive to Pij is regime I. If regime III is reached, even fluorescent-nonfluorescent line ratios become independent of Pij. The analysis is applied to the resonant photoexcitation of a carbonlike ion.
Handling Correlations between Covariates and Random Slopes in Multilevel Models
ERIC Educational Resources Information Center
Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…
Multilevel Analysis Methods for Partially Nested Cluster Randomized Trials
ERIC Educational Resources Information Center
Sanders, Elizabeth A.
2011-01-01
This paper explores multilevel modeling approaches for 2-group randomized experiments in which a treatment condition involving clusters of individuals is compared to a control condition involving only ungrouped individuals, otherwise known as partially nested cluster randomized designs (PNCRTs). Strategies for comparing groups from a PNCRT in the…
ERIC Educational Resources Information Center
Zhu, Xiaoshu
2013-01-01
The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…
Randomness and multilevel interactions in biology.
Buiatti, Marcello; Longo, Giuseppe
2013-09-01
The dynamic instability of living systems and the "superposition" of different forms of randomness are viewed, in this paper, as components of the contingently changing, or even increasing, organization of life through ontogenesis or evolution. To this purpose, we first survey how classical and quantum physics define randomness differently. We then discuss why this requires, in our view, an enriched understanding of the effects of their concurrent presence in biological systems' dynamics. Biological randomness is then presented not only as an essential component of the heterogeneous determination and intrinsic unpredictability proper to life phenomena, due to the nesting of, and interaction between many levels of organization, but also as a key component of its structural stability. We will note as well that increasing organization, while increasing "order", induces growing disorder, not only by energy dispersal effects, but also by increasing variability and differentiation. Finally, we discuss the cooperation between diverse components in biological networks; this cooperation implies the presence of constraints due to the particular nature of bio-entanglement and bio-resonance, two notions to be reviewed and defined in the paper. PMID:23637008
Multilevel, discrete, point-interval data can predict bioattenuation's potential
Kabis, T.W.
1996-05-01
Closely spaced, discrete-interval groundwater sampling is critical for monitoring aqueous-phase contaminants at hazardous waste sites. Data obtained from discrete-interval sampling devices accurately represent the horizontal and vertical extents of contaminant plumes. Discrete point-interval groundwater sampling has been tested in various applications, including natural- and forced-gradient tracer tests, plume delineation, and identification of discrete zones of microbial activity and vertical chemical gradients within an aquifer. An increasingly popular strategy is to avoid active remediation in favor of natural, in-situ attenuation processes. Evaluation of natural attenuation, particularly in-situ bioremediation resulting from multiple terminal electron acceptors, requires site-specific data. Here, multilevel, discrete, point-interval groundwater sampling data are critical. Because groundwater samples from standard monitoring wells often are derived from large vertical sampling zones, the resulting data present a smeared picture of chemical-microbial conditions; moreover, the potential for natural in-situ bioattenuation can be under- or overestimated without multilevel, discrete, point-interval data. Discrete, point-interval samplers provide a sound basis for evaluating remediation options. One potential obstacle has been the lack of a multipurpose, cost-effective sampler that is operational under a variety of field conditions.
Multilevel models for survival analysis with random effects.
Yau, K K
2001-03-01
A method for modeling survival data with multilevel clustering is described. The Cox partial likelihood is incorporated into the generalized linear mixed model (GLMM) methodology. Parameter estimation is achieved by maximizing a log likelihood analogous to the likelihood associated with the best linear unbiased prediction (BLUP) at the initial step of estimation and is extended to obtain residual maximum likelihood (REML) estimators of the variance component. Estimating equations for a three-level hierarchical survival model are developed in detail, and such a model is applied to analyze a set of chronic granulomatous disease (CGD) data on recurrent infections as an illustration with both hospital and patient effects being considered as random. Only the latter gives a significant contribution. A simulation study is carried out to evaluate the performance of the REML estimators. Further extension of the estimation procedure to models with an arbitrary number of levels is also discussed. PMID:11252624
The ergodic decomposition of stationary discrete random processes
NASA Technical Reports Server (NTRS)
Gray, R. M.; Davisson, L. D.
1974-01-01
The ergodic decomposition is discussed, and a version focusing on the structure of individual sample functions of stationary processes is proved for the special case of discrete-time random processes with discrete alphabets. The result is stronger in this case than the usual theorem, and the proof is both intuitive and simple. Estimation-theoretic and information-theoretic interpretations are developed and applied to prove existence theorems for universal source codes, both noiseless and with a fidelity criterion.
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2016-05-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of
A discrete time random walk model for anomalous diffusion
NASA Astrophysics Data System (ADS)
Angstmann, C. N.; Donnelly, I. C.; Henry, B. I.; Nichols, J. A.
2015-07-01
The continuous time random walk, introduced in the physics literature by Montroll and Weiss, has been widely used to model anomalous diffusion in external force fields. One of the features of this model is that the governing equations for the evolution of the probability density function, in the diffusion limit, can generally be simplified using fractional calculus. This has in turn led to intensive research efforts over the past decade to develop robust numerical methods for the governing equations, represented as fractional partial differential equations. Here we introduce a discrete time random walk that can also be used to model anomalous diffusion in an external force field. The governing evolution equations for the probability density function share the continuous time random walk diffusion limit. Thus the discrete time random walk provides a novel numerical method for solving anomalous diffusion equations in the diffusion limit, including the fractional Fokker-Planck equation. This method has the clear advantage that the discretisation of the diffusion limit equation, which is necessary for numerical analysis, is itself a well defined physical process. Some examples using the discrete time random walk to provide numerical solutions of the probability density function for anomalous subdiffusion, including forcing, are provided.
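A minimal simulation conveys the subdiffusive mechanism. The waiting-time law below is an assumption standing in for the paper's exact DTRW densities: a Pareto-type discrete distribution with tail exponent alpha, so heavy-tailed pauses between jumps make the mean-squared displacement grow far more slowly than the diffusive value.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, walkers, T = 0.5, 20000, 1000

def waiting(size):
    # Discrete heavy-tailed waiting times with P(tau > n) ~ n^(-alpha)
    # (Pareto-type; capped only to guard against integer overflow).
    vals = np.ceil(rng.random(size) ** (-1.0 / alpha))
    return np.minimum(vals, 1e12).astype(np.int64)

pos = np.zeros(walkers)
t_next = waiting(walkers)            # time of each walker's next jump
for t in range(1, T + 1):
    move = t_next <= t
    pos[move] += rng.choice([-1, 1], size=move.sum())
    t_next[move] = t + waiting(move.sum())

msd = np.mean(pos ** 2)
# Subdiffusion: MSD ~ t^alpha, far below the diffusive value ~T.
print(msd)
```

For alpha = 0.5 the typical number of jumps by time T scales like sqrt(T), so the MSD sits orders of magnitude below what a simple random walk would give; letting alpha approach 1 recovers ordinary diffusion.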
Discrete Randomness in Discrete Time Quantum Walk: Study Via Stochastic Averaging
NASA Astrophysics Data System (ADS)
Ellinas, D.; Bracken, A. J.; Smyrnakis, I.
2012-10-01
The role of classical noise in quantum walks (QW) on the integers is investigated in the form of a discrete dichotomic random variable affecting the reshuffling matrix, parametrized as an SU(2)/U(1) coset element. Analysis in terms of quantum statistical moments and generating functions, derived from the completely positive trace preserving (CPTP) map governing the evolution, reveals a pronounced eventual transition in the walk's diffusion mode, from a quantum ballistic regime with rate O(t) to a classical diffusive regime with rate O(√t), when the condition (noise strength)² × (number of steps) = 1 is satisfied. The role of classical randomness is studied, showing that the randomized QW, when treated at the stochastic-average level by means of an appropriate CPTP averaging map, turns out to be equivalent to a novel quantized classical walk without randomness. This result emphasizes the dual role of quantization/randomization in the context of the classical random walk.
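The noiseless limit of such a walk is easy to reproduce numerically. The sketch below uses a standard Hadamard coin as a common stand-in for the coset-parametrized reshuffling matrix, and exhibits the ballistic O(t) spreading that the classical noise eventually degrades to O(√t).

```python
import numpy as np

def hadamard_walk(steps):
    # amp[position, coin]; the walker starts at the origin (index `steps`)
    amp = np.zeros((2 * steps + 1, 2), dtype=complex)
    amp[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric coin state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ H.T                  # coin toss
        new = np.zeros_like(amp)
        new[1:, 0] = amp[:-1, 0]         # coin 0 shifts right
        new[:-1, 1] = amp[1:, 1]         # coin 1 shifts left
        amp = new
    return (np.abs(amp) ** 2).sum(axis=1)

steps = 60
p = hadamard_walk(steps)
x = np.arange(-steps, steps + 1)
sigma = np.sqrt((p * x**2).sum() - (p * x).sum() ** 2)
# Ballistic spread: sigma grows linearly in t (about 0.54 t for the
# Hadamard coin), well above the classical diffusive sqrt(t).
print(sigma, np.sqrt(steps))
```

Adding a random perturbation to the coin angle at each step and averaging over realizations is the numerical analogue of the CPTP averaging map discussed in the abstract: the quadratic growth of the position variance is then progressively replaced by linear, classical growth.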
Discrete Random Media Techniques for Microwave Modeling of Vegetated Terrain
NASA Technical Reports Server (NTRS)
Lang, R. H.
1984-01-01
Microwave remote sensing of agricultural crops and forested regions is studied. Long-term goals of the research involve modeling vegetation so that radar signatures can be used to infer the parameters which characterize the vegetation and the underlying ground. Vegetation is modeled by discrete scatterers, viz. leaves, stems, branches, and trunks. These are represented by lossy dielectric discs and cylinders. Rough surfaces are represented by their mean and spectral characteristics. Average scattered power is then calculated by employing discrete random media methodology such as the distorted Born approximation or transport theory. Both coherent and incoherent multiple scattering techniques are explored. Once direct methods are developed, inversion techniques can be investigated.
Asymptotic Effect of Misspecification in the Random Part of the Multilevel Model
ERIC Educational Resources Information Center
Berkhof, Johannes; Kampen, Jarl Kennard
2004-01-01
The authors examine the asymptotic effect of omitting a random coefficient in the multilevel model and derive expressions for the change in (a) the variance components estimator and (b) the estimated variance of the fixed effects estimator. They apply the method of moments, which yields a closed form expression for the omission effect. In…
ERIC Educational Resources Information Center
Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.
2006-01-01
The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…
A discrete impulsive model for random heating and Brownian motion
NASA Astrophysics Data System (ADS)
Ramshaw, John D.
2010-01-01
The energy of a mechanical system subjected to a random force with zero mean increases irreversibly and diverges with time in the absence of friction or dissipation. This random heating effect is usually encountered in phenomenological theories formulated in terms of stochastic differential equations, the epitome of which is the Langevin equation of Brownian motion. We discuss a simple discrete impulsive model that captures the essence of random heating and Brownian motion. The model may be regarded as a discrete analog of the Langevin equation, although it is developed ab initio. Its analysis requires only simple algebraic manipulations and elementary averaging concepts, but no stochastic differential equations (or even calculus). The irreversibility in the model is shown to be a consequence of a natural causal stochastic condition that is closely analogous to Boltzmann's molecular chaos hypothesis in the kinetic theory of gases. The model provides a simple introduction to several ostensibly more advanced topics, including random heating, molecular chaos, irreversibility, Brownian motion, the Langevin equation, and fluctuation-dissipation theorems.
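Both regimes are easy to reproduce with a discrete impulse sequence (the parameter values here are arbitrary): without friction the mean-square velocity grows linearly with the number of kicks, while a friction factor λ per step yields the stationary value σ²/(1 − λ²), a discrete fluctuation-dissipation relation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, lam, steps, ens = 0.1, 0.95, 2000, 5000
kicks = rng.normal(0, sigma, (ens, steps))   # zero-mean random impulses

# No friction: v_{n+1} = v_n + a_n, so <v^2> = steps * sigma^2 (heating)
v = kicks.sum(axis=1)

# With friction: v_{n+1} = lam * v_n + a_n relaxes to a stationary state
# with <v^2> = sigma^2 / (1 - lam^2) (fluctuation-dissipation balance)
v2 = np.zeros(ens)
for n in range(steps):
    v2 = lam * v2 + kicks[:, n]

print(np.mean(v**2), steps * sigma**2)
print(np.mean(v2**2), sigma**2 / (1 - lam**2))
```

The first ensemble average diverges linearly as `steps` grows, which is the random heating effect; the second saturates, showing how dissipation tames it exactly as in the continuous Langevin description.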
Multilevel Cell Storage and Resistance Variability in Resistive Random Access Memory
NASA Astrophysics Data System (ADS)
Pantelis, D. I.; Karakizis, P. N.; Dragatogiannis, D. A.; Charitidis, C. A.
2016-06-01
Multilevel per cell (MLC) storage in resistive random access memory (ReRAM) is attractive for achieving high-density, low-cost memory and will be required in the future. In this chapter, MLC storage and the resistance variability and reliability of multilevel operation in ReRAM are discussed. Different MLC operation schemes with their physical mechanisms and a comprehensive analysis of resistance variability are provided. Various factors that can induce variability and their effect on the resistance margin between the multiple resistance levels are assessed. The reliability characteristics and their impact on MLC storage are also assessed.
Weighted discrete least-squares polynomial approximation using randomized quadratures
NASA Astrophysics Data System (ADS)
Zhou, Tao; Narayan, Akil; Xiu, Dongbin
2015-10-01
We discuss the problem of polynomial approximation of multivariate functions using discrete least squares collocation. The problem stems from uncertainty quantification (UQ), where the independent variables of the functions are random variables with specified probability measure. We propose to construct the least squares approximation on points randomly and uniformly sampled from tensor product Gaussian quadrature points. We analyze the stability properties of this method and prove that the method is asymptotically stable, provided that the number of points scales linearly (up to a logarithmic factor) with the cardinality of the polynomial space. Specific results in both bounded and unbounded domains are obtained, along with a convergence result for Chebyshev measure. Numerical examples are provided to verify the theoretical results.
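A two-dimensional sketch of the sampling idea, assuming the uniform (Legendre) case: take the candidate set to be a tensor Gauss-Legendre grid, subsample uniformly with a few times the cardinality of the polynomial space, and solve the discrete least-squares problem. A target that lies in the polynomial space is then recovered to machine precision.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander2d

rng = np.random.default_rng(3)
# Tensor-product Gauss-Legendre grid on [-1, 1]^2
x, _ = leggauss(20)
X, Y = np.meshgrid(x, x)
grid = np.column_stack([X.ravel(), Y.ravel()])

# Subsample randomly and uniformly, linearly in the basis cardinality
deg = 3
card = (deg + 1) ** 2
idx = rng.choice(len(grid), size=4 * card, replace=False)
pts = grid[idx]

f = lambda u, v: 1 + u * v + u**2 - 0.5 * v**3   # target polynomial
A = legvander2d(pts[:, 0], pts[:, 1], [deg, deg])
coef, *_ = np.linalg.lstsq(A, f(pts[:, 0], pts[:, 1]), rcond=None)
# The degree-3 target lies in the basis, so the residual vanishes
resid = np.linalg.norm(A @ coef - f(pts[:, 0], pts[:, 1]))
print(resid)
```

The stability claim of the paper concerns exactly this regime: the number of sampled quadrature points need only scale linearly (up to a logarithmic factor) with `card` for the least-squares system to remain well conditioned.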
Rosvall, Martin; Bergstrom, Carl T.
2011-01-01
To comprehend the hierarchical organization of large integrated systems, we introduce the hierarchical map equation, which reveals multilevel structures in networks. In this information-theoretic approach, we exploit the duality between compression and pattern detection; by compressing a description of a random walker as a proxy for real flow on a network, we find regularities in the network that induce this system-wide flow. Finding the shortest multilevel description of the random walker therefore gives us the best hierarchical clustering of the network — the optimal number of levels and modular partition at each level — with respect to the dynamics on the network. With a novel search algorithm, we extract and illustrate the rich multilevel organization of several large social and biological networks. For example, from the global air traffic network we uncover countries and continents, and from the pattern of scientific communication we reveal more than 100 scientific fields organized in four major disciplines: life sciences, physical sciences, ecology and earth sciences, and social sciences. In general, we find shallow hierarchical structures in globally interconnected systems, such as neural networks, and rich multilevel organizations in systems with highly separated regions, such as road networks. PMID:21494658
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander
2016-06-01
Multiple imputation (MI) has become one of the main procedures used to treat missing data, but the guidelines from the methodological literature are not easily transferred to multilevel research. For models including random slopes, proper MI can be difficult, especially when the covariate values are partially missing. In the present article, we discuss applications of MI in multilevel random-coefficient models, theoretical challenges posed by slope variation, and the current limitations of standard MI software. Our findings from three simulation studies suggest that (a) MI is able to recover most parameters, but is currently not well suited to capture slope variation entirely when covariate values are missing; (b) MI offers reasonable estimates for most parameters, even in smaller samples or when its assumptions are not met; and… PMID:25939979
Genome mapping by random anchoring: A discrete theoretical analysis
NASA Astrophysics Data System (ADS)
Zhang, M. Q.; Marr, T. G.
1993-11-01
As a part of the international human genome project, large-scale genomic maps of human and other model organisms are being generated. More recently, mapping using various anchoring (as opposed to the traditional "fingerprinting") strategies has been proposed based largely on mathematical models. In all of the theoretical work dealing with anchoring, an anchor has been idealized as a point on a continuous, infinite-length genome. In general, it is not desirable to make these assumptions, since in practice they may be violated under a variety of actual biological situations. Here we analyze a discrete model that can be used to predict the expected progress made when mapping by random anchoring. By virtue of keeping all three length scales (genome length, clone length, and probe length) finite, our results for the random anchoring strategy are derived in full generality, which contain previous results as special cases and hence can have broad application for planning mapping experiments or assessing the accuracy of the continuum models. Finally, we pose a challenging nonrandom anchoring model corresponding to a more efficient mapping scheme.
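On a circular discrete genome, keeping the lengths finite amounts to a widened overlap window: an anchor of length p can hit a clone of length L from any of L + p − 1 start positions, so with m random anchors a clone is anchored with probability 1 − (1 − (L + p − 1)/G)^m. A quick simulation with illustrative parameters matches this finite-length formula.

```python
import numpy as np

rng = np.random.default_rng(7)
G, L, p, m = 100_000, 150, 20, 400    # genome, clone, probe lengths; anchors
prob_hit = 1 - (1 - (L + p - 1) / G) ** m

trials, clones = 200, 500
anchored = 0
for _ in range(trials):
    c = rng.integers(0, G, clones)     # clone start positions
    a = rng.integers(0, G, m)          # anchor start positions
    d = (a[None, :] - c[:, None]) % G  # circular offset anchor - clone
    hit = (d < L) | (d > G - p)        # window of L + p - 1 start offsets
    anchored += int(hit.any(axis=1).sum())

frac = anchored / (trials * clones)
print(frac, prob_hit)
```

Letting p shrink to a point recovers the continuum window L/G, which is why the discrete result contains the earlier point-anchor theory as a special case.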
Multibeam Approach to Pulse Scattering from Discrete Random Media.
NASA Astrophysics Data System (ADS)
Kilic, Ozlem
The problem of a pulsed aperture illuminating a two-dimensional layer of discrete random medium over a flat, homogeneous background is considered. The layer consists of sparsely distributed dielectric scatterers that are randomly oriented in space. The behavior of the backscattered signal from the medium is examined in the time domain for the case of a short pulse incidence. The excitation of the antenna is assumed to be arbitrary, and is represented as a discrete sum of shifted and tilted Gaussian beams by using Gabor expansion. The expansion is exact, and with the appropriate choice of beam parameters, the radiation pattern can be matched by considering only a few beams. The beams in the expansion represent the main and side lobes of the antenna, making it possible to examine the individual effects of each lobe on the backscattered signature. The received power at the antenna is composed of a coherent and an incoherent part associated respectively with the mean and the fluctuating parts of the scattered fields from the medium. The coherent term is observed as a sharp peak, while the incoherent term builds up and decays more slowly. The incoherent response is dominated by three terms: direct, direct reflected and reflected. The direct term is associated with the volume scattering and arrives first at the antenna. The second term observed at the antenna is the direct reflected component, which results from a single interaction between the scatterer and the ground. Depending on the attenuation inside the medium, this term can be the strongest return at the antenna. The reflected term which involves a double bounce from the ground arrives last due to the relatively longer path it travels. Using the multibeam representation, it is possible to examine individual returns from the side and main lobes of the antenna radiation pattern. The interference of the antenna lobes with each other can also be formulated via the multibeam representation. Strong responses from the
ERIC Educational Resources Information Center
Petras, Hanno; Masyn, Katherine E.; Buckley, Jacquelyn A.; Ialongo, Nicholas S.; Kellam, Sheppard
2011-01-01
The focus of this study was to prospectively investigate the effect of aggressive behavior and of classroom behavioral context, as measured in the fall of 1st grade, on the timing of 1st school removal across Grades 1-7 in a sample of predominately urban minority youths from Baltimore, Maryland. Using a multilevel discrete-time survival framework,…
Emergence of multilevel selection in the prisoner's dilemma game on coevolving random networks
NASA Astrophysics Data System (ADS)
Szolnoki, Attila; Perc, Matjaž
2009-09-01
We study the evolution of cooperation in the prisoner's dilemma game, whereby a coevolutionary rule is introduced that molds the random topology of the interaction network in two ways. First, existing links are deleted whenever a player adopts a new strategy or its degree exceeds a threshold value; second, new links are added randomly after a given number of game iterations. These coevolutionary processes correspond to the generic formation of new links and deletion of existing links that, especially in human societies, appear frequently as a consequence of ongoing socialization, change of lifestyle or death. Due to the counteraction of deletions and additions of links the initial heterogeneity of the interaction network is qualitatively preserved, and thus cannot be held responsible for the observed promotion of cooperation. Indeed, the coevolutionary rule evokes the spontaneous emergence of a powerful multilevel selection mechanism, which despite the sustained random topology of the evolving network, maintains cooperation across the whole span of defection temptation values.
ERIC Educational Resources Information Center
Wang, Wen-Chung; Jin, Kuan-Yu
2010-01-01
In this study, all the advantages of slope parameters, random weights, and latent regression are acknowledged when dealing with component and composite items by adding slope parameters and random weights into the standard item response model with internal restrictions on item difficulty and formulating this new model within a multilevel framework…
ERIC Educational Resources Information Center
Candel, Math J. J. M.; Winkens, Bjorn
2003-01-01
Multilevel analysis is a useful technique for analyzing longitudinal data. To describe a person's development across time, the quality of the estimates of the random coefficients, which relate time to individual changes in a relevant dependent variable, is of importance. The present study compares three estimators of the random coefficients: the…
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Yang, Jianping; Tan, Changfa; Pan, Shumin; Zhou, Zhihong
2015-11-01
A new discrete fractional random transform based on two circular matrices is designed and a novel double-image encryption-compression scheme is proposed by combining compressive sensing with discrete fractional random transform. The two random circular matrices and the measurement matrix utilized in compressive sensing are constructed by using a two-dimensional sine Logistic modulation map. Two original images can be compressed, encrypted with compressive sensing and connected into one image. The resulting image is re-encrypted by Arnold transform and the discrete fractional random transform. Simulation results and security analysis demonstrate the validity and security of the scheme.
Self-consistent quasiparticle random-phase approximation for a multilevel pairing model
Hung, N. Quang; Dang, N. Dinh
2007-11-15
Particle-number projection within the Lipkin-Nogami (LN) method is applied to the self-consistent quasiparticle random-phase approximation (SCQRPA), which is tested in an exactly solvable multilevel pairing model. The SCQRPA equations are numerically solved to find the energies of the ground and excited states at various numbers Ω of doubly degenerate equidistant levels. The use of the LN method allows one to avoid the collapse of the BCS (QRPA) and to obtain the energies of the ground and excited states as smooth functions of the interaction parameter G. The comparison between results given by different approximations such as the SCRPA, QRPA, LNQRPA, SCQRPA, and LNSCQRPA is carried out. Although the use of the LN method significantly improves the agreement with the exact results in the intermediate coupling region, we found that in the strong coupling region the SCQRPA results are closest to the exact ones.
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
One-dimensional random field Ising model and discrete stochastic mappings
Behn, U.; Zagrebnov, V.A.
1987-06-01
Previous results relating the one-dimensional random field Ising model to a discrete stochastic mapping are generalized to a two-valued correlated random (Markovian) field and to the case of zero temperature. The fractal dimension of the support of the invariant measure is calculated in a simple approximation and its dependence on the physical parameters is discussed.
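The mapping in question can be sketched with the standard effective-field recursion for the 1D chain (the parameter values here are arbitrary): iterating it with a dichotomic random field drives the effective field onto the invariant support of the map, which is confined to the band |x| ≤ h0 + J.

```python
import numpy as np

rng = np.random.default_rng(4)
beta, J, h0, n = 2.0, 1.0, 0.7, 100_000

# Effective-field recursion for the 1D random field Ising chain:
#   x_{i+1} = h_{i+1} + (1/beta) * artanh( tanh(beta*J) * tanh(beta*x_i) )
# driven here by an i.i.d. dichotomic field h = +/- h0 (the uncorrelated
# limit of the Markovian field treated in the paper).
h = rng.choice([-h0, h0], size=n)
x = np.empty(n)
x[0] = 0.0
tJ = np.tanh(beta * J)
for i in range(1, n):
    x[i] = h[i] + np.arctanh(tJ * np.tanh(beta * x[i - 1])) / beta

# The invariant measure lives inside |x| <= h0 + J; for weak coupling the
# two field branches separate and its support becomes fractal.
print(x.min(), x.max())
```

A histogram of the iterates approximates the invariant measure whose fractal dimension the abstract discusses; at zero temperature the artanh/tanh pair degenerates into a piecewise-linear map.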
Random sequential adsorption of polydisperse mixtures on discrete substrates.
Budinski-Petković, Lj; Vrhovac, S B; Loncarević, I
2008-12-01
We study random sequential adsorption of polydisperse mixtures of extended objects on both a triangular and a square lattice. The depositing objects are formed by self-avoiding random walks on two-dimensional lattices. Numerical simulations were performed to determine the influence of the number of mixture components and of the length of the shapes making up the mixture on the kinetics of the deposition process. We find that the late-stage deposition kinetics follows an exponential law θ(t) ≈ θ_jam − A exp(−t/σ), not only for the whole mixture but also for the individual components. We discuss in detail how quantities such as the jamming coverage θ_jam and the relaxation time σ depend on the mixture composition. Our results suggest that the order of the symmetry axis of the shape may exert a decisive influence on the adsorption kinetics of each mixture component. PMID:19256849
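For the simplest monodisperse case, dimers on a square lattice, the jammed state can even be sampled exactly: because adsorption is irreversible, a placement that fails once fails forever, so processing a single uniformly shuffled pass over all possible placements reproduces the RSA jamming limit (coverage near 0.906).

```python
import random
import numpy as np

random.seed(5)
L = 100
occ = np.zeros((L, L), dtype=bool)

# Every possible dimer placement: site plus horizontal/vertical orientation,
# on a periodic lattice.
moves = [(x, y, o) for x in range(L) for y in range(L) for o in (0, 1)]
random.shuffle(moves)

# One shuffled pass reaches the jammed state: no further dimer can fit.
for x, y, o in moves:
    x2, y2 = ((x + 1) % L, y) if o == 0 else (x, (y + 1) % L)
    if not occ[x, y] and not occ[x2, y2]:
        occ[x, y] = occ[x2, y2] = True

theta = occ.mean()
print(theta)   # jamming coverage of dimers on the square lattice
```

The polydisperse mixtures of the paper replace the dimer by a list of self-avoiding-walk shapes drawn with prescribed concentrations; the single-pass trick still applies shape by shape as long as deposition remains irreversible.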
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and the sales volume of home electronics.
NASA Astrophysics Data System (ADS)
Müller, Florian; Jenny, Patrick; Meyer, Daniel
2014-05-01
To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess the transition of the input uncertainty in permeability through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance, as well as stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
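The MLMC idea summarized above rests on the telescoping identity E[Q_L] = E[Q_0] + Σ_{l=1..L} E[Q_l − Q_{l−1}], with most samples taken on cheap coarse grids. A minimal sketch with a synthetic stand-in "solver" (the solver, its noise model, and the sample counts are all hypothetical, not the reservoir simulator of the paper):

```python
import random

def solver(sample, level):
    """Stand-in for a flow/transport solver on grid level `level`;
    finer levels carry smaller discretisation error (here: smaller noise)."""
    return sample + 2.0 ** (-level) * random.gauss(0.0, 0.1)

def mlmc_estimate(n_per_level, max_level, draw=lambda: random.gauss(1.0, 1.0)):
    """Telescoping MLMC estimator:
    E[Q_L] = E[Q_0] + sum_{l=1..L} E[Q_l - Q_{l-1}],
    with each level-difference term averaged over its own sample batch."""
    total = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(n_per_level[level]):
            s = draw()  # same random input on both grids of a level pair
            fine = solver(s, level)
            coarse = solver(s, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n_per_level[level]
    return total
```

Because the level differences Q_l − Q_{l−1} have small variance, only a few samples are needed on the fine grids, which is where MLMC's speed-up over plain MC comes from.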
NASA Astrophysics Data System (ADS)
Wang, Ping; Yuan, Hongwu; Mei, Haiping; Zhang, Qianghua
2013-08-01
We study the transmission-time characteristics of laser pulses in a discrete random medium using the Monte Carlo method. First, the optical parameters of the medium are obtained with the OPAC software. A Monte Carlo model is then created and the transport behaviour of a large number of photons is tracked; from these statistics we obtain the average photon arrival time and the average pulse broadening. The results are compared with those computed from the two-frequency mutual coherence function and are found to be very consistent. Finally, the impulse response function of the medium, obtained by polynomial fitting, can be used to correct the inter-symbol interference caused by the discrete random medium in optical communications and thereby reduce the system error rate.
Generation of Random Particle Packings for Discrete Element Models
NASA Astrophysics Data System (ADS)
Abe, S.; Weatherley, D.; Ayton, T.
2012-04-01
An important step in the setup process of Discrete Element Model (DEM) simulations is the generation of a suitable particle packing. There are quite a number of properties such a granular material specimen should ideally have, such as high coordination number, isotropy, the ability to fill arbitrary bounding volumes, and the absence of locked-in stresses. An algorithm which is able to produce specimens fulfilling these requirements is the insertion-based sphere packing algorithm originally proposed by Place and Mora, 2001 [2] and extended in this work. The algorithm works in two stages. First, a number of "seed" spheres are inserted into the bounding volume. In the second stage, the gaps between the "seed" spheres are filled by inserting new spheres such that they have D+1 (i.e. 3 in 2D, 4 in 3D) touching contacts with either other spheres or the boundaries of the enclosing volume. Here we present an implementation of the algorithm and a systematic statistical analysis of the generated sphere packings. The analysis of the particle radius distribution shows that they follow a power-law with an exponent ≈ D (i.e. ≈3 for a 3D packing and ≈2 for 2D). Although the algorithm intrinsically guarantees coordination numbers of at least 4 in 3D and 3 in 2D, the coordination numbers realized in the generated packings can be significantly higher, reaching beyond 50 if the range of particle radii is sufficiently large. Even for relatively small ranges of particle sizes (e.g. Rmin = 0.5Rmax) the maximum coordination number may exceed 10. The degree of isotropy of the generated sphere packing is also analysed in both 2D and 3D, by measuring the distribution of orientations of vectors joining the centres of adjacent particles. If the range of particle sizes is small, the packing algorithm yields moderate anisotropy approaching that expected for a face-centred cubic packing of equal-sized particles. However, once Rmin < 0.3Rmax a very high degree of isotropy is demonstrated in
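The first ("seed") stage of the two-stage algorithm described above amounts to rejection sampling of non-overlapping spheres. A minimal 2D sketch under assumed box dimensions and radius range (the second, gap-filling stage with D+1 contacts is considerably more involved and is omitted here):

```python
import math
import random

def seed_spheres(n_target, box=10.0, r_min=0.3, r_max=1.0,
                 max_tries=10000, rng=random):
    """Stage one of an insertion-based packing: repeatedly propose a disk
    with random centre and radius, and keep it only if it overlaps nothing."""
    placed = []
    tries = 0
    while len(placed) < n_target and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        x = rng.uniform(r, box - r)   # keep the disk fully inside the box
        y = rng.uniform(r, box - r)
        if all(math.hypot(x - px, y - py) >= r + pr for px, py, pr in placed):
            placed.append((x, y, r))
    return placed
```

Every accepted disk is disjoint from all previously placed ones by construction; the gap-filling stage would then insert smaller spheres into the remaining pores until the touching-contact condition is met.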
ERIC Educational Resources Information Center
Safarkhani, Maryam; Moerbeek, Mirjam
2013-01-01
In a randomized controlled trial, a decision needs to be made about the total number of subjects required for adequate statistical power. One way to increase the power of a trial is to include a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power are studied for discrete-time…
NASA Astrophysics Data System (ADS)
Mishra, S.; Schwab, Ch.; Šukys, J.
2016-05-01
We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that allows the input random fields, generated using spectral FFT methods, to be represented efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very large number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks
Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.
2013-07-01
Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.
NASA Astrophysics Data System (ADS)
Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong
2016-07-01
This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity considered, described by statistical means, covers several classes of well-studied nonlinearities as special cases. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The purpose of the addressed state estimation problem is to design an estimator that minimizes the upper bound on the estimation error covariance at each sampling instant. Such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.
Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2011-01-01
The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization, fading away with increasing particle size parameter.
ERIC Educational Resources Information Center
Morgan, Paul L.; Sideridis, Georgios D.
2006-01-01
This study had two purposes. First, we sought to compare the overall effectiveness of different types of fluency interventions for students with learning disabilities (LD). Second, we attempted to identify how individual- and class-level characteristics moderated each intervention's effectiveness. We used multilevel random coefficient modeling to…
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
NASA Astrophysics Data System (ADS)
Cui, Z.-W.; Han, Y.-P.; Li, C.-Y.
2012-05-01
A hybrid finite element-boundary integral-characteristic basis function method (FE-BI-CBFM) is proposed for an efficient simulation of electromagnetic scattering by random discrete particles. Specifically, the finite element method (FEM) is used to obtain the solution of the vector wave equation inside each particle and the boundary integral equation (BIE) using Green's functions is applied on the surfaces of all the particles as a global boundary condition. The coupling system of equations is solved by employing the characteristic basis function method (CBFM) based on the use of macro-basis functions constructed according to the Foldy-Lax multiple scattering equations. Due to the flexibility of FEM, the proposed hybrid technique can easily deal with the problems of multiple scattering by randomly distributed inhomogeneous particles that are often beyond the scope of traditional numerical methods. Some numerical examples are presented to demonstrate the validity and capability of the proposed method.
Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay
Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Nakamori, S.
2008-11-06
This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate that the real observation arrives on time or it is delayed and, hence, the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are ready for use, a filtering algorithm based on linear approximations of the real observations is proposed.
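The delay mechanism described above, where each measurement either arrives on time or is the previous observation repeated, according to an independent Bernoulli variable, can be sketched directly. A minimal simulation (the signal model, observation function, and probability value are hypothetical):

```python
import random

def delayed_observations(signal, p_delay=0.3, noise=0.05, rng=random):
    """Each available measurement is the current noisy observation or,
    with probability p_delay, the previous one (one-step random delay).
    The first observation always arrives on time."""
    obs, prev = [], None
    for x in signal:
        current = x + rng.gauss(0.0, noise)  # observation h(x) = x for simplicity
        if prev is not None and rng.random() < p_delay:
            obs.append(prev)      # delayed: the old measurement arrives instead
        else:
            obs.append(current)   # on time
        prev = current
    return obs
```

With p_delay = 0 this reduces to ordinary noisy observation; with p_delay = 1 every measurement after the first is one step stale, which is exactly the situation the filtering algorithm must compensate for.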
Dynamical Localization for Discrete and Continuous Random Schrödinger Operators
NASA Astrophysics Data System (ADS)
Germinet, F.; De Bièvre, S.
We show, for a large class of random Schrödinger operators H_ω on discrete and on continuous space, that dynamical localization holds, i.e. that, with probability one, for a suitable energy interval I and for q a positive real,
Discrete-time systems with random switches: From systems stability to networks synchronization.
Guo, Yao; Lin, Wei; Ho, Daniel W C
2016-03-01
In this article, we develop some approaches, which enable us to more accurately and analytically identify the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow for the case that the elements in the switching connection matrix even obey some unbounded and continuous-valued distributions. In addition to the almost sure stability, we further investigate the almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that a chaotic dynamics in the synchronization manifold is preserved when statistical parameters enter some almost sure synchronization region established by the developed approach. Moreover, some delicate configurations are considered on probability space for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in the randomly switching complex networks even with very large population sizes, which cannot be easily realized in non-switching but deterministically connected networks. PMID:27036191
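The almost-sure stability notion discussed in this abstract can be illustrated in the simplest scalar setting, x_{k+1} = a_k x_k with i.i.d. random switching gains a_k: trajectories decay almost surely whenever E[log|a_k|] < 0, even if some individual gains are expanding. A minimal sketch (the gain values and switching probability are hypothetical, not from the paper):

```python
import random

def switched_trajectory(n_steps, scales=(0.5, 1.2), p=0.5, x0=1.0, rng=random):
    """Scalar randomly switched system x_{k+1} = a_k * x_k with a_k i.i.d.
    Almost-sure stability holds iff E[log|a_k|] < 0, even though one of
    the individual maps here (|a| = 1.2) is expanding."""
    x = x0
    for _ in range(n_steps):
        a = scales[0] if rng.random() < p else scales[1]
        x *= a
    return x
```

For these values, E[log a] = 0.5·(log 0.5 + log 1.2) ≈ −0.26 < 0, so long trajectories contract to zero almost surely despite the expanding map being chosen half the time.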
ERIC Educational Resources Information Center
Laurenceau, Jean-Philippe; Stanley, Scott M.; Olmos-Gallo, Antonio; Baucom, Brian; Markham, Howard J.
2004-01-01
This study is a cluster randomized controlled trial of the Prevention and Relationship Enhancement Program (PREP; H. J. Markman, S. M. Stanley, & S. L. Blumberg, 2001). Fifty-seven religious organizations (ROs), consisting of 217 newlywed couples, were randomly assigned to 1 of 3 intervention conditions: PREP delivered by university clinicians…
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs to a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or for determining modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
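The t-way coverage notion used in this comparison can be made concrete for t = 2: a suite achieves full pairwise coverage when every pair of values of every pair of parameters appears in at least one test. A small sketch with hypothetical binary parameter domains:

```python
from itertools import combinations, product

def pairs_covered(tests):
    """All (param_i, value_i, param_j, value_j) pairs hit by a test suite."""
    covered = set()
    for test in tests:
        for (i, vi), (j, vj) in combinations(enumerate(test), 2):
            covered.add((i, vi, j, vj))
    return covered

def all_pairs(domains):
    """Every 2-way interaction required for full pairwise coverage."""
    required = set()
    for (i, di), (j, dj) in combinations(enumerate(domains), 2):
        for vi, vj in product(di, dj):
            required.add((i, vi, j, vj))
    return required
```

For three binary parameters, exhaustive testing needs 2³ = 8 tests, while the 4-test suite {(0,0,0), (0,1,1), (1,0,1), (1,1,0)} already covers every 2-way interaction, illustrating why t-way suites can be far smaller than exhaustive or random ones at equal coverage.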
Feng, Zhixin; Jones, Kelvyn; Wang, Wenfei Winnie
2015-01-01
This study undertakes a survival analysis of elderly persons in China using the Chinese Longitudinal Healthy Longevity Survey 2002–2008. Employing discrete-time multilevel models, we explored the effect of social support on the survival of elderly people in China. This study focuses on objective (living arrangements and received support) and subjective (perceived support) dimensions of social support, finding that the effect of different activities of social support on the survival of elderly people varies according to the availability of different support resources. Specifically, living with a spouse, financial independence, and perceiving care support from any resource are associated with higher survival rates for elderly people. Separate analyses focusing on the urban and the rural elderly revealed broadly similar results. The difference between those perceiving care support from family or social services and those not perceiving such support is larger in urban areas than in rural areas. Those who cannot pay medical expenses are the least likely to survive. A higher level of provincial economic development has no significant effect on the survival of elderly people in the whole-sample model or for elderly people in urban areas; however, it has a negative influence on the survival of rural elderly people. PMID:25703671
NASA Astrophysics Data System (ADS)
Vilanova, Guillermo; Colominas, Ignasi; Gomez, Hector
2014-03-01
The growth of new vascular networks from pre-existing capillaries (angiogenesis) plays a pivotal role in tumor development. Mathematical modeling of tumor-induced angiogenesis may help understand the underlying biology of the process and provide new hypotheses for experimentation. Here, we couple an existing deterministic continuum theory with a discrete random walk, proposing a new model that accounts for chemotactic and haptotactic cellular migration. We propose an efficient numerical method to approximate the solution of the model. The accuracy, stability and effectiveness of our algorithms permitted us to perform large-scale three-dimensional simulations which, in contrast to two-dimensional calculations, show a topological complexity similar to that found in experiments. Finally, we use our model and simulations to investigate the role of haptotaxis and chemotaxis in the mobility of tip endothelial cells and its influence in the final vascular patterns.
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for the performance evaluation of over-the-network estimation techniques under realistic radio channel conditions.
Multilevel ensemble Kalman filtering
Hoel, Hakon; Law, Kody J. H.; Tempone, Raul
2016-06-14
This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
Using Multilevel Mixtures to Evaluate Intervention Effects in Group Randomized Trials
ERIC Educational Resources Information Center
Van Horn, M. Lee; Fagan, Abigail A.; Jaki, Thomas; Brown, Eric C.; Hawkins, J. David; Arthur, Michael W.; Abbott, Robert D.; Catalano, Richard F.
2008-01-01
There is evidence to suggest that the effects of behavioral interventions may be limited to specific types of individuals, but methods for evaluating such outcomes have not been fully developed. This study proposes the use of finite mixture models to evaluate whether interventions, and, specifically, group randomized trials, impact participants…
Direct Simulation of Multiple Scattering by Discrete Random Media Illuminated by Gaussian Beams
NASA Technical Reports Server (NTRS)
Mackowski, Daniel W.; Mishchenko, Michael I.
2011-01-01
The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.
Exponential H∞ filtering for discrete-time switched neural networks with random delays.
Mathiyalagan, Kalidass; Su, Hongye; Shi, Peng; Sakthivel, Rathinasamy
2015-04-01
This paper addresses the exponential H∞ filtering problem for a class of discrete-time switched neural networks with random time-varying delays. The delays are assumed to be randomly time-varying and are characterized by introducing a Bernoulli stochastic variable. Effects of both the variation range and the distribution probability of the time delays are considered. The nonlinear activation functions are assumed to satisfy sector conditions. Our aim is to estimate the state by designing a full-order filter such that the filter error system is globally exponentially stable with an expected decay rate and an H∞ performance attenuation level. The filter is designed by using a piecewise Lyapunov-Krasovskii functional together with a linear matrix inequality (LMI) approach and the average dwell time method. First, a set of sufficient LMI conditions is established to guarantee the exponential mean-square stability of the augmented system, and then the parameters of the full-order filter are expressed in terms of solutions to a set of LMI conditions. The proposed LMI conditions can be easily solved using standard software packages. Finally, numerical examples based on practical problems are provided to illustrate the effectiveness of the proposed filter design. PMID:25020225
Poynting-Stokes tensor and radiative transfer in discrete random media: the microphysical paradigm.
Mishchenko, Michael I
2010-09-13
This paper solves the long-standing problem of establishing the fundamental physical link between the radiative transfer theory and macroscopic electromagnetics in the case of elastic scattering by a sparse discrete random medium. The radiative transfer equation (RTE) is derived directly from the macroscopic Maxwell equations by computing theoretically the appropriately defined so-called Poynting-Stokes tensor carrying information on the direction, magnitude, and polarization characteristics of local electromagnetic energy flow. Our derivation from first principles shows that to compute the local Poynting vector averaged over a sufficiently long period of time, one can solve the RTE for the direction-dependent specific intensity column vector and then integrate the direction-weighted specific intensity over all directions. Furthermore, we demonstrate that the specific intensity (or specific intensity column vector) can be measured with a well-collimated radiometer (photopolarimeter), which provides the ultimate physical justification for the use of such instruments in radiation-budget and particle-characterization applications. However, the specific intensity cannot be interpreted in phenomenological terms as signifying the amount of electromagnetic energy transported in a given direction per unit area normal to this direction per unit time per unit solid angle. Also, in the case of a densely packed scattering medium the relation of the measurement with a well-collimated radiometer to the time-averaged local Poynting vector remains uncertain, and the theoretical modeling of this measurement is likely to require a much more complicated approach than solving an RTE. PMID:20940872
Calibration of Discrete Random Walk (DRW) Model via G. I. Taylor's Dispersion Theory
NASA Astrophysics Data System (ADS)
Javaherchi, Teymour; Aliseda, Alberto
2012-11-01
Prediction of particle dispersion in turbulent flows is still an important challenge with many applications to environmental, as well as industrial, fluid mechanics. Several models of dispersion have been developed to predict particle trajectories and their relative velocities, in combination with a RANS-based simulation of the background flow. The interaction of the particles with the velocity fluctuations at different turbulent scales represents a significant difficulty in generalizing the models to the wide range of flows where they are used. We focus our attention on the Discrete Random Walk (DRW) model applied to flow in a channel, particularly to the selection of eddy lifetimes as realizations of a Poisson distribution with a mean value proportional to κ/ε. We present a general method to determine the constant of this proportionality by matching the DRW model dispersion predictions for fluid element and particle dispersion to G. I. Taylor's classical dispersion theory. This model parameter is critical to the magnitude of predicted dispersion. A case study of its influence on sedimentation of suspended particles in a tidal channel with an array of Marine Hydrokinetic (MHK) turbines highlights the dependency of results on this time scale parameter. Support from US DOE through the Northwest National Marine Renewable Energy Center, a UW-OSU partnership.
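The eddy-lifetime selection described above, realizations drawn with a mean proportional to κ/ε, can be sketched with exponentially distributed lifetimes, as for inter-event times of a Poisson process. A minimal sketch (the proportionality constant and turbulence values are hypothetical placeholders, not the calibrated value from the paper):

```python
import random

def eddy_lifetimes(n, k_turb, eps, c_time=0.3, rng=random):
    """Draw n eddy lifetimes, exponentially distributed (Poisson inter-event
    model) with mean c_time * k/eps. c_time stands in for the constant the
    paper calibrates against Taylor dispersion; its value here is illustrative."""
    mean_life = c_time * k_turb / eps
    return [rng.expovariate(1.0 / mean_life) for _ in range(n)]
```

A particle in a DRW model would interact with each sampled eddy for one such lifetime before a new velocity fluctuation is drawn, so the calibrated constant directly sets the predicted dispersion magnitude.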
Thomas, D.L.; Johnson, D.; Griffith, B.
2006-01-01
To model the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
Karimaghaloo, Zahra; Arnold, Douglas L; Arbel, Tal
2016-01-01
Detection and segmentation of large structures in an image or within a region of interest have received great attention in the medical image processing domain. However, the problem of small pathology detection and segmentation still remains an unresolved challenge due to the small size of these pathologies, their low contrast and variable position, shape and texture. In many contexts, early detection of these pathologies is critical in diagnosis and assessing the outcome of treatment. In this paper, we propose a probabilistic Adaptive Multi-level Conditional Random Field (AMCRF) model incorporating higher order cliques for detecting and segmenting such pathologies. In the first level of our graphical model, a voxel-based CRF is used to identify candidate lesions. In the second level, in order to further remove falsely detected regions, a new CRF is developed that incorporates higher order textural features, which are invariant to rotation and local intensity distortions. At this level, higher order textures are considered together with the voxel-wise cliques to refine boundaries, making the model adaptive. The proposed algorithm is tested in the context of detecting enhancing Multiple Sclerosis (MS) lesions in brain MRI, where the problem is further complicated as many of the enhancing voxels are associated with normal structures (i.e. blood vessels) or noise in the MRI. The algorithm is trained and tested on large multi-center clinical trials from Relapsing-Remitting MS patients. The effect of several different parameter learning and inference techniques is further investigated. When tested on 120 cases, the proposed method reaches a lesion detection rate of 90%, with very few false positive lesion counts on average, ranging from 0.17 for very small (3-5 voxels) to 0 for very large (50+ voxels) regions. The proposed model is further tested on a very large clinical trial containing 2770 scans where a high sensitivity of 91% with an average false positive
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2009-08-01
In the one-dimensional optical analog to Anderson localization, a periodically layered medium has one or more parameters randomly disordered. Such a medium can be modeled by an infinite product of 2x2 random transfer matrices with the upper Lyapunov exponent of the matrix product identified as the localization factor (inverse localization length). Furstenberg's integral formula for the Lyapunov exponent requires integration with respect to both the probability measure of the random matrices and the invariant probability measure of the direction of the vector propagated by the random matrix product. This invariant measure is difficult to find analytically, so one of several numerical techniques must be used in its calculation. Here, we focus on one of those techniques, Ulam's method, which sets up a sparse matrix of the probabilities that an entire interval of possible directions will be transferred to some other interval of directions. The left eigenvector of this sparse matrix forms the estimated invariant measure. While Ulam's method is shown to produce results as accurate as others, it suffers from long computation times. The Ulam method, along with other approaches, is demonstrated on a random Fibonacci sequence having a known answer, and on a quarter-wave stack model with discrete disorder in layer thickness.
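The mechanics of Ulam's method described above can be illustrated on a simpler system than the random transfer-matrix product. The sketch below applies it to the logistic map, whose invariant density is known in closed form; the bin and sample counts are illustrative choices, not those of the quarter-wave stack study.

```python
import numpy as np

def ulam_invariant_measure(f, n_bins=200, samples_per_bin=500, iters=2000, seed=0):
    """Ulam's method: partition the state space into intervals, estimate the
    probability that points of interval i are mapped by f into interval j,
    then take the stationary left eigenvector of that transition matrix as
    the estimate of the invariant measure."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        # sample test points in interval i and record where f sends them
        x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
        j = np.clip(np.searchsorted(edges, f(x), side="right") - 1, 0, n_bins - 1)
        P[i] = np.bincount(j, minlength=n_bins) / samples_per_bin
    # left eigenvector via power iteration (each row of P sums to one)
    mu = np.full(n_bins, 1.0 / n_bins)
    for _ in range(iters):
        mu = mu @ P
        mu /= mu.sum()
    return edges, mu

# Logistic map: the exact invariant density 1/(pi*sqrt(x*(1-x))) peaks at 0 and 1
edges, mu = ulam_invariant_measure(lambda x: 4.0 * x * (1.0 - x))
```

The estimated measure concentrates mass near the endpoints, as the exact density predicts; in the localization problem the same construction is applied to the direction angle propagated by the random matrices.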
Vanderbei, Robert J.; Pınar, Mustafa Ç.; Bozkaya, Efe B.
2013-02-15
An American option (or warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete time and discrete state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve complementary slackness conditions in closed form, revealing an optimal stopping strategy which highlights the set of stock prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate), whereas it ceases to be an issue for the put.
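The optimal stopping structure described above can be made concrete numerically. The sketch below is a value-iteration fixed point for a perpetual American put on a truncated geometric-random-walk grid, not the paper's LP-duality derivation; all parameter values (K, u, p, beta) and the reflecting grid boundaries are illustrative assumptions.

```python
import numpy as np

def perpetual_put(K=10.0, u=1.1, p=0.5, beta=0.95, n=200, tol=1e-10):
    """Value iteration for V(s) = max((K - s)^+, beta * E[V(s')]), where the
    stock price moves s -> s*u with probability p and s -> s/u otherwise.
    The grid s = u**k, k = -n..n, is truncated with reflecting boundaries."""
    s = u ** np.arange(-n, n + 1, dtype=float)
    payoff = np.maximum(K - s, 0.0)
    V = payoff.copy()
    while True:
        up = np.roll(V, -1); up[-1] = V[-1]      # neighbour value at s*u
        down = np.roll(V, 1); down[0] = V[0]     # neighbour value at s/u
        V_new = np.maximum(payoff, beta * (p * up + (1.0 - p) * down))
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    exercise = payoff >= V_new - 1e-8  # states where stopping is optimal
    return s, V_new, exercise

s, V, exercise = perpetual_put()
```

The `exercise` mask recovers the critical-price structure discussed in the abstract: immediate exercise is optimal for sufficiently low prices, and continuation is optimal above the critical level.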
Lee, Myoung-Jae; Ahn, Seung-Eon; Lee, Chang Bum; Kim, Chang-Jung; Jeon, Sanghun; Chung, U-In; Yoo, In-Kyeong; Park, Gyeong-Su; Han, Seungwu; Hwang, In Rok; Park, Bae-Ho
2011-11-01
Present charge-based silicon memories are unlikely to reach terabit densities because of scaling limits. As the feature size of memory shrinks to just tens of nanometers, there is insufficient volume available to store charge. Also, process temperatures higher than 800 °C make silicon incompatible with three-dimensional (3D) stacking structures. Here we present a device unit consisting of all NiO storage and switch elements for multilevel terabit nonvolatile random access memory using resistance switching. It is demonstrated that NiO films are scalable to around 30 nm and compatible with multilevel cell technology. The device unit can be a building block for 3D stacking structure because of its simple structure and constituent, high performance, and process temperature lower than 300 °C. Memory resistance switching of NiO storage element is accompanied by an increase in density of grain boundary while threshold resistance switching of NiO switch element is controlled by current flowing through NiO film. PMID:21988144
NASA Astrophysics Data System (ADS)
di Labbio, Giuseppe; Kiyanda, Charles Basenga; Mi, Xiaocheng; Higgins, Andrew Jason; Nikiforakis, Nikolaos; Ng, Hoi Dick
2015-11-01
For a homogeneous reactive medium such as a combustible gaseous mixture, the detonation wave is nearly always observed to propagate at a velocity predicted by the Chapman-Jouguet (CJ) condition. Although the CJ condition was originally formulated for a wave propagating in homogeneous media at constant velocity, it has been posited that this condition may also determine the average detonation velocity in heterogeneous media. This work aims to test the applicability of the CJ condition to heterogeneous media on the one-dimensional reactive Burgers' equation, a tractable analog to the reactive Euler equations, with the reaction governed by an Arrhenius rate law. In this study, heterogeneity is modeled using discrete energy sources, of random energy content, randomly distributed throughout space such that the total energy release is equivalent to that of a homogeneous medium with constant energy density. The equations are solved using a second-order finite volume approach with an exact Riemann solver. The evolution of the discrete detonation is tracked over a long duration and its average propagation velocity is computed. In all cases, the average detonation velocity was found to be in agreement with the velocity predicted by the CJ condition for the equivalent homogeneous system.
NASA Astrophysics Data System (ADS)
Serinaldi, F.
2010-12-01
Discrete multiplicative random cascade (MRC) models were extensively studied and applied to disaggregate rainfall data, thanks to their formal simplicity and the small number of involved parameters. Focusing on temporal disaggregation, the rationale of these models is based on multiplying the value assumed by a physical attribute (e.g., rainfall intensity) at a given time scale L, by a suitable number b of random weights, to obtain b attribute values corresponding to statistically plausible observations at a smaller L/b time resolution. In the original formulation of the MRC models, the random weights were assumed to be independent and identically distributed. However, for several studies this hypothesis did not appear to be realistic for the observed rainfall series as the distribution of the weights was shown to depend on the space-time scale and rainfall intensity. Since these findings contrast with the scale invariance assumption behind the MRC models and affect their applicability, it is worth studying their nature. This study explores the possible presence of dependence of the parameters of two discrete MRC models on rainfall intensity and time scale, by analyzing point rainfall series with 5-min time resolution. Taking into account a discrete microcanonical (MC) model based on the beta distribution and a discrete canonical beta-logstable (BLS) model, the analysis points out that the relations between the parameters and rainfall intensity across the time scales are detectable and can be modeled by a set of simple functions accounting for the parameter-rainfall intensity relationship, and another set describing the link between the parameters and the time scale. Therefore, MC and BLS models were modified to explicitly account for these relationships and compared with the continuous in scale universal multifractal (CUM) model, which is used as a physically based benchmark model. Monte Carlo simulations point out that the dependence of MC and BLS
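For reference, the basic branching step of a discrete microcanonical cascade with b = 2 can be sketched as follows. The symmetric Beta(a, a) weight distribution and all parameter values are illustrative, and the scale/intensity dependence of the parameters studied above is deliberately not modelled here.

```python
import numpy as np

def mrc_disaggregate(total, levels, a=2.0, rng=None):
    """Discrete microcanonical multiplicative random cascade with branching
    number b = 2: at each level, split every amount x into (W*x, (1-W)*x)
    with W ~ Beta(a, a), so mass is conserved exactly within each branch
    (the microcanonical property)."""
    rng = np.random.default_rng() if rng is None else rng
    values = np.array([total], dtype=float)
    for _ in range(levels):
        w = rng.beta(a, a, size=values.size)
        # interleave the two children of each parent interval
        values = np.column_stack((w * values, (1.0 - w) * values)).ravel()
    return values  # 2**levels values at the finer time resolution

# disaggregate a 100 mm coarse-scale total down five dyadic levels
fine = mrc_disaggregate(100.0, levels=5)
```

A canonical cascade would instead draw the two weights independently, conserving mass only on average; the microcanonical variant shown here conserves it exactly at every branch.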
Random Graphs Associated to Some Discrete and Continuous Time Preferential Attachment Models
NASA Astrophysics Data System (ADS)
Pachon, Angelica; Polito, Federico; Sacerdote, Laura
2016-03-01
We give a common description of Simon, Barabási-Albert, II-PA and Price growth models, by introducing suitable random graph processes with preferential attachment mechanisms. Through the II-PA model, we prove the conditions for which the asymptotic degree distribution of the Barabási-Albert model coincides with the asymptotic in-degree distribution of the Simon model. Furthermore, we show that when the number of vertices in the Simon model (with parameter α) goes to infinity, a portion of them behave as a Yule model with parameters (λ, β) = (1-α, 1), and through this relation we explain why the asymptotic properties of a random vertex in the Simon model coincide with those of a random genus in the Yule model. As a by-product of our analysis, we derive an explicit expression for the in-degree distribution of the II-PA model, given without proof in Newman (Contemp Phys 46:323-351, 2005). References to traditional and recent applications of these models are also discussed.
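As context for the growth models compared above, the basic preferential attachment mechanism can be sketched in a few lines. This is a minimal Barabási-Albert simulator, not any of the paper's constructions; the star seed graph and the parameter values are illustrative assumptions.

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a Barabási-Albert graph on n vertices: each new vertex attaches
    m edges to existing vertices chosen with probability proportional to
    their degree. The 'repeated' list holds every vertex once per incident
    edge, so a uniform draw from it is exactly degree-proportional sampling."""
    rng = random.Random(seed)
    edges = [(i, m) for i in range(m)]        # seed: a star on vertices 0..m
    repeated = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))  # preferential attachment draw
        for t in targets:
            edges.append((new, t))
            repeated.extend((new, t))
    return edges

edges = barabasi_albert(100, m=2)
```

Early vertices accumulate high degree under this rule, producing the heavy-tailed degree distribution whose asymptotics the paper relates to the Simon and Yule models.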
Depinning of a discrete elastic string from a two-dimensional random array of weak pinning points
Proville, Laurent
2010-04-15
The present work is essentially concerned with the development of statistical theory for the low temperature dislocation glide in concentrated solid solutions where atom-sized obstacles impede plastic flow. In connection with such a problem, we compute analytically the external force required to drag an elastic string along a discrete two-dimensional square lattice, where some obstacles have been randomly distributed. Numerical simulations demonstrate remarkable agreement with the theory for obstacle densities ranging from 1% to 50% and for lattices with different aspect ratios. The theory proves accurate on the condition that the obstacle-chain interaction remains sufficiently weak compared to the string stiffness.
NASA Technical Reports Server (NTRS)
Zhu, P. Y.
1991-01-01
The effective-medium approximation is applied to investigate scattering from a half-space of randomly and densely distributed discrete scatterers. Starting from the vector wave equations, the bistatic and backscattering are calculated using an approximation called the effective-medium Born approximation, a particular treatment of the Green's functions, and special coordinates whose origin is set at the field point. A closed-form analytic solution for the backscattering is obtained, and it shows a depolarization effect. The theoretical results are in good agreement with experimental measurements in the cases of snow, multi- and first-year sea-ice. The root product ratio of polarization to depolarization in backscattering is equal to 8; this result constitutes a law about polarized scattering phenomena in nature.
Liu, Yajuan; Park, Ju H; Guo, Bao-Zhu
2016-07-01
In this paper, the problem of H∞ filtering for a class of nonlinear discrete-time delay systems is investigated. The time delay is assumed to belong to a given interval, and the designed filter includes additive gain variations which are assumed to be random and to follow a Bernoulli distribution. By the augmented Lyapunov functional approach, a sufficient condition is developed to ensure that the filtering error system is asymptotically mean-square stable with a prescribed H∞ performance. In addition, an improved result of H∞ filtering for linear systems is also derived. The filter parameters are obtained by solving a set of linear matrix inequalities. For nonlinear systems, the applicability of the developed filtering result is confirmed by a longitudinal flight system, and an additional example for a linear system is presented to demonstrate the reduced conservatism of the proposed design method. PMID:27157851
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
Reliable H∞ control of discrete-time systems against random intermittent faults
NASA Astrophysics Data System (ADS)
Tao, Yuan; Shen, Dong; Fang, Mengqi; Wang, Youqing
2016-07-01
A passive fault-tolerant control strategy is proposed for systems subject to a novel kind of intermittent fault, which is described by a Bernoulli distributed random variable. Three cases of fault location are considered, namely, sensor fault, actuator fault, and both sensor and actuator faults. The dynamic feedback controllers are designed not only to stabilise the fault-free system, but also to guarantee an acceptable performance of the faulty system. The robust H∞ performance index is used to evaluate the effectiveness of the proposed control scheme. In terms of linear matrix inequality, the sufficient conditions of the existence of controllers are given. An illustrative example indicates the effectiveness of the proposed fault-tolerant control method.
Reverse engineering discrete dynamical systems from data sets with random input vectors.
Just, Winfried
2006-10-01
Recently a new algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is based on methods from computational algebra and finds most parsimonious models for a given data set. We derive mathematically rigorous estimates for the expected amount of data needed by this algorithm to find the correct model. In particular, we demonstrate that for one type of input parameter (graded term orders), the expected data requirements scale polynomially with the number n of chemicals in the network, while for another type of input parameter (randomly chosen lex orders) this number scales exponentially in n. We also show that, for a modification of the algorithm, the expected data requirements scale as the logarithm of n. PMID:17061920
Carlson, Mary; Brennan, Robert T; Earls, Felton
2012-09-01
The potential capacity of children to confront the HIV/AIDS pandemic is rarely considered. Interventions to address the impact of the pandemic on children and adolescents commonly target only their vulnerabilities. We evaluated the Young Citizens Program, an adolescent-centered health promotion curriculum designed to increase self- and collective efficacy through public education and community mobilization across a municipality in the Kilimanjaro Region of Tanzania. The theoretical framework for the program integrates aspects of human capability, communicative action, social ecology and social cognition. The design consists of a cluster randomized-controlled trial (CRCT). Fifteen pairs of matched geopolitically defined neighborhoods of roughly 2000-4000 residents were randomly allocated to treatment and control arms. Within each neighborhood cluster, 24 randomly selected adolescents, ages 9-14, deliberated on topics of social ecology, citizenship, community health and HIV/AIDS competence. Building on their acquired understanding and confidence, they dramatized the scientific basis and social context of HIV infection, testing and treatment in their communities over a 28-week period. The curriculum comprised 5 modules: Group Formation, Understanding our Community, Health and our Community, Making Assessments and Taking Action in our Community and Inter-Acting in our Community. Adolescent participants and adult residents representative of their neighborhoods were surveyed before and after the intervention; data were analyzed using multilevel modeling. In treatment neighborhoods, adolescents increased their deliberative and communicative efficacy and adults showed higher collective efficacy for children. Following the CRCT assessments, the control group received the same curriculum. In the Kilimanjaro Region, the Young Citizens Program is becoming recognized as a structural, health promotion approach through which adolescent self-efficacy and child collective efficacy
Liu, Jung-Tzu; Tsou, Hsiao-Hui; Gordon Lan, K K; Chen, Chi-Tian; Lai, Yi-Hsuan; Chang, Wan-Jung; Tzeng, Chyng-Shyan; Hsiao, Chin-Fu
2016-06-30
In recent years, developing pharmaceutical products via multiregional clinical trials (MRCTs) has become standard. Traditionally, an MRCT would assume that a treatment effect is uniform across regions. However, heterogeneity among regions may affect the evaluation of a medicine's effect. In this study, we consider a random effects model using discrete distribution (DREM) to account for heterogeneous treatment effects across regions for the design and evaluation of MRCTs. We derive a power function for a treatment that is beneficial under DREM and illustrate determination of the overall sample size in an MRCT. We use the concept of consistency based on Method 2 of the Japanese Ministry of Health, Labour, and Welfare's guidance to evaluate the probability for treatment benefit and consistency under DREM. We further derive an optimal sample size allocation over regions to maximize the power for consistency. Moreover, we provide three algorithms for deriving sample size at the desired level of power for benefit and consistency. In practice, regional treatment effects are unknown. Thus, we provide some guidelines on the design of MRCTs with consistency when the regional treatment effects are assumed to fall into a specified interval. Numerical examples are given to illustrate applications of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26833851
NASA Astrophysics Data System (ADS)
Liu, Yisha; Wang, Zidong; Wang, Wei
2011-05-01
This article is concerned with the reliable H∞ output feedback control problem against actuator failures for a class of uncertain discrete time-delay systems with randomly occurred nonlinearities (RONs). The failures of actuators are quantified by a variable varying in a given interval. RONs are introduced to model a class of sector-like nonlinearities that occur in a probabilistic way according to a Bernoulli distributed white sequence with a known conditional probability. The time-varying delay is unknown with the given lower and upper bounds. Attention is focused on the analysis and design of an output feedback controller such that, for all possible actuator failures, RONs, time-delays as well as admissible parameter uncertainties, the closed-loop system is exponentially mean-square stable and also achieves a prescribed H∞ performance level. A linear matrix inequality approach is developed to solve the addressed problem. A numerical example is given to demonstrate the effectiveness of the proposed design approach.
Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark
2012-07-01
Under-age drinking is an enormous public health issue in the USA. Evidence that community level structures may impact on under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than that of others residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure. PMID:24347839
Raeder, Sabine; Kraft, Pål; Bjørkli, Cato Alexander
2013-01-01
Background Stress is commonly experienced by many people and it is a contributing factor to many mental and physical health conditions. However, few efforts have been made to develop and test the effects of interventions for stress. Objective The aim of this study was to examine the effects of a Web-based stress-reduction intervention on stress, investigate mindfulness and procrastination as potential mediators of any treatment effects, and test whether the intervention is equally effective for females and males, for all ages, and for all levels of education. Methods We employed a randomized controlled trial in this study. Participants were recruited online via Facebook and randomly assigned to either the stress intervention or a control condition. The Web-based stress intervention was fully automated and consisted of 13 sessions over 1 month. The controls were informed that they would get access to the intervention after the final data collection. Data were collected at baseline and at 1, 2, and 6 months after intervention onset by means of online questionnaires. Outcomes were stress, mindfulness, and procrastination, which were all measured at every measurement occasion. Results A total of 259 participants were included and were allocated to either the stress intervention (n=126) or the control condition (n=133). Participants in the intervention and control group were comparable at baseline; however, results revealed that participants in the stress intervention followed a statistically different (ie, cubic) developmental trajectory in stress levels over time compared to the controls. A growth curve analysis showed that participants in the stress intervention (unstandardized beta coefficient [B]=–3.45, P=.008) recovered more quickly compared to the control group (B=–0.81, P=.34) from baseline to 1 month. Although participants in the stress intervention did show increases in stress levels during the study period (B=2.23, P=.008), long-term stress levels did decrease
A multilevel stochastic collocation method for SPDEs
Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton
2015-03-10
We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
NASA Technical Reports Server (NTRS)
Ippolito, L. J., Jr.
1977-01-01
The multiple scattering effects on wave propagation through a volume of discrete scatterers were investigated. The mean field and intensity for a distribution of scatterers was developed using a discrete random media formulation, and second order series expansions for the mean field and total intensity derived for one-dimensional and three-dimensional configurations. The volume distribution results were shown to proceed directly from the one-dimensional results. The multiple scattering intensity expansion was compared to the classical single scattering intensity and the classical result was found to represent only the first three terms in the total intensity expansion. The Foldy approximation to the mean field was applied to develop the coherent intensity, and was found to exactly represent all coherent terms of the total intensity.
Parallel multilevel preconditioners
Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.
1989-01-01
In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
Müller, Eike H.; Scheichl, Rob; Shardlow, Tony
2015-01-01
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.
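The substitution of discrete random variables for Gaussian increments can be illustrated on a scalar SDE. The sketch below is a weak Euler scheme for geometric Brownian motion (illustrative parameters, not the paper's Langevin integrators): each Gaussian increment is replaced by a Rademacher variable ±√Δt, which matches the first two moments of the Brownian increment and so preserves weak order one.

```python
import numpy as np

def weak_euler_gbm(mu, sigma, x0, T, n_steps, n_paths, rng):
    """Weak Euler scheme for dX = mu*X dt + sigma*X dW, with the Gaussian
    increment replaced by a discrete +/- sqrt(dt) random variable. The
    replacement matches the mean and variance of dW, which suffices for
    weak (distributional) convergence of order one."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.choice([-1.0, 1.0], size=n_paths) * np.sqrt(dt)
        x = x + mu * x * dt + sigma * x * dw
    return x

rng = np.random.default_rng(1)
x = weak_euler_gbm(0.05, 0.2, 1.0, 1.0, 64, 200_000, rng)
# for GBM the exact mean is E[X_T] = x0 * exp(mu*T)
```

Drawing a sign bit is far cheaper than sampling a Gaussian, which is the source of the efficiency gains reported for small-noise problems.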
Robinson, Thomas N.; Matheson, Donna; Desai, Manisha; Wilson, Darrell M.; Weintraub, Dana L.; Haskell, William L.; McClain, Arianna; McClure, Samuel; Banda, Jorge; Sanders, Lee M.; Haydel, K. Farish; Killen, Joel D.
2013-01-01
Objective To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Design Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Participants Seven- through eleven-year-old overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Interventions Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Main Outcome Measure Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. Conclusions The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families
An adaptive multi-level simulation algorithm for stochastic biological systems
NASA Astrophysics Data System (ADS)
Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.
2015-01-01
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more computationally efficient, these algorithms generate system statistics that suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
NASA Astrophysics Data System (ADS)
Barnard, J. M.; Augarde, C. E.
2012-12-01
The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle tracking based models than in continuum based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive as the reaction simulations require tens of thousands of nearest neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation on the colocation-probability-function-based methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs. The architecture of GPUs is single instruction, multiple data (SIMD). This means that only one operation can be performed at any one time but can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
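The discrete-time random walk at the core of such particle tracking models can be sketched as follows. This is a minimal 1-D Python illustration; the velocity v, dispersion coefficient D, and time step dt are hypothetical, and no reaction simulation or GPU code is included:

```python
import random

def random_walk_step(xs, v, D, dt, rng):
    """One discrete-time step: deterministic advection v*dt plus a Gaussian
    diffusion jump with standard deviation sqrt(2*D*dt)."""
    sigma = (2.0 * D * dt) ** 0.5
    return [x + v * dt + rng.gauss(0.0, sigma) for x in xs]

rng = random.Random(1)
xs = [0.0] * 1000            # all particles start at the origin
for _ in range(100):         # advance to t = 100 * dt = 1.0
    xs = random_walk_step(xs, v=1.0, D=0.5, dt=0.01, rng=rng)
mean = sum(xs) / len(xs)     # plume centre should drift to ~ v * t
```

Because each particle's step is independent of every other particle's, the list comprehension above is exactly the kind of long loop of independent operations that maps well onto SIMD hardware.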
Multilevel and Diverse Classrooms
ERIC Educational Resources Information Center
Baurain, Bradley, Ed.; Ha, Phan Le, Ed.
2010-01-01
The benefits and advantages of classroom practices incorporating unity-in-diversity and diversity-in-unity are what "Multilevel and Diverse Classrooms" is all about. Multilevel classrooms--also known as mixed-ability or heterogeneous classrooms--are a fact of life in ESOL programs around the world. These classrooms are often not only multilevel…
Multilevel Mixture Factor Models
ERIC Educational Resources Information Center
Varriale, Roberta; Vermunt, Jeroen K.
2012-01-01
Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…
Multilevel modeling in psychosomatic medicine research.
Myers, Nicholas D; Brincks, Ahnalee M; Ames, Allison J; Prado, Guillermo J; Penedo, Frank J; Benedict, Catherine
2012-01-01
The primary purpose of this study is to provide an overview of multilevel modeling for Psychosomatic Medicine readers and contributors. The article begins with a general introduction to multilevel modeling. Multilevel regression modeling at two levels is emphasized because of its prevalence in psychosomatic medicine research. Simulated data sets based on some core ideas from the Familias Unidas effectiveness study are used to illustrate key concepts including communication of model specification, parameter interpretation, sample size and power, and missing data. Input and key output files from Mplus and SAS are provided. A cluster randomized trial with repeated measures (i.e., three-level regression model) is then briefly presented with simulated data based on some core ideas from a cognitive-behavioral stress management intervention in prostate cancer. PMID:23107843
Mediation from Multilevel to Structural Equation Modeling
MacKinnon, David P.; Valente, Matthew J.
2016-01-01
Background/Aims The purpose of this article is to outline multilevel structural equation modeling (MSEM) for mediation analysis of longitudinal data. The introduction of mediating variables can improve experimental and nonexperimental studies of child growth in several ways as discussed throughout this article. Single-mediator individual-level and multilevel mediation models illustrate several current issues in the estimation of mediation with longitudinal data. The strengths of incorporating structural equation modeling (SEM) with multilevel mediation modeling are described. Summary and Key Messages Longitudinal mediation models are pervasive in many areas of research including child growth. Longitudinal mediation models are ideally modeled as repeated measurements clustered within individuals. Further, the combination of MSEM and SEM provides an ideal approach for several reasons, including the ability to assess effects at different levels of analysis, incorporation of measurement error and possible random effects that vary across individuals. PMID:25413658
Multilevel Modeling in Psychosomatic Medicine Research
Myers, Nicholas D.; Brincks, Ahnalee M.; Ames, Allison J.; Prado, Guillermo J.; Penedo, Frank J.; Benedict, Catherine
2012-01-01
The primary purpose of this manuscript is to provide an overview of multilevel modeling for Psychosomatic Medicine readers and contributors. The manuscript begins with a general introduction to multilevel modeling. Multilevel regression modeling at two-levels is emphasized because of its prevalence in psychosomatic medicine research. Simulated datasets based on some core ideas from the Familias Unidas effectiveness study are used to illustrate key concepts including: communication of model specification, parameter interpretation, sample size and power, and missing data. Input and key output files from Mplus and SAS are provided. A cluster randomized trial with repeated measures (i.e., three-level regression model) is then briefly presented with simulated data based on some core ideas from a cognitive behavioral stress management intervention in prostate cancer. PMID:23107843
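The two-level random-intercept structure emphasized above can be illustrated with simulated clustered data. This is a minimal Python sketch; simulate_two_level and the moment-based intraclass correlation (ICC) estimator are illustrative stand-ins for the Mplus/SAS analyses the article describes:

```python
import random

def simulate_two_level(n_clusters, n_per, gamma=0.0, tau=1.0, sigma=1.0, seed=0):
    """Simulate y_ij = gamma + u_j + e_ij with u_j ~ N(0, tau^2) at level 2
    and e_ij ~ N(0, sigma^2) at level 1."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_clusters):
        u = rng.gauss(0.0, tau)
        data.append([gamma + u + rng.gauss(0.0, sigma) for _ in range(n_per)])
    return data

def icc(data):
    """Crude moment-based ICC: between-cluster / (between + within) variance."""
    cluster_means = [sum(c) / len(c) for c in data]
    grand = sum(cluster_means) / len(cluster_means)
    n_per = len(data[0])
    between = sum((m - grand) ** 2 for m in cluster_means) / (len(data) - 1)
    within = sum((y - m) ** 2 for c, m in zip(data, cluster_means) for y in c) \
             / (len(data) * (n_per - 1))
    between_adj = max(between - within / n_per, 0.0)  # correct for sampling noise
    return between_adj / (between_adj + within)
```

With tau = sigma = 1 the true ICC is 0.5; a nonzero ICC is precisely what makes single-level regression inappropriate for such data.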
An adaptive multi-level simulation algorithm for stochastic biological systems
Lester, C.; Giles, M. B.; Baker, R. E.; Yates, C. A.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
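The two simulation regimes the abstract contrasts can be sketched for a hypothetical single-channel decay reaction X → ∅. This is a minimal Python illustration; the rate constant k, initial copy number x0, and step size tau are illustrative, not taken from the paper:

```python
import math
import random

def gillespie_decay(x0, k, t_end, seed=0):
    """Exact SSA for one decay channel with propensity k*x:
    simulates every single reaction event."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(k * x)   # exponential waiting time to next firing
        if t > t_end:
            break
        x -= 1
    return x

def tau_leap_decay(x0, k, t_end, tau, seed=0):
    """Approximate tau-leap: fire a Poisson(k*x*tau) batch of reactions per step."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end and x > 0:
        lam = k * x * tau
        # Poisson draw by inversion of the CDF
        n, p, u = 0, math.exp(-lam), rng.random()
        c = p
        while u > c:
            n += 1
            p *= lam / n
            c += p
        x = max(x - n, 0)
        t += tau
    return x
```

The bias of the tau-leap paths shrinks as tau decreases, at the price of more steps per path; the multi-level method pairs paths simulated at two such step sizes to estimate each correction term cheaply.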
Multilevel filtering elliptic preconditioners
NASA Technical Reports Server (NTRS)
Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles
1989-01-01
A class of preconditioners is presented for elliptic problems built on ideas borrowed from digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method, and a recent method proposed by Bramble, Pasciak, and Xu.
Multilevel resistive information storage and retrieval
Lohn, Andrew; Mickel, Patrick R.
2016-08-09
The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.
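As an illustration of the multilevel storage idea, here is a hedged sketch of how two bits might map onto four resistance states. The level values and the log-scale nearest-level decoding rule are hypothetical, not taken from the patent:

```python
import math

# four hypothetical resistance levels (kilo-ohms) encoding 2 bits per cell
LEVELS = {0b00: 1.0, 0b01: 5.0, 0b10: 20.0, 0b11: 100.0}

def write_bits(bits):
    """'Write' a 2-bit symbol by programming the cell to its target resistance."""
    return LEVELS[bits]

def read_bits(resistance):
    """Decode a measured resistance to the nearest level on a log scale,
    since the levels are spaced roughly geometrically."""
    return min(LEVELS,
               key=lambda b: abs(math.log(resistance) - math.log(LEVELS[b])))
```

Doubling the number of distinguishable levels adds one bit per cell, which is the information-density gain the abstract refers to.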
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
Methods for testing theory and evaluating impact in randomized field trials
Brown, C. Hendricks; Wang, Wei; Kellam, Sheppard G.; Muthén, Bengt O.; Petras, Hanno; Toyinbo, Peter; Poduska, Jeanne; Ialongo, Nicholas; Wyman, Peter A.; Chamberlain, Patricia; Sloboda, Zili; MacKinnon, David P.; Windham, Amy
2008-01-01
Randomized field trials provide unique opportunities to examine the effectiveness of an intervention in real world settings and to test and extend both theory of etiology and theory of intervention. These trials are designed not only to test for overall intervention impact but also to examine how impact varies as a function of individual level characteristics, context, and across time. Examination of such variation in impact requires analytical methods that take into account the trial’s multiple nested structure and the evolving changes in outcomes over time. The models that we describe here merge multilevel modeling with growth modeling, allowing for variation in impact to be represented through discrete mixtures—growth mixture models—and nonparametric smooth functions—generalized additive mixed models. These methods are part of an emerging class of multilevel growth mixture models, and we illustrate these with models that examine overall impact and variation in impact. In this paper, we define intent-to-treat analyses in group-randomized multilevel field trials and discuss appropriate ways to identify, examine, and test for variation in impact without inflating the Type I error rate. We describe how to make causal inferences more robust to misspecification of covariates in such analyses and how to summarize and present these interactive intervention effects clearly. Practical strategies for reducing model complexity, checking model fit, and handling missing data are discussed using six randomized field trials to show how these methods may be used across trials randomized at different levels. PMID:18215473
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Bobrowski, Adam; Kimmel, Marek
2004-12-01
This is a continuation of the series of articles (C.R. Rao, D.N. Shanbhag (Eds.), Handbook of Statistics 19: Stochastic Processes: Theory and Methods, Elsevier Science, Amsterdam, 2001 (Chapter 8); Math. Biosci. 175 (2002) 83; Math. Meth. Appl. Sci. 26 (2003) 1587; Adv. Appl. Probab. 36 (2004) 57) devoted to a study of the interplay between two of the main forces of population genetics, mutations and drift, in the Fisher-Wright model. We provide discrete-time versions of theorems describing asymptotic behavior of joint distributions of characteristics of a pair of individuals in this model; their continuous-time counterparts were presented in the previous papers. Furthermore, we show that imbalance index, introduced in Kimmel et al. (Genetics 148 (1998) 1921) and King et al. (Mol. Biol. Evol. 17(12) (2000) 1895) in the context of continuous-time models, may also be used in discrete-time models to detect past population growth. PMID:15560913
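The discrete-time Fisher-Wright dynamics studied in this series can be sketched as follows. This is a minimal Python illustration with symmetric mutation; the population size, mutation rate, and number of generations are illustrative, not taken from the papers:

```python
import random

def wright_fisher(n, p0, mu, generations, seed=0):
    """Discrete-time Fisher-Wright model for one biallelic locus: each
    generation, 2n gene copies are binomially resampled from the current
    allele frequency (drift), then mutated symmetrically at rate mu."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        k = sum(1 for _ in range(2 * n) if rng.random() < p)  # binomial draw
        p = k / (2 * n)
        p = p * (1 - mu) + (1 - p) * mu   # symmetric mutation A <-> a
    return p
```

With mu = 0 the frequency performs pure drift and the absorbing states 0 and 1 are never left, which is the boundary behaviour the asymptotic theorems describe.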
NASA Astrophysics Data System (ADS)
Vivaldi, Franco
2015-12-01
The concept of resonance has been instrumental to the study of Hamiltonian systems with divided phase space. One can also define such systems over discrete spaces, which have a finite or countable number of points, but in this new setting the notion of resonance must be re-considered from scratch. I review some recent developments in the area of arithmetic dynamics which outline some salient features of linear and nonlinear stable (elliptic) orbits over a discrete space, and also underline the difficulties that emerge in their analysis.
Multilevel Interventions: Measurement and Measures
Charns, Martin P.; Alligood, Elaine C.; Benzer, Justin K.; Burgess, James F.; Mcintosh, Nathalie M.; Burness, Allison; Partin, Melissa R.; Clauser, Steven B.
2012-01-01
Background Multilevel intervention research holds the promise of more accurately representing real-life situations and, thus, with proper research design and measurement approaches, facilitating effective and efficient resolution of health-care system challenges. However, taking a multilevel approach to cancer care interventions creates both measurement challenges and opportunities. Methods One-thousand seventy two cancer care articles from 2005 to 2010 were reviewed to examine the state of measurement in the multilevel intervention cancer care literature. Ultimately, 234 multilevel articles, 40 involving cancer care interventions, were identified. Additionally, literature from health services, social psychology, and organizational behavior was reviewed to identify measures that might be useful in multilevel intervention research. Results The vast majority of measures used in multilevel cancer intervention studies were individual level measures. Group-, organization-, and community-level measures were rarely used. Discussion of the independence, validity, and reliability of measures was scant. Discussion Measurement issues may be especially complex when conducting multilevel intervention research. Measurement considerations that are associated with multilevel intervention research include those related to independence, reliability, validity, sample size, and power. Furthermore, multilevel intervention research requires identification of key constructs and measures by level and consideration of interactions within and across levels. Thus, multilevel intervention research benefits from thoughtful theory-driven planning and design, an interdisciplinary approach, and mixed methods measurement and analysis. PMID:22623598
Recent developments in multilevel optimization
NASA Technical Reports Server (NTRS)
Vanderplaats, Garret N.; Kim, D.-S.
1989-01-01
Recent developments in multilevel optimization are briefly reviewed. The general nature of the multilevel design task, the use of approximations to develop and solve the analysis design task, the structure of the formal multidiscipline optimization problem, a simple cantilevered beam which demonstrates the concepts of multilevel design and the basic mathematical details of the optimization task and the system level are among the topics discussed.
Cross-Classification Multilevel Logistic Models in Psychometrics
ERIC Educational Resources Information Center
Van den Noortgate, Wim; De Boeck, Paul; Meulders, Michel
2003-01-01
In IRT models, responses are explained on the basis of person and item effects. Person effects are usually defined as a random sample from a population distribution. Regular IRT models therefore can be formulated as multilevel models, including a within-person part and a between-person part. In a similar way, the effects of the items can be…
Multilevel Methods for the Poisson-Boltzmann Equation
NASA Astrophysics Data System (ADS)
Holst, Michael Jay
We consider the numerical solution of the Poisson-Boltzmann equation (PBE), a three-dimensional second order nonlinear elliptic partial differential equation arising in biophysics. This problem has several interesting features impacting numerical algorithms, including discontinuous coefficients representing material interfaces, rapid nonlinearities, and three spatial dimensions. Similar equations occur in various applications, including nuclear physics, semiconductor physics, population genetics, astrophysics, and combustion. In this thesis, we study the PBE and its discretizations, and develop multilevel-based methods for approximating the solutions of these types of equations. We first outline the physical model and derive the PBE, which describes the electrostatic potential of a large complex biomolecule lying in a solvent. We next study the theoretical properties of the linearized and nonlinear PBE using standard function space methods; since this equation has not been previously studied theoretically, we provide existence and uniqueness proofs in both the linearized and nonlinear cases. We also analyze box-method discretizations of the PBE, establishing several properties of the discrete equations which are produced. In particular, we show that the discrete nonlinear problem is well-posed. We study and develop linear multilevel methods for interface problems, based on algebraic enforcement of Galerkin or variational conditions, and on coefficient averaging procedures. Using a stencil calculus, we show that in certain simplified cases the two approaches are equivalent, with different averaging procedures corresponding to different prolongation operators. We also develop methods for nonlinear problems based on a nonlinear multilevel method, and on linear multilevel methods combined with a globally convergent damped-inexact-Newton method. We derive a necessary and sufficient descent condition for the inexact-Newton direction, enabling the development of extremely…
Multilevel structural equation models for assessing moderation within and across levels of analysis.
Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J
2016-06-01
Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data with Mplus syntax in the Appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record) PMID:26651982
Muir, William M; Bijma, P; Schinckel, A
2013-01-01
An experiment was conducted comparing multilevel selection in Japanese quail for 43-day weight and survival with birds housed in either kin (K) or random (R) groups. Multilevel selection significantly reduced mortality (6.6% K vs. 8.5% R) and increased weight (1.30 g/MG K vs. 0.13 g/MG R), resulting in a response an order of magnitude greater with kin than random groups. Thus, multilevel selection was effective in reducing detrimental social interactions, which contributed to improved weight gain. The observed rates of response did not differ significantly from expected, demonstrating that current theory is adequate to explain multilevel selection response. Based on estimated genetic parameters, group selection would always be superior to any other combination of multilevel selection. Further, near-optimal results could be attained using multilevel selection if 20% of the weight was on the group component regardless of group composition. Thus, in nature the conditions for multilevel selection to be effective in bringing about social change may be common. In terms of the sustainability of breeding programs, multilevel selection is easy to implement and is expected to give near-optimal responses with reduced rates of inbreeding as compared to group selection; the only requirement is that animals be housed in kin groups. PMID:23730755
Multilevel Modeling of Social Segregation
ERIC Educational Resources Information Center
Leckie, George; Pillinger, Rebecca; Jones, Kelvyn; Goldstein, Harvey
2012-01-01
The traditional approach to measuring segregation is based upon descriptive, non-model-based indices. A recently proposed alternative is multilevel modeling. The authors further develop the argument for a multilevel modeling approach by first describing and expanding upon its notable advantages, which include an ability to model segregation at a…
A Primer on Multilevel Modeling
ERIC Educational Resources Information Center
Hayes, Andrew F.
2006-01-01
Multilevel modeling (MLM) is growing in use throughout the social sciences. Although daunting from a mathematical perspective, MLM is relatively easy to employ once some basic concepts are understood. In this article, I present a primer on MLM, describing some of these principles and applying them to the analysis of a multilevel data set on…
NASA Astrophysics Data System (ADS)
Wuensche, Andrew
DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.
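A minimal sketch of the space-time patterns DDLab visualizes, using a 1-D elementary cellular automaton. This is a generic illustration, not DDLab's own code; the rule numbering follows the standard Wolfram convention:

```python
def eca_step(cells, rule):
    """One synchronous update of a 1-D elementary CA with periodic boundary:
    each cell's new state is the bit of `rule` indexed by its 3-cell
    neighbourhood read as a binary number."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

def space_time(width, steps, rule):
    """Iterate from a single seeded cell and collect the space-time pattern
    (one row per time step), as DDLab would draw it."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = eca_step(cells, rule)
        rows.append(cells)
    return rows
```

Rule 90 (new state = left XOR right) grows the familiar Sierpinski triangle from a single seed; running many random initial states backwards instead is, in essence, what DDLab's attractor-basin view computes.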
Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest. PMID:18624656
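The coupling of fast neuronal dynamics with a slow Hebbian rule that includes passive forgetting can be sketched as follows. This is a minimal Python illustration; the network size, learning rate eps, and forgetting rate lam are illustrative, not the paper's parameters:

```python
import math
import random

def step_network(W, x):
    """Discrete-time rate dynamics: x_i(t+1) = tanh(sum_j W_ij x_j(t))."""
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) for row in W]

def hebbian_update(W, x_new, x_old, eps=0.01, lam=0.005):
    """Generic Hebbian rule with passive forgetting: the decay term
    -lam*W slowly erases unused structure while eps*x_new*x_old
    reinforces co-active pairs."""
    n = len(W)
    return [[(1 - lam) * W[i][j] + eps * x_new[i] * x_old[j]
             for j in range(n)] for i in range(n)]

rng = random.Random(0)
n = 10
W = [[rng.gauss(0.0, 1.0 / n ** 0.5) for _ in range(n)] for _ in range(n)]
x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
for _ in range(200):            # interleave fast activity and slow learning
    x_new = step_network(W, x)
    W = hebbian_update(W, x_new, x)
    x = x_new
```

Because learning reshapes W while W drives the activity, the Jacobian of `step_network` changes along the trajectory, which is the structural/dynamical coupling the analysis tracks through the bifurcation sequence.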
Discrete Fractional Diffusion Equation of Chaotic Order
NASA Astrophysics Data System (ADS)
Wu, Guo-Cheng; Baleanu, Dumitru; Xie, He-Ping; Zeng, Sheng-Da
Discrete fractional calculus is suggested in diffusion modeling in porous media. A variable-order fractional diffusion equation is proposed on discrete time scales. A function of the variable order is constructed by a chaotic map. The model shows some new random behaviors in comparison with other variable-order cases.
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including the local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
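The basic multilevel processes listed above (relaxation sweeps, fine-to-coarse residual transfer, coarse-to-fine interpolation of corrections) can be sketched on a 1-D model problem. This is a finite-difference Python illustration with injection restriction and linear interpolation; the original work is formulated for finite element discretizations, and the grid sizes here are illustrative:

```python
import math

def residual(u, f, h):
    """Residual of the 1-D model problem -u'' = f with u = 0 at both ends."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / h ** 2
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi relaxation sweeps (the smoother)."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [ui + omega * h ** 2 / 2.0 * ri for ui, ri in zip(u, r)]
    return u

def coarse_solve(rc, H):
    """Direct tridiagonal solve of the coarse correction equation -e'' = r."""
    n = len(rc)
    b = [2.0 / H ** 2] * n          # diagonal
    off = -1.0 / H ** 2             # sub- and super-diagonal entries
    d = list(rc)
    for i in range(1, n):           # forward elimination
        m = off / b[i - 1]
        b[i] -= m * off
        d[i] -= m * d[i - 1]
    e = [0.0] * n
    e[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        e[i] = (d[i] - off * e[i + 1]) / b[i]
    return e

def two_grid_cycle(u, f, h):
    """Pre-smooth, restrict the residual, solve coarsely, interpolate, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    rc = [r[i] for i in range(2, len(u) - 1, 2)]    # restriction by injection
    ec = [0.0] + coarse_solve(rc, 2.0 * h) + [0.0]  # coarse correction + BCs
    for j in range(len(ec) - 1):                    # linear interpolation back
        u[2 * j] += ec[j]
        u[2 * j + 1] += 0.5 * (ec[j] + ec[j + 1])
    return jacobi(u, f, h, sweeps=3)

# demo: -u'' = pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x)
N, h = 9, 1.0 / 8.0
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(N)]
u = [0.0] * N
for _ in range(10):
    u = two_grid_cycle(u, f, h)
err = max(abs(ui - math.sin(math.pi * i * h)) for i, ui in enumerate(u))
```

Recursing into `coarse_solve` with further two-grid cycles instead of a direct solve turns this two-level sketch into a full multilevel cycle.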
Su, Gui-Jia
2003-06-10
A multilevel DC link inverter and method for improving torque response and current regulation in permanent magnet motors and switched reluctance motors having a low inductance includes a plurality of voltage-controlled cells connected in series for applying a resulting dc voltage comprised of one or more incremental dc voltages. The cells are provided with switches for increasing the resulting applied dc voltage as speed and back EMF increase, while limiting the voltage that is applied to the commutation switches to perform PWM or dc voltage stepping functions, so as to limit current ripple in the stator windings below an acceptable level, typically 5%. Several embodiments are disclosed, including inverters using IGBTs and inverters using thyristors. All of the inverters are operable in both motoring and regenerating modes.
Wang, S; Huang, G H; Zhou, Y
2016-05-01
In this study, a multi-level factorial-vertex fuzzy-stochastic programming (MFFP) approach is developed for optimization of water resources systems under probabilistic and possibilistic uncertainties. MFFP is capable of tackling fuzzy parameters at various combinations of α-cut levels, reflecting distinct attitudes of decision makers towards fuzzy parameters in the fuzzy discretization process based on the α-cut concept. The potential interactions among fuzzy parameters can be explored through a multi-level factorial analysis. A water resources management problem with fuzzy and random features is used to demonstrate the applicability of the proposed methodology. The results indicate that useful solutions can be obtained for the optimal allocation of water resources under fuzziness and randomness. They can help decision makers to identify desired water allocation schemes with maximized total net benefits. A variety of decision alternatives can also be generated under different scenarios of water management policies. The findings from the factorial experiment reveal the interactions among design factors (fuzzy parameters) and their curvature effects on the total net benefit, which are helpful in uncovering the valuable information hidden beneath the parameter interactions affecting system performance. A comparison between MFFP and the vertex method is also conducted to demonstrate the merits of the proposed methodology. PMID:26922500
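The α-cut discretization of fuzzy parameters and the factorial enumeration of their vertex combinations can be sketched as follows. This is a minimal Python illustration with hypothetical triangular fuzzy numbers; the actual MFFP model couples these vertices to a stochastic water-allocation program:

```python
from itertools import product

def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (low, mode, high):
    alpha = 0 gives the full support, alpha = 1 collapses to the mode."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def factorial_vertices(fuzzy_params, alpha):
    """Enumerate every vertex combination of the alpha-cut intervals,
    i.e. a two-level factorial design over the fuzzy parameters."""
    intervals = [alpha_cut(p, alpha) for p in fuzzy_params]
    return list(product(*intervals))
```

Solving the underlying optimization at each vertex and comparing the outcomes is what exposes the parameter interactions and curvature effects the factorial analysis reports.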
Multilevel techniques for nonelliptic problems
NASA Technical Reports Server (NTRS)
Jespersen, D. C.
1981-01-01
Multigrid and multilevel methods are extended to the solution of nonelliptic problems. A framework for analyzing these methods is established. A simple nonelliptic problem is given, and it is shown how a multilevel technique can be used for its solution. Emphasis is on smoothness properties of eigenvectors and attention is drawn to the possibility of conditioning the eigensystem so that eigenvectors have the desired smoothness properties.
Enders, Craig K; Mistler, Stephen A; Keller, Brian T
2016-06-01
Although missing data methods have advanced in recent years, methodologists have devoted less attention to multilevel data structures where observations at level-1 are nested within higher-order organizational units at level-2 (e.g., individuals within neighborhoods; repeated measures nested within individuals; students nested within classrooms). Joint modeling and chained equations imputation are the principal imputation frameworks for single-level data, and both have multilevel counterparts. These approaches differ algorithmically and in their functionality; both are appropriate for simple random intercept analyses with normally distributed data, but they differ beyond that. The purpose of this paper is to describe multilevel imputation strategies and evaluate their performance in a variety of common analysis models. Using multiple imputation theory and computer simulations, we derive 4 major conclusions: (a) joint modeling and chained equations imputation are appropriate for random intercept analyses; (b) the joint model is superior for analyses that posit different within- and between-cluster associations (e.g., a multilevel regression model that includes a level-1 predictor and its cluster means, a multilevel structural equation model with different path values at level-1 and level-2); (c) chained equations imputation provides a dramatic improvement over joint modeling in random slope analyses; and (d) a latent variable formulation for categorical variables is quite effective. We use a real data analysis to demonstrate multilevel imputation, and we suggest a number of avenues for future research. (PsycINFO Database Record) PMID:26690775
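The random-intercept data structure these imputation frameworks target can be sketched with a toy two-level generator; everything here (the function name, the ICC-style variance split, and all parameter values) is illustrative rather than taken from the paper:

```python
import random

def make_multilevel_data(n_clusters, n_per_cluster, icc, seed=0):
    """Generate two-level data with a random intercept: y_ij = u_j + e_ij,
    with between-cluster variance `icc` and within-cluster variance 1 - icc,
    so the intraclass correlation of y is `icc`."""
    rng = random.Random(seed)
    tau = icc ** 0.5            # SD of cluster random intercepts
    sigma = (1.0 - icc) ** 0.5  # SD of level-1 residuals
    data = []
    for j in range(n_clusters):
        u_j = rng.gauss(0.0, tau)
        for _ in range(n_per_cluster):
            data.append((j, u_j + rng.gauss(0.0, sigma)))
    return data

# 50 clusters of 20 observations with an intraclass correlation of 0.2.
data = make_multilevel_data(50, 20, icc=0.2)
```

A generator like this is the usual starting point for the kind of simulation study the paper reports: impose missingness on `data`, impute with each framework, and compare the recovered variance components.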
Multilevel analysis in road safety research.
Dupont, Emmanuelle; Papadimitriou, Eleonora; Martensen, Heike; Yannis, George
2013-11-01
Hierarchical structures in road safety data are receiving increasing attention in the literature and multilevel (ML) models are proposed for appropriately handling the resulting dependences among the observations. However, so far no empirical synthesis exists of the actual added value of ML modelling techniques as compared to other modelling approaches. This paper summarizes the statistical and conceptual background and motivations for multilevel analyses in road safety research. It then provides a review of several ML analyses applied to aggregate and disaggregate (accident) data. In each case, the relevance of ML modelling techniques is assessed by examining whether ML model formulations (i) allow improving the fit of the model to the data, (ii) allow identifying and explaining random variation at specific levels of the hierarchy considered, and (iii) yield different (more correct) conclusions than single-level model formulations with respect to the significance of the parameter estimates. The evidence reviewed offers different conclusions depending on whether the analysis concerns aggregate data or disaggregate data. In the first case, the application of ML analysis techniques appears straightforward and relevant. The studies based on disaggregate accident data, on the other hand, offer mixed findings: computational problems can be encountered, and ML applications are not systematically necessary. The general recommendation concerning disaggregate accident data is to proceed to a preliminary investigation of the necessity of ML analyses and of the additional information to be expected from their application. PMID:23769622
Multilevel turbulence simulations
Tziperman, E.
1994-12-31
The authors propose a novel method for the simulation of turbulent flows, motivated by and based on the Multigrid (MG) formalism. The method, called Multilevel Turbulence Simulations (MTS), is potentially more efficient and more accurate than LES. In many physical problems one is interested in the effects of the small scales on the larger ones, or in a typical realization of the flow, and not in the detailed time history of each small-scale feature. MTS takes advantage of the fact that the detailed simulation of small scales is not needed at all times, in order to make the calculation significantly more efficient, while accurately accounting for the effects of the small scales on the larger scale of interest. In MTS, models of several resolutions are used to represent the turbulent flow. The model equations at each coarse level incorporate a closure term, roughly corresponding to the tau correction in the MG formalism, that accounts for the effects of the unresolvable scales on that grid. The finer resolution grids are used only a small portion of the simulation time in order to evaluate the closure terms for the coarser grids, while the coarse resolution grids are then used to accurately and efficiently calculate the evolution of the larger scales. The method's efficiency relative to direct simulations is of the order of the ratio of the required integration time to the turnover time of the smallest eddies, potentially resulting in orders of magnitude improvement for a large class of turbulence problems.
Multilevel fusion exploitation
NASA Astrophysics Data System (ADS)
Lindberg, Perry C.; Dasarathy, Belur V.; McCullough, Claire L.
1996-06-01
This paper describes a project that was sponsored by the U.S. Army Space and Strategic Defense Command (USASSDC) to develop, test, and demonstrate sensor fusion algorithms for target recognition. The purpose of the project was to exploit the use of sensor fusion at all levels (signal, feature, and decision levels) and all combinations to improve target recognition capability against tactical ballistic missile (TBM) targets. These algorithms were trained with simulated radar signatures to accurately recognize selected TBM targets. The simulated signatures represent measurements made by two radars (S-band and X-band) with the targets at a variety of aspect and roll angles. Two tests were conducted: one with simulated signatures collected at angles different from those in the training database and one using actual test data. The test results demonstrate a high degree of recognition accuracy. This paper describes the training and testing techniques used; shows the fusion strategy employed; and illustrates the advantages of exploiting multi-level fusion.
A New Approach for Estimating a Nonlinear Growth Component in Multilevel Modeling
ERIC Educational Resources Information Center
Tolvanen, Asko; Kiuru, Noona; Leskinen, Esko; Hakkarainen, Kai; Inkinen, Mikko; Lonka, Kirsti; Salmela-Aro, Katariina
2011-01-01
This study presents a new approach to estimation of a nonlinear growth curve component with fixed and random effects in multilevel modeling. This approach can be used to estimate change in longitudinal data, such as day-of-the-week fluctuation. The motivation of the new approach is to avoid spurious estimates in a random coefficient regression…
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N). PMID:18830332
ERIC Educational Resources Information Center
Ghezzi, Patrick M.
2007-01-01
The advantages of emphasizing discrete trials "teaching" over discrete trials "training" are presented first, followed by a discussion of discrete trials as a method of teaching that emerged historically--and as a matter of necessity for difficult learners such as those with autism--from discrete trials as a method for laboratory research. The…
Multilevel solvers of first-order system least-squares for Stokes equations
Lai, Chen-Yao G.
1996-12-31
Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations, based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L²-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.
Multilevel codes and multistage decoding
NASA Astrophysics Data System (ADS)
Calderbank, A. R.
1989-03-01
Imai and Hirakawa (1977) proposed a multilevel coding method based on binary block codes that admits a staged decoding procedure. Here the coding method is extended to coset codes, and it is shown how to calculate minimum squared distance and path multiplicity in terms of the norms and multiplicities of the different cosets. The multilevel structure allows the redundancy in the coset selection procedure to be allocated efficiently among the different levels. It also allows the use of suboptimal multistage decoding procedures that have performance/complexity advantages over maximum-likelihood decoding.
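A minimal sketch of the Imai–Hirakawa-style construction and the multistage decoding described above, using a [3,1] repetition code at level 1 and a [3,2] single-parity-check code at level 2 over 4-PAM; the component codes, labeling, and noise values are illustrative choices, not taken from the paper:

```python
from itertools import product

# Map the two level bits (b1, b2) to a 4-PAM amplitude in {-3, -1, 1, 3}.
AMPS = {(b1, b2): 2 * (b1 + 2 * b2) - 3 for b1 in (0, 1) for b2 in (0, 1)}

REP = [(0, 0, 0), (1, 1, 1)]                                     # level-1 code
SPC = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]  # level-2 code

def encode(u1, u2):
    """Encode one level-1 bit and two level-2 bits into three 4-PAM symbols."""
    c1 = REP[u1]
    c2 = (u2[0], u2[1], (u2[0] + u2[1]) % 2)
    return [AMPS[(c1[i], c2[i])] for i in range(3)]

def multistage_decode(r):
    # Stage 1: score each level-1 codeword by the nearest symbol consistent
    # with that bit at each position (minimum squared distance).
    def metric1(c1):
        return sum(min((r[i] - AMPS[(c1[i], b2)]) ** 2 for b2 in (0, 1))
                   for i in range(3))
    c1 = min(REP, key=metric1)
    # Stage 2: decode level 2 assuming the stage-1 decisions are correct.
    def metric2(c2):
        return sum((r[i] - AMPS[(c1[i], c2[i])]) ** 2 for i in range(3))
    c2 = min(SPC, key=metric2)
    return c1, c2

sent = encode(1, (0, 1))
noisy = [s + n for s, n in zip(sent, (0.4, -0.3, 0.2))]
c1, c2 = multistage_decode(noisy)
```

Stage 2 conditions on stage 1's decisions rather than searching the product codebook, which is exactly the performance/complexity trade-off the abstract mentions.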
Multilevel Ensemble Transform Particle Filtering
NASA Astrophysics Data System (ADS)
Gregory, Alastair; Cotter, Colin; Reich, Sebastian
2016-04-01
This presentation extends the Multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, Multilevel Monte Carlo is applied to a certain variant of the particle filter, the Ensemble Transform Particle Filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles.
Multi-level methods and approximating distribution functions
NASA Astrophysics Data System (ADS)
Wilson, D.; Baker, R. E.
2016-07-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well-documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
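Gillespie's direct method, mentioned above as the canonical exact stochastic simulation algorithm, can be sketched for a simple birth-death network; the rate constants and sample counts are arbitrary illustrative choices:

```python
import random
import math

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Minimal Gillespie direct-method simulation of a birth-death process.

    Reactions: 0 -> X at rate k_birth, X -> 0 at rate k_death * x.
    Returns the population at time t_end.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1 = k_birth          # propensity of birth
        a2 = k_death * x      # propensity of death
        a0 = a1 + a2
        # Time to the next reaction is exponential with rate a0.
        t += -math.log(rng.random()) / a0
        if t > t_end:
            return x
        # Choose which reaction fires, proportionally to its propensity.
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1

# Estimate the stationary mean (k_birth / k_death = 10) over many paths.
samples = [gillespie_birth_death(10.0, 1.0, 0, 20.0, seed=s) for s in range(500)]
mean_x = sum(samples) / len(samples)
```

Each exact path resolves every reaction event, which is precisely the cost that tau-leaping and the multi-level method described above aim to avoid.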
A General Multilevel SEM Framework for Assessing Multilevel Mediation
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zyphur, Michael J.; Zhang, Zhen
2010-01-01
Several methods for testing mediation hypotheses with 2-level nested data have been proposed by researchers using a multilevel modeling (MLM) paradigm. However, these MLM approaches do not accommodate mediation pathways with Level-2 outcomes and may produce conflated estimates of between- and within-level components of indirect effects. Moreover,…
Comparing Spatial and Multilevel Regression Models for Binary Outcomes in Neighborhood Studies
Xu, Hongwei
2013-01-01
The standard multilevel regressions that are widely used in neighborhood research typically ignore potential between-neighborhood correlations due to underlying spatial processes, and hence produce inappropriate inferences about neighborhood effects. In contrast, spatial models make estimations and predictions across areas by explicitly modeling the spatial correlations among observations in different locations. A better understanding of the strengths and limitations of spatial models as compared to the standard multilevel model is needed to improve the research on neighborhood and spatial effects. This research systematically compares model estimations and predictions for binary outcomes between (distance- and lattice-based) spatial and the standard multilevel models in the presence of both within- and between-neighborhood correlations, through simulations. Results from simulation analysis reveal that the standard multilevel and spatial models produce similar estimates of fixed effects, but different estimates of random effects variances. Both the standard multilevel and pure spatial models tend to overestimate the corresponding random effects variances, compared to hybrid models when both non-spatial within-neighborhood and spatial between-neighborhood effects exist. Spatial models also outperform the standard multilevel model by a narrow margin in case of fully out-of-sample predictions. Distance-based spatial models provide extra spatial information and have stronger predictive power than lattice-based models under certain circumstances. These merits of spatial modeling are exhibited in an empirical analysis of the child mortality data from 1880 Newark, New Jersey. PMID:25284905
Three-dimensional discrete ordinates reactor assembly calculations on GPUs
Evans, Thomas M; Joubert, Wayne; Hamilton, Steven P; Johnson, Seth R; Turner, John A; Davidson, Gregory G; Pandya, Tara M
2015-01-01
In this paper we describe and demonstrate a discrete ordinates sweep algorithm on GPUs. This sweep algorithm is nested within a multilevel communication-based decomposition based on energy. We demonstrate the effectiveness of this algorithm on detailed three-dimensional critical experiments and PWR lattice problems. For these problems we show improvement factors of 4-6 over conventional communication-based, CPU-only sweeps. These sweep kernel speedups resulted in a factor of 2 total time-to-solution improvement.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
NASA Astrophysics Data System (ADS)
Xiao, Qijun; Krotkov, Robert; Tuominen, Mark
2006-03-01
Digital data storage technology generally relies on a binary storage paradigm. In this work we explore a different scheme that exploits the stepwise, multilevel total magnetization of a small cluster of interacting nanomagnets. The magnetization of a cluster can be resolved more easily than that of a single nanomagnet, due to the larger lateral size. Micromagnetic simulations, based on the Landau-Lifshitz-Gilbert (LLG) equation with parameters representative of Co3Pt, reveal that magnetostatic interactions within a cluster produce a rich multilevel magnetic response, each level providing a stable remanent magnetization state. This work describes simulations used to investigate a multilevel data storage unit based on a hexagonal cluster of interacting uniaxial single domain nanomagnets. The accessibility and stability of the discrete magnetization states are studied. The switching properties of the nanomagnet clusters can be tuned by modifying the geometry, providing the ability to engineer desirable magnetic properties.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{-2}) or O(ε^{-2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{-3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^{-5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
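The telescoping multilevel Monte Carlo construction the abstract refers to can be illustrated on a toy SDE; this sketch uses a geometric Brownian motion with Euler–Maruyama coupling (not the Milstein scheme or the Coulomb collision operator of the paper), and all parameter values are illustrative:

```python
import random
import math

def coupled_euler(level, M0, rng, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """One coupled fine/coarse Euler-Maruyama path of dX = mu X dt + sigma X dW.

    The fine path uses M0 * 2**level steps; the coarse path (level - 1) reuses
    the same Brownian increments, which keeps the level variance small.
    Returns (fine_XT, coarse_XT); coarse_XT is None on level 0.
    """
    nf = M0 * 2 ** level
    hf = T / nf
    xf = x0
    xc = x0 if level > 0 else None
    dw_pair = 0.0
    for n in range(nf):
        dw = rng.gauss(0.0, math.sqrt(hf))
        xf += mu * xf * hf + sigma * xf * dw
        if level > 0:
            dw_pair += dw
            if n % 2 == 1:            # every two fine steps = one coarse step
                xc += mu * xc * (2 * hf) + sigma * xc * dw_pair
                dw_pair = 0.0
    return xf, xc

def mlmc_mean(levels, samples_per_level, M0=4, seed=1):
    """Telescoping MLMC estimate of E[X_T]: E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level, n_samples in zip(range(levels + 1), samples_per_level):
        acc = 0.0
        for _ in range(n_samples):
            xf, xc = coupled_euler(level, M0, rng)
            acc += xf if level == 0 else xf - xc
        total += acc / n_samples
    return total

# Many cheap coarse paths, few expensive fine ones; exact mean is e^0.05.
est = mlmc_mean(levels=3, samples_per_level=[4000, 1000, 400, 200])
```

The sample allocation (many paths on coarse levels, few on fine) is what yields the cost reductions quoted above relative to single-level Monte Carlo.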
Tree Ensembles on the Induced Discrete Space.
Yildiz, Olcay Taner
2016-05-01
Decision trees are widely used predictive models in machine learning. Recently, the K-tree was proposed, where the original discrete feature space is expanded by generating all orderings of the values of k discrete attributes, and these orderings are used as the new attributes in decision tree induction. Although the K-tree performs significantly better than the ordinary decision tree, its exponential time complexity can prohibit its use. In this brief, we propose K-forest, an extension of the random forest, where a subset of features is selected randomly from the induced discrete space. Simulation results on 17 data sets show that the novel ensemble classifier has a significantly lower error rate compared with the random forest based on the original feature space. PMID:26011897
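The induced discrete space can be sketched for a single attribute: each ordering (permutation) of its value set defines a new ordinal feature. This toy construction is an assumption about the encoding for illustration, not the paper's exact definition:

```python
from itertools import permutations

def induced_features(values):
    """All ordinal encodings of one discrete attribute: each permutation of
    its value set defines a new integer-valued feature (value -> rank)."""
    return [dict(zip(perm, range(len(perm)))) for perm in permutations(values)]

# A 3-valued attribute induces 3! = 6 candidate ordinal features.
encodings = induced_features(["low", "mid", "high"])
row = "mid"
induced = [enc[row] for enc in encodings]
```

The exponential blow-up the abstract mentions is visible here: k attributes with v values each induce on the order of (v!)^k such features, which is why K-forest samples only a random subset of them.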
Go, Vivian F.; Frangakis, Constantine; Minh, Nguyen Le; Latkin, Carl; Ha, Tran Viet; Mo, Tran Thi; Sripaipan, Teerada; Davis, Wendy W.; Zelaya, Carla; Vu, Pham The; Celentano, David D.; Quan, Vu Minh
2015-01-01
Introduction Injecting drug use is a primary driver of HIV epidemics in many countries. People who inject drugs (PWID) and are HIV infected are often doubly stigmatized and many encounter difficulties reducing risk behaviors. Prevention interventions for HIV-infected PWID that provide enhanced support at the individual, family, and community level to facilitate risk-reduction are needed. Methods 455 HIV-infected PWID and 355 of their HIV-negative injecting network members living in 32 sub-districts in Thai Nguyen Province were enrolled. We conducted a two-stage randomization: First, sub-districts were randomized to either a community video screening and house-to-house visits or standard of care educational pamphlets. Second, within each sub-district, participants were randomized to receive either enhanced individual level post-test counseling and group support sessions or standard of care HIV testing and counseling. This resulted in four arms: 1) standard of care; 2) community level intervention; 3) individual level intervention; and 4) community plus individual intervention. Follow-up was conducted at 6, 12, 18, and 24 months. Primary outcomes were self-reported HIV injecting and sexual risk behaviors. Secondary outcomes included HIV incidence among HIV-negative network members. Results Fewer participants reported sharing injecting equipment and unprotected sex from baseline to 24 months in all arms (77% to 4% and 24% to 5% respectively). There were no significant differences at the 24-month visit among the 4 arms (Wald = 3.40 (3 df); p = 0.33; Wald = 6.73 (3 df); p = 0.08). There were a total of 4 HIV seroconversions over 24 months with no significant difference between intervention and control arms. Discussion Understanding the mechanisms through which all arms, particularly the control arm, demonstrated both low risk behaviors and low HIV incidence has important implications for policy and prevention programming. Trial Registration ClinicalTrials.gov NCT
Modeling Repeatable Events Using Discrete-Time Data: Predicting Marital Dissolution
ERIC Educational Resources Information Center
Teachman, Jay
2011-01-01
I join two methodologies by illustrating the application of multilevel modeling principles to hazard-rate models with an emphasis on procedures for discrete-time data that contain repeatable events. I demonstrate this application using data taken from the 1995 National Survey of Family Growth (NSFG) to ascertain the relationship between multiple…
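The person-period expansion underlying discrete-time hazard models, together with the life-table hazard estimate, can be sketched as follows; the toy cohort data are invented for illustration:

```python
def person_period(duration, event):
    """Expand one subject's (duration, event) pair into discrete-time
    person-period records (t, y): y = 1 only in the final period, and only
    if the event occurred, as used in discrete-time hazard modeling."""
    records = []
    for t in range(1, duration + 1):
        y = 1 if (event and t == duration) else 0
        records.append((t, y))
    return records

def life_table_hazard(subjects, max_t):
    """Empirical discrete-time hazard h(t) = events at t / subjects at risk."""
    hazards = []
    for t in range(1, max_t + 1):
        at_risk = [s for s in subjects if s[0] >= t]
        events = [s for s in at_risk if s[1] and s[0] == t]
        hazards.append(len(events) / len(at_risk) if at_risk else 0.0)
    return hazards

# Toy cohort: (observed duration, event indicator); event = 0 means censored.
cohort = [(1, 1), (2, 1), (2, 0), (3, 1), (3, 0)]
rows = person_period(3, True)
h = life_table_hazard(cohort, 3)
```

Fitting a logistic regression to the stacked person-period records recovers the discrete-time hazard model; for repeatable events such as marital dissolution, subjects simply contribute a new spell of records after each event, with a multilevel random effect linking spells from the same person.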
Discreteness inducing coexistence
NASA Astrophysics Data System (ADS)
dos Santos, Renato Vieira
2013-12-01
Consider two species that diffuse through space. Consider further that they differ only in initial densities and, possibly, in diffusion constants. Otherwise they are identical. What happens if they compete with each other in the same environment? What is the influence of the discrete nature of the interactions on the final destination? And what are the influence of diffusion and additive fluctuations corresponding to random migration and immigration of individuals? This paper aims to answer these questions for a particular competition model that incorporates intra and interspecific competition between the species. Based on mean field theory, the model has a stationary state dependent on the initial density conditions. We investigate how this initial density dependence is affected by the presence of demographic multiplicative noise and additive noise in space and time. There are three main conclusions: (1) Additive noise favors denser populations at the expense of the less dense, ratifying the competitive exclusion principle. (2) Demographic noise, on the other hand, favors less dense populations at the expense of the denser ones, inducing equal densities at the quasi-stationary state, violating the aforementioned principle. (3) The slower species always suffers the more deleterious effects of statistical fluctuations in a homogeneous medium.
Principles of Discrete Time Mechanics
NASA Astrophysics Data System (ADS)
Jaroszkiewicz, George
2014-04-01
1. Introduction; 2. The physics of discreteness; 3. The road to calculus; 4. Temporal discretization; 5. Discrete time dynamics architecture; 6. Some models; 7. Classical cellular automata; 8. The action sum; 9. Worked examples; 10. Lee's approach to discrete time mechanics; 11. Elliptic billiards; 12. The construction of system functions; 13. The classical discrete time oscillator; 14. Type 2 temporal discretization; 15. Intermission; 16. Discrete time quantum mechanics; 17. The quantized discrete time oscillator; 18. Path integrals; 19. Quantum encoding; 20. Discrete time classical field equations; 21. The discrete time Schrodinger equation; 22. The discrete time Klein-Gordon equation; 23. The discrete time Dirac equation; 24. Discrete time Maxwell's equations; 25. The discrete time Skyrme model; 26. Discrete time quantum field theory; 27. Interacting discrete time scalar fields; 28. Space, time and gravitation; 29. Causality and observation; 30. Concluding remarks; Appendix A. Coherent states; Appendix B. The time-dependent oscillator; Appendix C. Quaternions; Appendix D. Quantum registers; References; Index.
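The discrete time oscillator of chapters 13 and 17 can be given a minimal classical flavor with a leapfrog (Störmer–Verlet) discretization; this sketch is a generic illustration of temporal discretization, not the book's specific system-function construction:

```python
import math

def leapfrog_oscillator(omega, dt, steps, x0=1.0, v0=0.0):
    """Discrete-time harmonic oscillator via the leapfrog (kick-drift-kick)
    scheme, a symplectic temporal discretization of x'' = -omega^2 x."""
    x, v = x0, v0
    xs = [x]
    for _ in range(steps):
        v += -omega ** 2 * x * (dt / 2)  # half kick
        x += v * dt                      # drift
        v += -omega ** 2 * x * (dt / 2)  # half kick
        xs.append(x)
    return xs

# Integrate roughly one period of a unit oscillator; x should return near 1.
xs = leapfrog_oscillator(omega=1.0, dt=0.01, steps=int(2 * math.pi / 0.01))
```

The scheme exactly conserves a discrete analogue of the energy up to O(dt²), which is the kind of structure-preservation that motivates discrete time mechanics.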
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization, is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, at a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
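The cycling between coarser and finer discretization levels that MLAT generalizes can be sketched as a basic two-grid cycle for -u'' = f in 1D; the smoother choice and sweep counts here are illustrative, and the coarse problem is solved directly rather than recursively:

```python
import math

def weighted_jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Damped Jacobi smoothing for -u'' = f on a uniform grid, u[0]=u[-1]=0."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i-1] - u[i+1]) / (h * h)
    return r

def solve_dirichlet(f, h):
    """Exact tridiagonal (Thomas) solve of -u'' = f with zero boundaries."""
    n = len(f)
    m = n - 2
    diag, off = 2.0 / (h * h), -1.0 / (h * h)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = off / diag, f[1] / diag
    for i in range(1, m):
        den = diag - off * cp[i-1]
        cp[i] = off / den
        dp[i] = (f[i+1] - off * dp[i-1]) / den
    x = [0.0] * m
    x[m-1] = dp[m-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return [0.0] + x + [0.0]

def two_grid(u, f, h):
    """Pre-smooth, restrict the residual, solve the coarse-grid correction,
    interpolate it back, correct, and post-smooth: one coarse/fine cycle."""
    u = weighted_jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2 + 1
    rc = [0.0] * nc
    for i in range(1, nc - 1):                   # full-weighting restriction
        rc[i] = 0.25 * r[2*i-1] + 0.5 * r[2*i] + 0.25 * r[2*i+1]
    ec = solve_dirichlet(rc, 2 * h)              # coarse-grid correction
    for i in range(1, nc - 1):                   # linear interpolation back
        u[2*i] += ec[i]
    for i in range(nc - 1):
        u[2*i+1] += 0.5 * (ec[i] + ec[i+1])
    return weighted_jacobi(u, f, h, sweeps=3)

# Solve -u'' = pi^2 sin(pi x) on [0, 1]; the exact solution is sin(pi x).
n = 65
h = 1.0 / (n - 1)
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
for _ in range(10):
    u = two_grid(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n))
```

Replacing the direct coarse solve with a recursive call to the same cycle yields the full multigrid hierarchy; the smoother damps oscillatory error while the coarse grid removes the smooth error, which is the division of labor the abstract alludes to.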
Smorgick, Yossi; Park, Daniel K.; Baker, Kevin C; Lurie, Jon D.; Tosteson, Tor D.; Zhao, Wenyan; Herkowitz, Harry; Fischgrund, Jeffrey S; Weinstein, James N.
2013-01-01
Study design A subanalysis study. Objective To compare surgical outcomes and complications of multilevel decompression and single-level fusion versus multilevel decompression and multilevel fusion for patients with multilevel lumbar stenosis and single-level degenerative spondylolisthesis. Summary of Background Data In patients with degenerative spondylolisthesis who are treated surgically, decompression and fusion provides a better clinical outcome than decompression alone. Surgical treatment for multilevel lumbar stenosis and degenerative spondylolisthesis typically includes decompression and fusion of the spondylolisthesis segment and decompression with or without fusion for the other stenotic segments. To date, no study has compared the results of these two surgical options for single-level degenerative spondylolisthesis with multilevel stenosis. Methods The results from a multicenter randomized and observational study, the Spine Patient Outcomes Research Trial (SPORT), comparing multilevel decompression and single-level fusion with multilevel decompression and multilevel fusion for spinal stenosis with spondylolisthesis, were analyzed. The primary outcome measures were the Bodily Pain and Physical Function scales of the Medical Outcomes Study 36-item Short-Form General Health Survey (SF-36) and the modified Oswestry Disability Index at 1, 2, 3, and 4 years postoperatively. Secondary analysis consisted of the stenosis bothersomeness index, low back pain bothersomeness, leg pain, patient satisfaction, and self-rated progress. Results Overall, 207 patients were enrolled in the study: 130 had multilevel decompression with single-level fusion and 77 had multilevel decompression and multilevel fusion. For all primary and secondary outcome measures, there were no statistically significant differences in surgical outcomes between the two surgical techniques. However, operative time and intraoperative blood loss were significantly higher in the multilevel fusion
Within-Cluster and Across-Cluster Matching with Observational Multilevel Data
ERIC Educational Resources Information Center
Kim, Jee-Seon; Steiner, Peter M.; Hall, Courtney; Thoemmes, Felix
2013-01-01
When randomized experiments cannot be conducted in practice, propensity score (PS) techniques for matching treated and control units are frequently used for estimating causal treatment effects from observational data. Despite the popularity of PS techniques, they are not yet well studied for matching multilevel data where selection into treatment…
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
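The role nesting plays in such power computations can be sketched with the usual design-effect adjustment, 1 + (n - 1)ρ; the normal approximation and all parameter values below are illustrative assumptions, not Konstantopoulos's tables:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def crt_power(delta, clusters_per_arm, n_per_cluster, icc, alpha=0.05):
    """Approximate power of a two-arm cluster-randomized design, shrinking
    the sample by the design effect 1 + (n - 1) * icc and applying a
    two-sided normal approximation to the test of the effect size delta."""
    deff = 1.0 + (n_per_cluster - 1) * icc
    n_eff = clusters_per_arm * n_per_cluster / deff  # effective n per arm
    ncp = delta * math.sqrt(n_eff / 2.0)             # noncentrality
    z_crit = 1.959963984540054                       # z for alpha = 0.05
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

# Same nominal sample, with and without clustering (ICC = 0.05 vs 0).
power_nested = crt_power(delta=0.3, clusters_per_arm=20, n_per_cluster=25, icc=0.05)
power_srs = crt_power(delta=0.3, clusters_per_arm=20, n_per_cluster=25, icc=0.0)
```

Even a modest ICC of 0.05 with 25 students per cluster inflates the variance by a factor of 2.2, which is why simple-random-sample power tables overstate the power of nested designs.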
ERIC Educational Resources Information Center
McCormick, John; Barnett, Kerry
2008-01-01
Purpose: The purpose of this paper was to posit and test hypotheses concerned with relationships between teachers' demographics, locus of control and career stages. Design/methodology/approach: A sample consisting of 416 Australian non-executive high school teachers was gathered from 40 randomly selected high schools. Multilevel regression…
Wieselquist, William A.; Anistratov, Dmitriy Y.; Morel, Jim E.
2014-09-15
We present a quasidiffusion (QD) method for solving neutral particle transport problems in Cartesian XY geometry on unstructured quadrilateral meshes, including local refinement capability. Neutral particle transport problems are central to many applications including nuclear reactor design, radiation safety, astrophysics, medical imaging, radiotherapy, nuclear fuel transport/storage, shielding design, and oil well-logging. The primary development is a new discretization of the low-order QD (LOQD) equations based on cell-local finite differences. The accuracy of the LOQD equations depends on proper calculation of special non-linear QD (Eddington) factors from a transport solution. In order to completely define the new QD method, a proper discretization of the transport problem is also presented. The transport equation is discretized by a conservative method of short characteristics with a novel linear approximation of the scattering source term and monotonic, parabolic representation of the angular flux on incoming faces. Analytic and numerical tests are used to test the accuracy and spatial convergence of the non-linear method. All tests exhibit O(h²) convergence of the scalar flux on orthogonal, random, and multi-level meshes.
Bayesian approach to global discrete optimization
Mockus, J.; Mockus, A.; Mockus, L.
1994-12-31
We discuss advantages and disadvantages of the Bayesian approach (average case analysis). We present the portable interactive version of software for continuous global optimization. We consider practical multidimensional problems of continuous global optimization, such as optimization of VLSI yield, optimization of composite laminates, and estimation of unknown parameters of bilinear time series. We extend the Bayesian approach to discrete optimization. We regard the discrete optimization as a multi-stage decision problem. We assume that there exists some simple heuristic function which roughly predicts the consequences of the decisions. We suppose randomized decisions. We define the probability of the decision by the randomized decision function depending on heuristics. We fix this function with the exception of some parameters. We repeat the randomized decision several times at the fixed values of those parameters and accept the best decision as the result. We optimize the parameters of the randomized decision function to make the search more efficient. Thus we reduce the discrete optimization problem to the continuous problem of global stochastic optimization. We solve this problem by the Bayesian methods of continuous global optimization. We describe applications to some well-known problems of discrete programming, such as knapsack, traveling salesman, and scheduling.
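As a hedged illustration of the scheme described above (not the authors' implementation), the following sketch uses a randomized greedy knapsack heuristic whose decision probabilities depend on a value-density heuristic through a continuous parameter tau; tuning tau by a simple grid search stands in for the Bayesian continuous optimizer, and all problem data are made up for the example.

```python
import math
import random

def randomized_knapsack(values, weights, capacity, tau, trials=200, seed=0):
    """Best-of-`trials` randomized greedy knapsack: at each step an item is
    drawn with probability proportional to exp(density / tau), so tau
    interpolates between greedy (tau -> 0) and uniform random (tau large)."""
    rng = random.Random(seed)
    best_val = 0
    for _ in range(trials):
        remaining, total = capacity, 0
        items = list(range(len(values)))
        while True:
            feasible = [i for i in items if weights[i] <= remaining]
            if not feasible:
                break
            # heuristic: value density; tau controls greediness
            w = [math.exp((values[i] / weights[i]) / tau) for i in feasible]
            pick = rng.choices(feasible, weights=w, k=1)[0]
            total += values[pick]
            remaining -= weights[pick]
            items.remove(pick)
        best_val = max(best_val, total)
    return best_val

# Outer loop: tune the continuous parameter tau; a grid search stands in
# for the Bayesian global optimizer used in the paper.
values, weights, cap = [10, 7, 5, 4, 3], [6, 4, 3, 2, 2], 9
best = max(randomized_knapsack(values, weights, cap, tau)
           for tau in (0.1, 0.5, 1.0, 2.0))
```

Repeating the randomized decision and keeping the best result, then optimizing tau, is exactly the reduction of a discrete problem to a continuous stochastic one that the abstract describes.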
Electrolytic plating apparatus for discrete microsized particles
Mayer, Anton
1976-11-30
Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with a powered cathode for a time sufficient for such to occur.
Electroless plating apparatus for discrete microsized particles
Mayer, Anton
1978-01-01
Method and apparatus are disclosed for producing very uniform coatings of a desired material on discrete microsized particles by electroless techniques. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with each other for a time sufficient for such to occur.
Conducting Multilevel Analyses in Medical Education
ERIC Educational Resources Information Center
Zyphur, Michael J.; Kaplan, Seth A.; Islam, Gazi; Barsky, Adam P.; Franklin, Michael S.
2008-01-01
A significant body of education literature has begun using multilevel statistical models to examine data that reside at multiple levels of analysis. In order to provide a primer for medical education researchers, the current work gives a brief overview of some issues associated with multilevel statistical modeling. To provide an example of this…
A Multilevel Assessment of Differential Item Functioning.
ERIC Educational Resources Information Center
Shen, Linjun
A multilevel approach was proposed for the assessment of differential item functioning and compared with the traditional logistic regression approach. Data from the Comprehensive Osteopathic Medical Licensing Examination for 2,300 freshman osteopathic medical students were analyzed. The multilevel approach used three-level hierarchical generalized…
Multilevel Interventions: Study Design and Analysis Issues
Gross, Cary P.; Zaslavsky, Alan M.; Taplin, Stephen H.
2012-01-01
Multilevel interventions, implemented at the individual, physician, clinic, health-care organization, and/or community level, increasingly are proposed and used in the belief that they will lead to more substantial and sustained changes in behaviors related to cancer prevention, detection, and treatment than would single-level interventions. It is important to understand how intervention components are related to patient outcomes and identify barriers to implementation. Designs that permit such assessments are uncommon, however. Thus, an important way of expanding our knowledge about multilevel interventions would be to assess the impact of interventions at different levels on patients as well as the independent and synergistic effects of influences from different levels. It also would be useful to assess the impact of interventions on outcomes at different levels. Multilevel interventions are much more expensive and complicated to implement and evaluate than are single-level interventions. Given how little evidence there is about the value of multilevel interventions, however, it is incumbent upon those arguing for this approach to do multilevel research that explicates the contributions that interventions at different levels make to the desired outcomes. Only then will we know whether multilevel interventions are better than more focused interventions and gain greater insights into the kinds of interventions that can be implemented effectively and efficiently to improve health and health care for individuals with cancer. This chapter reviews designs for assessing multilevel interventions and analytic ways of controlling for potentially confounding variables that can account for the complex structure of multilevel data. PMID:22623596
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.
Griebel, M.
1994-12-31
In recent years, it has turned out that many modern iterative algorithms (multigrid schemes, multilevel preconditioners, domain decomposition methods, etc.) for solving problems resulting from the discretization of PDEs can be interpreted as additive (Jacobi-like) or multiplicative (Gauss-Seidel-like) subspace correction methods. The key to their analysis is the study of certain metric properties of the underlying splitting of the discretization space V into a sum of subspaces V_j, j = 1, …, J, and correspondingly of the variational problem on V into auxiliary problems on these subspaces. Here, the author proposes a modified approach to the abstract convergence theory of these additive and multiplicative Schwarz iterative methods that makes the relation to traditional iteration methods more explicit. To this end he introduces the enlarged Hilbert space V = V_0 × … × V_J, which is simply the usual Cartesian product of the Hilbert spaces V_j, and uses it in the discretization process. This results in an enlarged, semidefinite linear system to be solved instead of the usual definite system. Then, modern multilevel methods as well as domain decomposition methods simplify to traditional (block-) iteration methods. Now the convergence analysis can be carried out directly for these traditional iterations on the enlarged system, making convergence proofs of multilevel and domain decomposition methods clearer, or at least more classical. The terms that enter the convergence proofs are exactly those of the classical iterative methods; it remains to estimate them properly. The convergence proof itself follows basically line by line the old proofs of the respective traditional iterative methods. Additionally, new multilevel/domain decomposition methods can be constructed straightforwardly by applying other well-known traditional iterative methods to the enlarged system.
A multilevel Cartesian non-uniform grid time domain algorithm
Meng Jun; Boag, Amir; Lomakin, Vitaliy; Michielssen, Eric
2010-11-01
A multilevel Cartesian non-uniform grid time domain algorithm (CNGTDA) is introduced to rapidly compute transient wave fields radiated by time dependent three-dimensional source constellations. CNGTDA leverages the observation that transient wave fields generated by temporally bandlimited and spatially confined source constellations can be recovered via interpolation from appropriately delay- and amplitude-compensated field samples. This property is used in conjunction with a multilevel scheme, in which the computational domain is hierarchically decomposed into subdomains with sparse non-uniform grids used to obtain the fields. For both surface and volumetric source distributions, the computational cost of CNGTDA to compute the transient field at N_s observation locations from N_s collocated sources for N_t discrete time instances scales as O(N_t N_s log N_s) and O(N_t N_s log² N_s) in the low- and high-frequency regimes, respectively. Coupled with marching-on-in-time (MOT) time domain integral equations, CNGTDA can facilitate efficient analysis of large scale time domain electromagnetic and acoustic problems.
Morris, J; Johnson, S
2007-12-03
The Distinct Element Method (also frequently referred to as the Discrete Element Method) (DEM) is a Lagrangian numerical technique where the computational domain consists of discrete solid elements which interact via compliant contacts. This can be contrasted with Finite Element Methods where the computational domain is assumed to represent a continuum (although many modern implementations of the FEM can accommodate some Distinct Element capabilities). Often the terms Discrete Element Method and Distinct Element Method are used interchangeably in the literature, although Cundall and Hart (1992) suggested that Discrete Element Methods should be a more inclusive term covering Distinct Element Methods, Displacement Discontinuity Analysis and Modal Methods. In this work, DEM specifically refers to the Distinct Element Method, where the discrete elements interact via compliant contacts, in contrast with Displacement Discontinuity Analysis where the contacts are rigid and all compliance is taken up by the adjacent intact material.
Multigrid and multilevel domain decomposition for unstructured grids
Chan, T.; Smith, B.
1994-12-31
Multigrid has proven itself to be a very versatile method for the iterative solution of linear and nonlinear systems of equations arising from the discretization of PDEs. In some applications, however, no natural multilevel structure of grids is available, and these must be generated as part of the solution procedure. In this presentation the authors consider the problem of generating a multigrid algorithm when only a fine, unstructured grid is given. Their techniques generate a sequence of coarser grids by first forming an approximate maximal independent set of the vertices and then applying a Cavendish-type algorithm to form the coarser triangulation. Numerical tests indicate that convergence using this approach can be as fast as standard multigrid on a structured mesh, at least in two dimensions.
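The first coarsening step above can be sketched with a greedy maximal-independent-set pass over the fine-grid vertices; this minimal Python illustration (with a made-up 2x3 mesh) omits the authors' Cavendish-type retriangulation step.

```python
def maximal_independent_set(adj):
    """Greedy maximal independent set of a graph given as an adjacency
    dict {vertex: set(neighbours)}.  The selected vertices serve as the
    coarse-grid points; every remaining vertex touches the selection."""
    selected, excluded = set(), set()
    for v in sorted(adj):          # deterministic sweep over fine-grid vertices
        if v not in excluded:
            selected.add(v)
            excluded |= adj[v]     # neighbours may not join the set
            excluded.add(v)
    return selected

# A 2x3 structured mesh treated as an unstructured graph, for illustration:
# 0 - 1 - 2
# |   |   |
# 3 - 4 - 5
adj = {0: {1, 3}, 1: {0, 2, 4}, 2: {1, 5},
       3: {0, 4}, 4: {1, 3, 5}, 5: {2, 4}}
coarse = maximal_independent_set(adj)
```

Independence guarantees the coarse points are well separated; maximality guarantees every fine vertex is within one edge of a coarse point, which is what makes the set usable as the next grid level.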
Synchronous Discrete Harmonic Oscillator
Antippa, Adel F.; Dubois, Daniel M.
2008-10-17
We introduce the synchronous discrete harmonic oscillator, and present an analytical, numerical and graphical study of its characteristics. The oscillator is synchronous when the time T for one revolution covering an angle of 2π in phase space is an integral multiple N of the discrete time step δt. It is fully synchronous when N is even. It is pseudo-synchronous when T/δt is rational. In the energy conserving hyperincursive representation, the phase space trajectories are perfectly stable at all time scales, and in both synchronous and pseudo-synchronous modes they cycle through a finite number of phase space points. Consequently, both the synchronous and the pseudo-synchronous hyperincursive modes of time-discretization provide a physically realistic and mathematically coherent procedure for dynamic, background independent, discretization of spacetime. The procedure is applicable to any stable periodic dynamical system, and provokes an intrinsic correlation between space and time, whereby space-discretization is a direct consequence of background-independent time-discretization. Hence, synchronous discretization moves the formalism of classical mechanics towards that of special relativity. The frequency of the hyperincursive discrete harmonic oscillator is "blue shifted" relative to its continuum counterpart. The frequency shift has the precise value needed to make the speed of the system point in phase space independent of the discretizing time interval δt. That is, the speed of the system point is the same on the polygonal (in the discrete case) and the circular (in the continuum case) phase space trajectories.
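A minimal sketch of synchronous discretization, using the standard symplectic (semi-implicit) Euler oscillator as a stand-in for the paper's hyperincursive scheme: choosing δt so that one step rotates phase space by exactly 2π/N makes the polygonal orbit close after N steps. The one-step map has trace 2 - ω²δt², so its rotation angle θ satisfies cos θ = 1 - ω²δt²/2, and δt = (2/ω) sin(π/N) gives θ = 2π/N.

```python
import math

def step(x, v, omega, dt):
    # symplectic (semi-implicit) Euler update for x'' = -omega^2 x
    x = x + dt * v
    v = v - omega**2 * dt * x
    return x, v

def orbit_is_synchronous(omega, N):
    """Choose dt so the one-step map rotates phase space by exactly 2*pi/N;
    the trajectory then cycles through N points and closes exactly."""
    dt = (2.0 / omega) * math.sin(math.pi / N)
    x, v = 1.0, 0.0
    for _ in range(N):
        x, v = step(x, v, omega, dt)
    return abs(x - 1.0) < 1e-9 and abs(v) < 1e-9

closed = orbit_is_synchronous(omega=1.0, N=12)
```

The same calculation also exhibits the "blue shift": the discrete angular frequency θ/δt = (2π/N)/((2/ω) sin(π/N)) exceeds ω, approaching it only as N grows.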
Comparison of Estimation Procedures for Multilevel AR(1) Models.
Krone, Tanja; Albers, Casper J; Timmerman, Marieke E
2016-01-01
To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in Multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures. PMID:27242559
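A minimal simulation sketch of the "fixed" (per-individual) estimation approach under assumed parameter values drawn from the design above; the multilevel MLE and Bayesian estimators that the paper actually compares are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_time = 50, 200
mu_phi, sd_phi = 0.3, 0.15          # population mean / SD of the autocorrelation

est = []
for _ in range(n_persons):
    # individual autocorrelation drawn from the population distribution
    phi = float(np.clip(rng.normal(mu_phi, sd_phi), -0.9, 0.9))
    y = np.zeros(n_time)
    for t in range(1, n_time):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    # per-person lag-1 least-squares estimate ("fixed" approach)
    est.append(float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])))

mean_phi_hat = float(np.mean(est))
```

Averaging the per-person estimates recovers the population mean autocorrelation only roughly; the paper's point is that pooling individuals through random effects does this with less bias and higher power, especially for short series.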
Carlsten, B.E.; Haynes, W.B.
1996-08-01
The authors theoretically and numerically investigate the operation and behavior of the discrete monotron oscillator, a novel high-power microwave source. The discrete monotron differs from conventional monotrons and transit time oscillators by shielding the electron beam from the monotron cavity's RF fields except at two distinct locations. This makes the discrete monotron act more like a klystron than a distributed traveling wave device. As a result, the oscillator has higher efficiency and can operate with higher beam powers than other single cavity oscillators, and it operates more stably than mildly relativistic, intense-beam klystron oscillators without requiring a seed input signal.
Multilevel Latent Class Models with Dirichlet Mixing Distribution
Di, Chong-Zhi; Bandeen-Roche, Karen
2010-01-01
Summary Latent class analysis (LCA) and latent class regression (LCR) are widely used for modeling multivariate categorical outcomes in social science and biomedical studies. Standard analyses assume data of different respondents to be mutually independent, excluding application of the methods to familial and other designs in which participants are clustered. In this paper, we consider multilevel latent class models, in which subpopulation mixing probabilities are treated as random effects that vary among clusters according to a common Dirichlet distribution. We apply the Expectation-Maximization (EM) algorithm for model fitting by maximum likelihood (ML). This approach works well, but is computationally intensive when either the number of classes or the cluster size is large. We propose a maximum pairwise likelihood (MPL) approach via a modified EM algorithm for this case. We also show that a simple latent class analysis, combined with robust standard errors, provides another consistent, robust, but less efficient inferential procedure. Simulation studies suggest that the three methods work well in finite samples, and that the MPL estimates often enjoy precision comparable to the ML estimates. We apply our methods to the analysis of comorbid symptoms in the Obsessive Compulsive Disorder study. Our models' random effects structure has a more straightforward interpretation than those of competing methods, and thus should usefully augment tools available for latent class analysis of multilevel data. PMID:20560936
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1989-03-21
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration. 6 figs.
Planarization of metal films for multilevel interconnects
Tuckerman, David B.
1987-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, David B.
1989-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1985-08-23
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1985-06-24
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Bound-state eigenenergy outside and inside the continuum for unstable multilevel systems
NASA Astrophysics Data System (ADS)
Miyamoto, Manabu
2005-12-01
The eigenvalue problem for the dressed bound state of unstable multilevel systems is examined both outside and inside the continuum, based on the N-level Friedrichs model, which describes the couplings between the discrete levels and the continuous spectrum. It is shown that a bound-state eigenenergy always exists below each of the discrete levels that lie outside the continuum. Furthermore, by strengthening the couplings gradually, the eigenenergy corresponding to each of the discrete levels inside the continuum finally emerges. On the other hand, the absence of the eigenenergy inside the continuum is proved in weak but finite coupling regimes, provided that each of the form factors that determine the transition between some definite level and the continuum does not vanish at that energy level. An application to the spontaneous emission process for the hydrogen atom interacting with the electromagnetic field is demonstrated.
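For intuition, the simplest (N = 1) instance of the Friedrichs model can be written out; the Hamiltonian and bound-state condition below are the standard single-level forms, shown only as a hedged illustration of the structure the abstract refers to (the paper treats the general N-level case).

```latex
% Single-level Friedrichs model: one discrete level \omega_1 coupled to a
% continuum through a form factor f_1(\omega) with coupling strength \lambda.
H = \omega_1\,|1\rangle\langle 1|
  + \int_0^{\infty}\!\omega\,|\omega\rangle\langle\omega|\,d\omega
  + \lambda\int_0^{\infty}\!\bigl(f_1(\omega)\,|1\rangle\langle\omega|
      + \mathrm{h.c.}\bigr)\,d\omega ,
\qquad
z = \omega_1 + \lambda^{2}\!\int_0^{\infty}\!
      \frac{|f_1(\omega)|^{2}}{z-\omega}\,d\omega .
```

A real solution z of the second equation below the continuum is the dressed bound-state eigenenergy; a solution embedded in the continuum can occur only where the form factor vanishes, consistent with the absence result stated in the abstract.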
ERIC Educational Resources Information Center
Peters, James V.
2004-01-01
Using the methods of finite difference equations, the discrete analogues of the parabolic and catenary cables are analysed. The Fibonacci numbers and the golden ratio arise in the treatment of the catenary.
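One hedged illustration of how the golden ratio can emerge from such a difference equation: the second-order recurrence y[n+1] = 3 y[n] - y[n-1] (a discrete catenary-type equation chosen for this sketch; the paper's exact equation may differ) has characteristic root φ² = (3 + √5)/2, and with starting values 1, 2 it generates the odd-indexed Fibonacci numbers.

```python
import math

def catenary_sequence(n):
    """First n terms of y[n+1] - 3*y[n] + y[n-1] = 0 with y0 = 1, y1 = 2.
    The characteristic equation r^2 - 3r + 1 = 0 has roots (3 +/- sqrt(5))/2,
    the larger of which is the square of the golden ratio."""
    y = [1, 2]
    for _ in range(n - 2):
        y.append(3 * y[-1] - y[-2])
    return y

seq = catenary_sequence(10)        # 1, 2, 5, 13, 34, ... (odd-indexed Fibonacci)
ratio = seq[-1] / seq[-2]          # converges to phi**2
phi = (1 + math.sqrt(5)) / 2
```

The successive-term ratio converges geometrically to φ², which is one concrete way the golden ratio surfaces in a discrete cable problem.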
Discretizations of axisymmetric systems
NASA Astrophysics Data System (ADS)
Frauendiener, Jörg
2002-11-01
In this paper we discuss stability properties of various discretizations for axisymmetric systems including the so-called cartoon method which was proposed by Alcubierre et al. for the simulation of such systems on Cartesian grids. We show that within the context of the method of lines such discretizations tend to be unstable unless one takes care in the way individual singular terms are treated. Examples are given for the linear axisymmetric wave equation in flat space.
Scalable Adaptive Multilevel Solvers for Multiphysics Problems
Xu, Jinchao
2014-12-01
In this project, we investigated adaptive, parallel, and multilevel methods for numerical modeling of various real-world applications, including magnetohydrodynamics (MHD), complex fluids, electromagnetism, Navier-Stokes equations, and reservoir simulation. First, we designed improved mathematical models and numerical discretizations for viscoelastic fluids and MHD. Second, we derived new a posteriori error estimators and extended the applicability of adaptivity to various problems. Third, we developed multilevel solvers for solving scalar partial differential equations (PDEs) as well as coupled systems of PDEs, especially on unstructured grids. Moreover, we integrated the study of adaptive and multilevel methods, and made significant advances in adaptive multilevel methods for multiphysics problems.
Bond, Stephen D.
2014-01-01
The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.
NASA Astrophysics Data System (ADS)
Cusini, Matteo; van Kruijsdijk, Cor; Hajibeygi, Hadi
2016-06-01
This paper presents the development of an algebraic dynamic multilevel method (ADM) for fully implicit simulations of multiphase flow in homogeneous and heterogeneous porous media. Built on the fine-scale fully implicit (FIM) discrete system, ADM constructs a multilevel FIM system describing the coupled process on a dynamically defined grid of hierarchical nested topology. The multilevel adaptive resolution is determined at each time step on the basis of an error criterion. Once the grid resolution is established, ADM employs sequences of restriction and prolongation operators in order to map the FIM system across the considered resolutions. Several choices can be considered for the prolongation (interpolation) operators, e.g., constant, bilinear and multiscale basis functions, all of which form a partition of unity. The adaptive multilevel restriction operators, on the other hand, are constructed using a finite-volume scheme. This ensures mass conservation of the ADM solutions, and as such, the stability and accuracy of the simulations with multiphase transport. For several homogeneous and heterogeneous test cases, it is shown that ADM applies only a small fraction of the full FIM fine-scale grid cells in order to provide accurate solutions. The sensitivity of the solutions with respect to the employed fraction of grid cells (determined automatically based on the threshold value of the error criterion) is investigated for all test cases. ADM is a significant step forward in the application of dynamic local grid refinement methods, in the sense that it is algebraic, allows for systematic mapping across different scales, and is applicable to heterogeneous test cases without any upscaling of fine-scale high resolution quantities. It also develops a novel multilevel multiscale method for FIM multiphase flow simulations in natural subsurface formations.
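A toy sketch of the algebraic mapping described above, on an assumed 4-cell fine-scale system (the matrix, sizes, and operators are illustrative, not the paper's): the restriction is a finite-volume summation over pairs of fine cells and the prolongation a constant interpolation whose rows form a partition of unity.

```python
import numpy as np

# Illustrative fine-scale 1D diffusion system on 4 cells, standing in for
# the fully implicit (FIM) fine-scale operator.
A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])

# Restriction: finite-volume summation of pairs of fine cells (conservative).
R = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])

# Prolongation: constant interpolation back to the fine cells.
P = R.T

A_coarse = R @ A @ P                   # Galerkin-style coarse-level operator
unity = P.sum(axis=1)                  # each fine cell receives total weight 1

b = np.ones(4)
b_coarse = R @ b                       # summed (mass-conserving) right-hand side
```

Because R sums fluxes over coarse cells and P's rows sum to one, the coarse system inherits the conservation property the abstract attributes to the finite-volume restriction.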
Alternative Methods for Assessing Mediation in Multilevel Data: The Advantages of Multilevel SEM
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zhang, Zhen; Zyphur, Michael J.
2011-01-01
Multilevel modeling (MLM) is a popular way of assessing mediation effects with clustered data. Two important limitations of this approach have been identified in prior research and a theoretical rationale has been provided for why multilevel structural equation modeling (MSEM) should be preferred. However, to date, no empirical evidence of MSEM's…
Discrete Newtonian cosmology: perturbations
NASA Astrophysics Data System (ADS)
Ellis, George F. R.; Gibbons, Gary W.
2015-03-01
In a previous paper (Gibbons and Ellis 2014 Discrete Newtonian cosmology Class. Quantum Grav. 31 025003), we showed how a finite system of discrete particles interacting with each other via Newtonian gravitational attraction would lead to precisely the same dynamical equations for homothetic motion as in the case of the pressure-free Friedmann-Lemaître-Robertson-Walker cosmological models of general relativity theory, provided the distribution of particles obeys the central configuration equation. In this paper we show that one can obtain perturbed such Newtonian solutions that give the same linearized structure growth equations as in the general relativity case. We also obtain the Dmitriev-Zel’dovich equations for subsystems in this discrete gravitational model, and show how it leads to the conclusion that voids have an apparent negative mass.
NASA Astrophysics Data System (ADS)
Barbiero, Alessandro
2015-12-01
Researchers in applied sciences are often concerned with multivariate random variables. In particular, multivariate discrete data arise in many fields (statistical quality control, biostatistics, failure analysis, etc.). Here we consider the discrete Weibull distribution as an alternative to the popular Poisson random variable and propose a procedure for simulating correlated discrete Weibull random variables, with marginal distributions and correlation matrix assigned by the user. The procedure relies upon the Gaussian copula model and an iterative algorithm for recovering the copula correlation matrix that ensures the desired correlation matrix on the discrete margins. A simulation study is presented, which empirically shows the performance of the procedure.
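As a rough illustration of the procedure described above, the following Python sketch draws correlated type-I discrete Weibull variates through a Gaussian copula. The function names, the closed-form quantile function, and the fixed copula correlation are illustrative assumptions; the paper's iterative correction of the copula correlation matrix (so that the discrete margins attain the target correlation) is deliberately omitted.

```python
import math
import numpy as np

def norm_cdf(z):
    # Standard normal CDF via the error function (no SciPy dependency)
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

def discrete_weibull_ppf(u, q, beta):
    # Quantile function of the type-I discrete Weibull with
    # survival function P(X >= x) = q**(x**beta), x = 0, 1, 2, ...
    # Returns the smallest integer x with F(x) = 1 - q**((x + 1)**beta) >= u.
    x = np.ceil((np.log1p(-u) / np.log(q)) ** (1.0 / beta) - 1.0)
    return np.maximum(x, 0).astype(int)

def simulate_correlated_dw(n, q, beta, rho, seed=0):
    # rho is the Gaussian copula correlation matrix; the paper adjusts it
    # iteratively so the *discrete margins* hit a target correlation --
    # that correction step is not reproduced in this sketch.
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(q)), rho, size=n)
    u = norm_cdf(z)  # probability integral transform to uniform margins
    return np.column_stack(
        [discrete_weibull_ppf(u[:, j], q[j], beta[j]) for j in range(len(q))]
    )
```

The copula step is what ties the margins together: the multivariate normal supplies the dependence, and each coordinate is pushed through its own discrete Weibull quantile function.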
Liu, Xuzhou; Wang, Hehui; Zhou, Zhilai; Jin, Anmin
2014-02-01
The optimal surgical strategy for anterior or posterior approaches remains controversial for multilevel cervical compressive myelopathy caused by multisegment cervical spondylotic myelopathy (MCSM) or ossification of the posterior longitudinal ligament (OPLL). A systematic review and meta-analysis was conducted evaluating the clinical results of anterior decompression and fusion (ADF) compared with posterior laminoplasty for patients with multilevel cervical compressive myelopathy. PubMed, Embase, and the Cochrane Library were searched for randomized controlled trials and nonrandomized cohort studies conducted from 1990 to May 2013 comparing ADF with posterior laminoplasty for the treatment of multilevel cervical compressive myelopathy due to MCSM or OPLL. The following outcome measures were extracted: Japanese Orthopedic Association (JOA) score, recovery rate, complication rate, reoperation rate, blood loss, and operative time. Subgroup analysis was conducted according to the mean number of surgical segments. Eleven studies were included in the review, all of which were prospective or retrospective cohort studies with relatively low quality indicated by GRADE Working Group assessment. A definitive conclusion could not be reached regarding which surgical approach is more effective for the treatment of multilevel cervical compressive myelopathy. Although ADF was associated with better postoperative neural function than posterior laminoplasty in the treatment of multilevel cervical compressive myelopathy due to MCSM or OPLL, there was no apparent difference in the neural function recovery rate between the 2 approaches. Higher rates of surgery-related complication and reoperation should be taken into consideration when ADF is used for patients with multilevel cervical compressive myelopathy. The surgical trauma associated with corpectomy was significantly higher than that associated with posterior laminoplasty. PMID:24679196
Statistical power of multilevel modelling in dental caries clinical trials: a simulation study.
Burnside, G; Pine, C M; Williamson, P R
2014-01-01
Outcome data from dental caries clinical trials have a naturally hierarchical structure, with surfaces clustered within teeth, clustered within individuals. Data are often aggregated into the DMF index for each individual, losing tooth- and surface-specific information. If these data are to be analysed by tooth or surface, allowing exploration of effects of interventions on different teeth and surfaces, appropriate methods must be used to adjust for the clustered nature of the data. Multilevel modelling allows analysis of clustered data using individual observations without aggregating data, and has been little used in the field of dental caries. A simulation study was conducted to investigate the performance of multilevel modelling methods and standard caries increment analysis. Data sets were simulated from a three-level binomial distribution based on analysis of a caries clinical trial in Scottish adolescents, with varying sample sizes, treatment effects and random tooth level effects based on trials reported in Cochrane reviews of topical fluoride, and analysed to compare the power of multilevel models and traditional analysis. 40,500 data sets were simulated. Analysis showed that estimated power for the traditional caries increment method was similar to that for multilevel modelling, with more variation in smaller data sets. Multilevel modelling may not allow significant reductions in the number of participants required in a caries clinical trial, compared to the use of traditional analyses, but investigators interested in exploring the effect of their intervention in more detail may wish to consider the application of multilevel modelling to their clinical trial data. PMID:24216573
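The simulation design sketched above can be caricatured in a few lines of Python: binary surface outcomes nested in teeth nested in subjects, with normal random effects on the logit scale. All effect sizes, cluster sizes, and the seed below are illustrative assumptions, not the values fitted from the Scottish trial or the Cochrane reviews.

```python
import numpy as np

def simulate_trial(n_subj=100, n_teeth=28, n_surf=5,
                   base_logit=-2.0, treat_effect=-0.3,
                   sd_subj=0.8, sd_tooth=0.5, seed=42):
    """Three-level binary caries outcomes: surfaces within teeth within
    subjects. Returns rows of (arm, decayed-surface count per tooth)."""
    rng = np.random.default_rng(seed)
    rows = []
    for arm, effect in ((0, 0.0), (1, treat_effect)):
        for _ in range(n_subj):
            u = rng.normal(0.0, sd_subj)          # subject-level random effect
            for _ in range(n_teeth):
                v = rng.normal(0.0, sd_tooth)     # tooth-level random effect
                p = 1.0 / (1.0 + np.exp(-(base_logit + effect + u + v)))
                rows.append((arm, rng.binomial(n_surf, p)))
    return np.array(rows)

data = simulate_trial()
# A DMFS-style aggregate per arm, as in the traditional increment analysis;
# a multilevel analysis would instead model the tooth-level counts directly.
dmfs_per_arm = {arm: data[data[:, 0] == arm, 1].sum() for arm in (0, 1)}
```

Repeating such simulations and counting rejections for each analysis method is the standard route to the empirical power comparison the study reports.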
NASA Astrophysics Data System (ADS)
Arzano, Michele; Kowalski-Glikman, Jerzy
2016-09-01
We construct discrete symmetry transformations for deformed relativistic kinematics based on group valued momenta. We focus on the specific example of κ-deformations of the Poincaré algebra with associated momenta living on (a sub-manifold of) de Sitter space. Our approach relies on the description of quantum states constructed from deformed kinematics and the observable charges associated with them. The results we present provide the first step towards the analysis of experimental bounds on the deformation parameter κ to be derived via precision measurements of discrete symmetries and CPT.
Discrete breathers in crystals
NASA Astrophysics Data System (ADS)
Dmitriev, S. V.; Korznikova, E. A.; Baimova, Yu A.; Velarde, M. G.
2016-05-01
It is well known that periodic discrete defect-containing systems, in addition to traveling waves, support vibrational defect-localized modes. It turned out that if a periodic discrete system is nonlinear, it can support spatially localized vibrational modes as exact solutions even in the absence of defects. Since the nodes of the system are all on equal footing, it is only through the special choice of initial conditions that a group of nodes can be found on which such a mode, called a discrete breather (DB), will be excited. The DB frequency must be outside the frequency range of the small-amplitude traveling waves. Not resonating with and expending no energy on the excitation of traveling waves, a DB can theoretically conserve its vibrational energy forever provided no thermal vibrations or other perturbations are present. Crystals are nonlinear discrete systems, and the discovery in them of DBs was only a matter of time. Experimental studies of DBs encounter major technical difficulties, leaving atomistic computer simulations as the primary investigation tool. Despite
Students' Misconceptions about Random Variables
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2012-01-01
This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)
Multilevel intervention research: lessons learned and pathways forward.
Clauser, Steven B; Taplin, Stephen H; Foster, Mary K; Fagan, Pebbles; Kaluzny, Arnold D
2012-05-01
This summary reflects on this monograph regarding multilevel intervention (MLI) research to 1) assess its added value; 2) discuss what has been learned to date about its challenges in cancer care delivery; and 3) identify specific ways to improve its scientific soundness, feasibility, policy relevance, and research agenda. The 12 submitted chapters, and discussion of them at the March 2011 multilevel meeting, were reviewed and discussed among the authors to elicit key findings and results addressing the questions raised at the outset of this effort. MLI research is underrepresented as an explicit focus in the cancer literature but may improve implementation of studies of cancer care delivery if they assess contextual, organizational, and environmental factors important to understanding behavioral and/or system-level interventions. The field lacks a single unifying theory, although several psychological or biological theories are useful, and an ecological model helps conceptualize and communicate interventions. MLI research designs are often complex, involving nonlinear and nonhierarchical relationships that may not be optimally studied in randomized designs. Simulation modeling and pilot studies may be necessary to evaluate MLI interventions. Measurement and evaluation of team and organizational interventions are especially needed in cancer care, as are attention to the context of health-care reform, eHealth technology, and genomics-based medicine. Future progress in MLI research requires greater attention to developing and supporting relevant metrics of level effects and interactions and evaluating MLI interventions. MLI research holds an unrealized promise for understanding how to improve cancer care delivery. PMID:22623606
GPU-based Multilevel Clustering.
Chiosa, Iurie; Kolb, Andreas
2010-04-01
The processing power of parallel co-processors like the Graphics Processing Unit (GPU) is dramatically increasing. However, up until now only a few approaches have been presented to utilize this kind of hardware for mesh clustering purposes. In this paper we introduce a multilevel clustering technique designed as a parallel algorithm and solely implemented on the GPU. Our formulation uses the spatial coherence present in the cluster optimization and hierarchical cluster merging to significantly reduce the number of comparisons in both parts. Our approach provides a fast, high-quality and complete clustering analysis. Furthermore, based on the original concept we present a generalization of the method to data clustering. All advantages of the mesh-based techniques smoothly carry over to the generalized clustering approach. Additionally, this approach solves the problem of the missing topological information inherent to general data clustering and leads to a Local Neighbors k-means algorithm. We evaluate both techniques by applying them to Centroidal Voronoi Diagram (CVD) based clustering. Compared to classical approaches, our techniques generate results with at least the same clustering quality. Our technique proves to scale very well, currently being limited only by the available amount of graphics memory. PMID:20421676
Multilevel Complex Networks and Systems
NASA Astrophysics Data System (ADS)
Caldarelli, Guido
2014-03-01
Network theory has been a powerful tool to model isolated complex systems. However, the classical approach does not take into account the interactions often present among different systems. Hence, the scientific community is nowadays concentrating its efforts on the foundations of new mathematical tools for understanding what happens when multiple networks interact. The case of economic and financial networks represents a paramount example of multilevel networks. In the case of trade among countries, the different levels can be described by the different granularity of the trading relations. Indeed, we now have data from the scale of consumers to that of the country level. In the case of financial institutions, we have a variety of levels at the same scale: for example, one bank can appear simultaneously in interbank networks, ownership networks, and CDS networks. In both cases the systemically important vertices need to be determined by different procedures of centrality definition and community detection. In this talk I will present some specific cases of study related to these topics and present the regularities found. Acknowledged support from EU FET Project ``Multiplex'' 317532.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
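The telescoping identity underlying MLMC, E[gL] = E[g0] + Σl E[gl − gl−1], can be sketched for a toy problem — the mean of a geometric Brownian motion under Euler discretisation with step 2^(−l) — in Python. This is plain MLMC with i.i.d. sampling at each level, i.e. exactly the setting whose unavailability motivates the paper's SMC variant; all parameter values are illustrative assumptions, not the paper's PDE setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_level(level, n, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """One MLMC level for E[X_T] of dX = mu X dt + sigma X dW under Euler
    discretisation with 2**level steps. Returns per-sample g_0 at level 0,
    and the coupled difference g_l - g_{l-1} otherwise."""
    nf = 2 ** level
    dt = T / nf
    dw = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    xf = np.full(n, x0)
    for k in range(nf):                          # fine path
        xf = xf + mu * xf * dt + sigma * xf * dw[:, k]
    if level == 0:
        return xf
    xc = np.full(n, x0)
    for k in range(nf // 2):                     # coarse path reuses fine increments
        dwc = dw[:, 2 * k] + dw[:, 2 * k + 1]
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dwc
    return xf - xc

# Telescoping estimator: E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}],
# with fewer samples at the (expensive) finer levels.
estimate = sum(mlmc_level(l, 20000 >> l).mean() for l in range(5))
```

The coupling (the coarse path consuming summed fine-path increments) is what makes the level-difference variance shrink with the step size, which is where the computational saving comes from.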
Applications of cascade multilevel inverters.
Peng, Fang-zen; Qian, Zhao-ming
2003-01-01
Cascade multilevel inverters have been developed for electric utility applications. A cascade M-level inverter consists of (M-1)/2 H-bridges in which each bridge's dc voltage is supported by its own dc capacitor. The new inverter can: (1) generate almost sinusoidal waveform voltage while only switching one time per fundamental cycle; (2) dispense with the multi-pulse inverters' transformers used in conventional utility interfaces and static var compensators; (3) enable direct parallel or series transformer-less connection to medium- and high-voltage power systems. In short, the cascade inverter is much more efficient and suitable for utility applications than traditional multi-pulse and pulse width modulation (PWM) inverters. The authors have experimentally demonstrated the superiority of the new inverter for power supply, (hybrid) electric vehicle (EV) motor drive, reactive power (var) and harmonic compensation. This paper summarizes the features, feasibility, and control schemes of the cascade inverter for utility applications including utility interface of renewable energy, voltage regulation, var compensation, and harmonic filtering in power systems. Analytical, simulated, and experimental results demonstrated the superiority of the new inverters. PMID:14566981
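Feature (1) above — an almost sinusoidal staircase with a single switching per bridge per fundamental cycle — can be sketched numerically for an s-bridge cascade (2s+1 output levels). The nearest-level switching-angle rule arcsin((k−1/2)/s) used below is an illustrative simplification; practical designs optimise the angles for selective harmonic elimination.

```python
import numpy as np

def cascade_staircase(t, s=3, vdc=1.0, f=50.0):
    """Phase voltage of a cascade inverter with s H-bridges (2s+1 levels).
    Each bridge switches once per fundamental cycle at angle
    arcsin((k - 1/2) / s), contributing +vdc on (a, pi - a) and -vdc on
    (pi + a, 2*pi - a)."""
    theta = np.mod(2 * np.pi * f * np.asarray(t, dtype=float), 2 * np.pi)
    angles = np.arcsin((np.arange(s) + 0.5) / s)   # one switching angle per bridge
    v = np.zeros_like(theta)
    for a in angles:
        v += vdc * ((theta > a) & (theta < np.pi - a))              # positive half
        v -= vdc * ((theta > np.pi + a) & (theta < 2 * np.pi - a))  # negative half
    return v
```

Summing the staggered square waves is exactly why the line voltage approximates a sinusoid while each device switches only at fundamental frequency.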
ERIC Educational Resources Information Center
Sharp, Karen Tobey
This paper cites information received from a number of sources, e.g., mathematics teachers in two-year colleges, publishers, and convention speakers, about the nature of discrete mathematics and about what topics a course in this subject should contain. Note is taken of the book edited by Ralston and Young which discusses the future of college…
Momentum conservation in Multi-Level Multi-Domain (MLMD) simulations
NASA Astrophysics Data System (ADS)
Innocenti, M. E.; Beck, A.; Markidis, S.; Lapenta, G.
2016-05-01
Momentum conservation and self-force reduction are challenges for all Particle-In-Cell (PIC) codes using spatial discretization schemes which do not fulfill the requirement of translational invariance of the grid Green's function. We comment here on this topic as applied to the recently developed Multi-Level Multi-Domain (MLMD) method. The MLMD is a semi-implicit method for PIC plasma simulations. The multi-scale nature of plasma processes is addressed by using grids with different spatial resolutions in different parts of the domain.
Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie
2016-03-01
In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, which are the processed data capped at speed limit and the unprocessed data retaining the original speed were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model and the models with random parameters achieved the best model fitting. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant including auxiliary lanes and horizontal curvature. PMID:26722989
A multilevel preconditioner for domain decomposition boundary systems
Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.
1991-12-11
In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It will be shown that the resulting preconditioned systems have condition numbers which are bounded in the case of multilevel spaces on the whole domain, and which grow at most proportionally to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.
Wong, May C M; Lam, K F; Lo, Edward C M
2006-02-15
In some controlled clinical trials in dental research, multiple failure time data from the same patient are frequently observed, resulting in clustered multiple failure times. Moreover, the treatments are often delivered by more than one operator, and thus the multiple failure times are clustered according to a multilevel structure when the operator effects are assumed to be random. In practice, it is often too expensive or even impossible to monitor the study subjects continuously; instead, they are examined periodically at regular pre-scheduled visits. Hence, discrete or grouped clustered failure time data are collected. The aim of this paper is to illustrate the use of the Markov chain Monte Carlo (MCMC) approach and non-informative priors in a Bayesian framework to mimic the maximum likelihood (ML) estimation of a frequentist approach in multilevel modelling of clustered grouped survival data. A three-level model with an additive variance components structure for the random effects is considered in this paper. Both the grouped proportional hazards model and the dynamic logistic regression model are used. The approximate intra-cluster correlation of the log failure times can be estimated when the grouped proportional hazards model is used. The statistical package WinBUGS is adopted to estimate the parameters of interest based on the MCMC method. The models and method are applied to a data set obtained from a prospective clinical study on a cohort of Chinese school children in which atraumatic restorative treatment (ART) restorations were placed on permanent teeth with carious lesions. Altogether 284 ART restorations were placed by five dentists and the clinical status of the ART restorations was evaluated annually for 6 years after placement; thus clustered grouped failure times of the restorations were recorded. Results based on the grouped proportional hazards model revealed that the clustering effect among the log failure times of the different restorations from the same child was
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness of the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
Winkler, Anderson M.; Webster, Matthew A.; Vidaurre, Diego; Nichols, Thomas E.; Smith, Stephen M.
2015-01-01
Under weak and reasonable assumptions, mainly that data are exchangeable under the null hypothesis, permutation tests can provide exact control of false positives and allow the use of various non-standard statistics. There are, however, various common examples in which global exchangeability can be violated, including paired tests, tests that involve repeated measurements, tests in which subjects are relatives (members of pedigrees) — any dataset with known dependence among observations. In these cases, some permutations, if performed, would create data that would not possess the original dependence structure, and thus, should not be used to construct the reference (null) distribution. To allow permutation inference in such cases, we test the null hypothesis using only a subset of all otherwise possible permutations, i.e., using only the rearrangements of the data that respect exchangeability, thus retaining the original joint distribution unaltered. In a previous study, we defined exchangeability for blocks of data, as opposed to each datum individually, then allowing permutations to happen within block, or the blocks as a whole to be permuted. Here we extend that notion to allow blocks to be nested, in a hierarchical, multi-level definition. We do not explicitly model the degree of dependence between observations, only the lack of independence; the dependence is implicitly accounted for by the hierarchy and by the permutation scheme. The strategy is compatible with heteroscedasticity and variance groups, and can be used with permutations, sign flippings, or both combined. We evaluate the method for various dependence structures, apply it to real data from the Human Connectome Project (HCP) as an example application, show that false positives can be avoided in such cases, and provide a software implementation of the proposed approach. PMID:26074200
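The within-block restriction described above can be sketched in Python: labels are shuffled only inside each exchangeability block, so the reference distribution never contains rearrangements that break the known dependence. The block structure, test statistic, and effect size below are illustrative assumptions; the authors' actual implementation additionally supports nested blocks, whole-block permutation, sign flipping, and variance groups.

```python
import numpy as np

rng = np.random.default_rng(7)

def permute_within_blocks(labels, blocks):
    """Shuffle entries only within each exchangeability block, leaving the
    between-block structure (and hence the joint dependence) intact."""
    out = labels.copy()
    for b in np.unique(blocks):
        idx = np.where(blocks == b)[0]
        out[idx] = labels[rng.permutation(idx)]
    return out

def perm_test(y, group, blocks, n_perm=999):
    """Permutation p-value for a two-group mean difference, using only
    within-block rearrangements of the group labels."""
    obs = y[group == 1].mean() - y[group == 0].mean()
    hits = 0
    for _ in range(n_perm):
        g = permute_within_blocks(group, blocks)
        hits += abs(y[g == 1].mean() - y[g == 0].mean()) >= abs(obs)
    return (hits + 1) / (n_perm + 1)

# Toy data: 10 blocks (e.g. families) of 4 observations, 2 per group,
# with a shared block effect plus a genuine group effect of 3.
blocks = np.repeat(np.arange(10), 4)
group = np.tile([0, 0, 1, 1], 10)
y = 3.0 * group + rng.normal(0, 1, 40) + np.repeat(rng.normal(0, 2, 10), 4)
p_value = perm_test(y, group, blocks)
```

Because each shuffle respects the blocks, the family-level dependence is carried unchanged into every element of the null distribution, which is what keeps the false positive rate controlled.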
Incorporating Mobility in Growth Modeling for Multilevel and Longitudinal Item Response Data.
Choi, In-Hee; Wilson, Mark
2016-01-01
Multilevel data often cannot be represented by the strict form of hierarchy typically assumed in multilevel modeling. A common example is the case in which subjects change their group membership in longitudinal studies (e.g., students transfer schools; employees transition between different departments). In this study, cross-classified and multiple membership models for multilevel and longitudinal item response data (CCMM-MLIRD) are developed to incorporate such mobility, focusing on students' school change in large-scale longitudinal studies. Furthermore, we investigate the effect of incorrectly modeling school membership in the analysis of multilevel and longitudinal item response data. Two types of school mobility are described, and corresponding models are specified. Results of the simulation studies suggested that appropriate modeling of the two types of school mobility using the CCMM-MLIRD yielded good recovery of the parameters and improvement over models that did not incorporate mobility properly. In addition, the consequences of incorrectly modeling the school effects on the variance estimates of the random effects and the standard errors of the fixed effects depended upon mobility patterns and model specifications. Two sets of large-scale longitudinal data are analyzed to illustrate applications of the CCMM-MLIRD for each type of school mobility. PMID:26881961
Automatic multilevel medical image annotation and retrieval.
Mueen, A; Zainuddin, R; Baba, M Sapiyan
2008-09-01
Image retrieval at the semantic level mostly depends on image annotation or image classification. Image annotation performance largely depends on three issues: (1) automatic image feature extraction; (2) semantic image concept modeling; (3) the algorithm for semantic image annotation. To address the first issue, multilevel features are extracted to construct the feature vector, which represents the contents of the image. To address the second issue, a domain-dependent concept hierarchy is constructed for the interpretation of image semantic concepts. To address the third issue, automatic multilevel code generation is proposed for image classification and multilevel image annotation. We make use of the existing image annotation to address the second and third issues. Our experiments on a specific domain of X-ray images have given encouraging results. PMID:17846834
The discrete regime of flame propagation
NASA Astrophysics Data System (ADS)
Tang, Francois-David; Goroshin, Samuel; Higgins, Andrew
The propagation of laminar dust flames in iron dust clouds was studied in a low-gravity environment on-board a parabolic flight aircraft. The elimination of buoyancy-induced convection and particle settling permitted measurements of fundamental combustion parameters such as the burning velocity and the flame quenching distance over a wide range of particle sizes and in different gaseous mixtures. The discrete regime of flame propagation was observed by substituting nitrogen present in air with xenon, an inert gas with a significantly lower heat conductivity. Flame propagation in the discrete regime is controlled by the heat transfer between neighboring particles, rather than by the particle burning rate used by traditional continuum models of heterogeneous flames. The propagation mechanism of discrete flames depends on the spatial distribution of particles, and thus such flames are strongly influenced by local fluctuations in the fuel concentration. Constant pressure laminar dust flames were observed inside 70 cm long, 5 cm diameter Pyrex tubes. Equally-spaced plate assemblies forming rectangular channels were placed inside each tube to determine the quenching distance, defined as the minimum channel width through which a flame can successfully propagate. High-speed video cameras were used to measure the flame speed and a fiber optic spectrometer was used to measure the flame temperature. Experimental results were compared with predictions obtained from a numerical model of a three-dimensional flame developed to capture both the discrete nature and the random distribution of particles in the flame. Though good qualitative agreement was obtained between model predictions and experimental observations, residual g-jitters and the short reduced-gravity periods prevented further investigations of propagation limits in the discrete regime. The full exploration of the discrete flame phenomenon would require a high-quality, long duration reduced gravity environment
Formulation and Application of the Generalized Multilevel Facets Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chih-Yu
2007-01-01
In this study, the authors develop a generalized multilevel facets model, which is not only a multilevel and two-parameter generalization of the facets model, but also a multilevel and facet generalization of the generalized partial credit model. Because the new model is formulated within a framework of nonlinear mixed models, no efforts are…
Discreteness induced extinction
NASA Astrophysics Data System (ADS)
dos Santos, Renato Vieira; da Silva, Linaena Méricy
2015-11-01
Two simple models based on ecological problems are discussed from the point of view of non-equilibrium statistical mechanics. It is shown how discrepant the results of models that include spatial distribution with discrete interactions can be when compared with their continuous analogues. In the continuous case we have, under certain circumstances, a population explosion. When we take into account the finiteness of the population, we get the opposite result: extinction. We analyze how these results depend on the dimension d of the space and describe the phenomenon of "Discreteness Induced Extinction" (DIE). The results are interpreted in the context of the "paradox of sex", an old problem of evolutionary biology.
Multilevel transport solution of LWR reactor cores
Jose Ignacio Marquez Damian; Cassiano R.E. de Oliveira; HyeonKae Park
2008-09-01
This work presents a multilevel approach for the solution of the transport equation in typical LWR assemblies and core configurations. It is based on the second-order, even-parity formulation of the transport equation, which is solved within the framework provided by the finite element-spherical harmonics code EVENT. The performance of the new solver has been compared with that of the standard conjugate gradient solver for diffusion and transport problems on structured and unstructured grids. Numerical results demonstrate the potential of the multilevel scheme for realistic reactor calculations.
Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra
NASA Astrophysics Data System (ADS)
Rezakhah, Saeid; Maleki, Yasaman
2016-07-01
By imposing a flexible sampling scheme we obtain a discretization of a continuous-time discrete scale invariant (DSI) process, which is itself a subsidiary discrete-time DSI process. Then, by introducing a simple random measure, we construct a second continuous-time DSI process that provides a proper approximation of the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous-time process. The time-varying spectral representation of such a continuous-time DSI process is characterized, and its spectrum is estimated. A new method for estimating the time-dependent Hurst parameter of such processes is also provided, which gives a more accurate estimate. The performance of this estimation method is studied via simulation. Finally, the method is applied to real data from the S&P 500 and Dow Jones indices for some special periods.
Discrete Gust Model for Launch Vehicle Assessments
NASA Technical Reports Server (NTRS)
Leahy, Frank B.
2008-01-01
Analysis of spacecraft vehicle responses to atmospheric wind gusts during flight is important in the establishment of vehicle design structural requirements and operational capability. Typically, wind gust models can be either a spectral type determined by a random process having a wide range of wavelengths, or a discrete type having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust model from a military specification (MIL-SPEC) document that used a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Therefore, previous studies could only estimate a value of gust magnitude from the curve, or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve, and provides the necessary information to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability level of exceeding a specified gust magnitude.
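The "1-cosine" gust shape mentioned above is a standard profile that ramps smoothly from zero to a peak and back. A minimal sketch of that profile follows; the function name and parameters (`v_m` for peak magnitude, `half_wavelength` for H) are illustrative, and this is only the shape, not the MIL-SPEC magnitude-vs-wavelength curve the paper reconstructs.

```python
import math

def one_cosine_gust(x, v_m, half_wavelength):
    """'1-cosine' discrete gust profile: rises from 0 to the peak
    magnitude v_m at x = H (the half-wavelength), returns to 0 at
    x = 2H, and is zero outside [0, 2H]."""
    H = half_wavelength
    if 0.0 <= x <= 2.0 * H:
        return 0.5 * v_m * (1.0 - math.cos(math.pi * x / H))
    return 0.0
```

For a 9 m/s gust with a 150 m half-wavelength, the profile peaks at exactly 9 m/s at x = 150 m and is zero at the end points.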
A paradigm for discrete physics
Noyes, H.P.; McGoveran, D.; Etter, T.; Manthey, M.J.; Gefwert, C.
1987-01-01
An example is outlined for constructing a discrete physics using as a starting point the insight from quantum physics that events are discrete, indivisible and non-local. Initial postulates are finiteness, discreteness, finite computability, absolute nonuniqueness (i.e., homogeneity in the absence of specific cause) and additivity.
NASA Astrophysics Data System (ADS)
Hsieh, E. R.; Chung, Steve S.
2015-12-01
The evolution of the gate-current leakage path has been observed and depicted via RTN signals on metal-oxide-silicon field-effect transistors with a high-k gate dielectric. An experimental method based on the gate-current random telegraph noise (Ig-RTN) technique was developed to observe the formation of the gate-leakage path for devices under electrical stress, such as bias temperature instability. The results show that the evolution of the gate-current path consists of three stages. In the beginning, only direct-tunnelling gate current and Ig-RTN induced by discrete traps are observed; in the middle stage, interaction between traps and the percolation paths presents a multi-level gate-current variation; finally, two different patterns of hard or soft breakdown paths can be identified. These observations provide a better understanding of gate leakage and its impact on device reliability.
Bootstrap confidence intervals in multi-level simultaneous component analysis.
Timmerman, Marieke E; Kiers, Henk A L; Smilde, Age K; Ceulemans, Eva; Stouten, Jeroen
2009-05-01
Multi-level simultaneous component analysis (MLSCA) was designed for the exploratory analysis of hierarchically ordered data. MLSCA specifies a component model for each level in the data, where appropriate constraints express possible similarities between groups of objects at a certain level, yielding four MLSCA variants. The present paper discusses different bootstrap strategies for estimating confidence intervals (CIs) on the individual parameters. In selecting a proper strategy, the main issues to address are the resampling scheme and the non-uniqueness of the parameters. The resampling scheme depends on which level(s) in the hierarchy are considered random, and which fixed. The degree of non-uniqueness depends on the MLSCA variant, and, in two variants, the extent to which the user exploits the transformational freedom. A comparative simulation study examines the quality of bootstrap CIs of different MLSCA parameters. Generally, the quality of bootstrap CIs appears to be good, provided the sample sizes are sufficient at each level that is considered to be random. The latter implies that if more than a single level is considered random, the total number of observations necessary to obtain reliable inferential information increases dramatically. An empirical example illustrates the use of bootstrap CIs in MLSCA. PMID:18086338
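The abstract's key point is that the resampling scheme must follow the level treated as random: whole groups are resampled, not individual observations. A minimal percentile-bootstrap sketch of that idea follows; it is a generic illustration, not the MLSCA-specific procedure, and the statistic (the mean of group means) is chosen only for concreteness.

```python
import random
import statistics

def bootstrap_ci(groups, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of group-level means.
    Whole groups are resampled with replacement, mirroring a design
    in which the group level is the one considered random."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(groups) for _ in groups]  # resample groups
        estimates.append(statistics.mean(statistics.mean(g) for g in sample))
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Because only the groups are resampled, few groups means wide intervals regardless of how many observations each group contains, which is exactly why the total number of observations needed grows sharply when more levels are treated as random.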
Engineering applications of heuristic multilevel optimization methods
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.
1989-01-01
Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.
Engineering applications of heuristic multilevel optimization methods
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.
1988-01-01
Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.
Using Multilevel Modeling in Counseling Research
ERIC Educational Resources Information Center
Lynch, Martin F.
2012-01-01
This conceptual and practical overview of multilevel modeling (MLM) for researchers in counseling and development provides guidelines on setting up SPSS to perform MLM and an example of how to present the findings. It also provides a discussion on how counseling and developmental researchers can use MLM to address their own research questions.…
Constructions of Factorizable Multilevel Hadamard Matrices
NASA Astrophysics Data System (ADS)
Matsufuji, Shinya; Fan, Pingzhi
Factorization of Hadamard matrices can provide fast algorithms and facilitate efficient hardware realization. In this letter, constructions of factorizable multilevel Hadamard matrices, which can be considered as a special case of unitary matrices, are investigated. In particular, a class of ternary Hadamard matrices, together with its application, is presented.
BER estimation for multilevel modulation formats
NASA Astrophysics Data System (ADS)
Louchet, Hadrien; Kuzmin, Konstantin; Koltchanov, Igor; Richter, André
2009-11-01
We review existing BER estimation methods and propose alternative methods to assess the performance of multilevel modulation formats with both direct and coherent detection. The impact of digital signal processing (DSP) on the BER estimation procedure is discussed for the latter case. The different approaches are illustrated by simulating exemplary transmission systems.
Single-Level and Multilevel Mediation Analysis
ERIC Educational Resources Information Center
Tofighi, Davood; Thoemmes, Felix
2014-01-01
Mediation analysis is a statistical approach used to examine how the effect of an independent variable on an outcome is transmitted through an intervening variable (mediator). In this article, we provide a gentle introduction to single-level and multilevel mediation analyses. Using single-level data, we demonstrate an application of structural…
New multilevel codes over GF(q)
NASA Technical Reports Server (NTRS)
Wu, Jiantian; Costello, Daniel J., Jr.
1992-01-01
Set partitioning is applied to multi-dimensional signal spaces over GF(q), particularly GF^(q-1)(q) and GF^q(q), to show how to construct both multilevel block codes and multilevel trellis codes over GF(q). Two classes of multilevel (n, k, d) block codes over GF(q) with block length n, number of information symbols k, and minimum distance d_min >= d are presented. These two classes of codes use Reed-Solomon codes as component codes. They can be easily decoded as block length q-1 Reed-Solomon codes, or as block length q or q+1 extended Reed-Solomon codes, using multi-stage decoding. Many of these codes have larger minimum distances than comparable q-ary block codes. Low-rate q-ary convolutional codes, word-error-correcting convolutional codes, and binary-to-q-ary convolutional codes can also be used to construct multilevel trellis codes over GF(q) or binary-to-q-ary trellis codes, some of which have better performance than the above block codes. All of the new codes have simple decoding algorithms based on hard-decision multi-stage decoding.
The Economic Cost of Homosexuality: Multilevel Analyses
ERIC Educational Resources Information Center
Baumle, Amanda K.; Poston, Dudley, Jr.
2011-01-01
This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…
Multilevel training of binary morphological operators.
Hirata, Nina S T
2009-04-01
The design of binary morphological operators that are translation-invariant and locally defined by a finite neighborhood window corresponds to the problem of designing Boolean functions. As in any supervised classification problem, morphological operators designed from training samples also suffer from overfitting. Large neighborhoods tend to lead to performance degradation of the designed operator. This work proposes a multi-level design approach to deal with the issue of designing operators based on large neighborhoods. The main idea is inspired by stacked generalization (a multi-level classifier design approach) and consists in, at each training level, combining the outcomes of the operators of the previous level. The final operator is a multi-level operator that ultimately depends on a larger neighborhood than that of the individual operators that have been combined. Experimental results show that two-level operators obtained by combining operators designed on subwindows of a large window consistently outperform single-level operators designed on the full window. They also show that iterating two-level operators is an effective multi-level approach to obtaining better results. PMID:19229085
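The two-level idea above can be sketched in a few lines: each level-1 operator is a Boolean lookup table trained on its own subwindow, and a level-2 table combines their outputs, so the composite operator effectively sees the union of the subwindows. This toy sketch (function names and the majority-vote training rule are illustrative assumptions, not the paper's exact procedure) shows the structure.

```python
from collections import defaultdict

def train_boolean_op(samples):
    """Learn a locally defined binary operator as a lookup table:
    for each observed window pattern, output the majority label."""
    votes = defaultdict(int)
    for pattern, label in samples:
        votes[pattern] += 1 if label else -1
    return {p: v > 0 for p, v in votes.items()}

def two_level_operator(window, subwindows, level1_ops, level2_op):
    """Two-level design: each level-1 operator sees only its subwindow
    (a tuple of indices into the full window); the level-2 operator
    combines their binary outputs, so the final operator depends on
    a larger neighborhood than any single level-1 operator."""
    mid = tuple(op.get(tuple(window[i] for i in sub), False)
                for sub, op in zip(subwindows, level1_ops))
    return level2_op.get(mid, False)
```

For example, with two subwindows of a 4-pixel window, level-1 operators trained to behave like OR, and a level-2 operator trained to behave like AND, the composite fires only when both halves of the window contain a set pixel.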
Multilevel Factor Models for Ordinal Variables
ERIC Educational Resources Information Center
Grilli, Leonardo; Rampichini, Carla
2007-01-01
This article tackles several issues involved in specifying, fitting, and interpreting the results of multilevel factor models for ordinal variables. First, the problem of model specification and identification is addressed, outlining parameter interpretation. Special attention is devoted to the consequences on interpretation stemming from the…
Efficiently Exploring Multilevel Data with Recursive Partitioning
ERIC Educational Resources Information Center
Martin, Daniel P.; von Oertzen, Timo; Rimm-Kaufman, Sara E.
2015-01-01
There is an increasing number of datasets with many participants, variables, or both, in education and other fields that often deal with large, multilevel data structures. Once initial confirmatory hypotheses are exhausted, it can be difficult to determine how best to explore the dataset to discover hidden relationships that could help to inform…
A Practical Guide to Multilevel Modeling
ERIC Educational Resources Information Center
Peugh, James L.
2010-01-01
Collecting data from students within classrooms or schools, and collecting data from students on multiple occasions over time, are two common sampling methods used in educational research that often require multilevel modeling (MLM) data analysis techniques to avoid Type-1 errors. The purpose of this article is to clarify the seven major steps…
Numerical valuation of discrete double barrier options
NASA Astrophysics Data System (ADS)
Milev, Mariyan; Tagliani, Aldo
2010-03-01
In the present paper we explore the problem of pricing discrete barrier options using the Black-Scholes model for the random movement of the asset price. We pose the problem as a path-integral calculation, choosing an approach similar to the quadrature method. The problem is thus reduced to the estimation of a multi-dimensional integral whose dimension corresponds to the number of monitoring dates. We propose a fast and accurate numerical algorithm for its valuation. Our results for pricing discretely monitored single and double barrier options agree with those obtained by other numerical and analytical methods in the finance literature. A desired level of accuracy is achieved very quickly for values of the underlying asset close to the strike price or the barriers. The method has a simple computer implementation and permits observing the entire life of the option.
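A simple way to see what "discretely monitored" means, and to cross-check any such pricer, is a plain Monte Carlo simulation under the same Black-Scholes dynamics: the barrier is only tested at the monitoring dates, not continuously. This sketch is a generic cross-check, not the authors' quadrature algorithm; parameter names are illustrative.

```python
import math
import random

def discrete_double_barrier_call(s0, k, lower, upper, r, sigma, t,
                                 n_monitor, n_paths=20000, seed=1):
    """Monte Carlo price of a discretely monitored double knock-out call
    under geometric Brownian motion: the option dies if the asset lies
    outside (lower, upper) at any of the n_monitor monitoring dates."""
    rng = random.Random(seed)
    dt = t / n_monitor
    drift = (r - 0.5 * sigma * sigma) * dt  # exact GBM step, no bias
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        alive = True
        for _ in range(n_monitor):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if not lower < s < upper:  # barrier checked only here
                alive = False
                break
        if alive:
            payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths
```

With barriers pushed far away the price recovers the vanilla Black-Scholes call (about 10.45 for s0 = k = 100, r = 0.05, sigma = 0.2, t = 1), while tight barriers knock out most paths and drive the price down.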
Discrete scale invariance in supercritical percolation
NASA Astrophysics Data System (ADS)
Schröder, Malte; Chen, Wei; Nagler, Jan
2016-01-01
Recently it has been demonstrated that the connectivity transition from microscopic connectivity to macroscopic connectedness, known as percolation, is generically announced by a cascade of microtransitions of the percolation order parameter (Chen et al 2014 Phys. Rev. Lett. 112 155701). Here we report the discovery of macrotransition cascades which follow percolation. The order parameter grows in discrete macroscopic steps with positions that can be randomly distributed even in the thermodynamic limit. These transition positions are, however, correlated and follow scaling laws which arise from discrete scale invariance (DSI) and non self-averaging, both traditionally unrelated to percolation. We reveal the DSI in ensemble measurements of these non self-averaging systems by rescaling of the individual realizations before averaging.
Multilevel method for modeling large-scale networks.
Safro, I. M.
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
Xu, Hongwei; Logan, John R.; Short, Susan E.
2014-01-01
Research on neighborhoods and health increasingly acknowledges the need to conceptualize, measure, and model spatial features of social and physical environments. In ignoring underlying spatial dynamics, we run the risk of biased statistical inference and misleading results. In this paper, we propose an integrated multilevel-spatial approach for Poisson models of discrete responses. In an empirical example of child mortality in 1880 Newark, New Jersey, we compare this multilevel-spatial approach with the more typical aspatial multilevel approach. Results indicate that spatially-defined egocentric neighborhoods, or distance-based measures, outperform administrative areal units, such as census units. In addition, although results did not vary by specific definitions of egocentric neighborhoods, they were sensitive to geographic scale and modeling strategy. Overall, our findings confirm that adopting a spatial-multilevel approach enhances our ability to disentangle the effect of space from that of place, and point to the need for more careful spatial thinking in population research on neighborhoods and health. PMID:24763980
Lin, Paul T. Shadid, John N.; Sala, Marzio; Tuminaro, Raymond S.; Hennigan, Gary L.; Hoekstra, Robert J.
2009-09-20
In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source term dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentration. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system.
NASA Astrophysics Data System (ADS)
Lin, Paul T.; Shadid, John N.; Sala, Marzio; Tuminaro, Raymond S.; Hennigan, Gary L.; Hoekstra, Robert J.
2009-09-01
In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source term dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentration. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system.
NASA Astrophysics Data System (ADS)
Kotulski, Zbigniew; Szczepański, Janusz
In the paper we propose a new method of constructing cryptosystems utilising a nonpredictability property of discrete chaotic systems. We formulate the requirements for such systems to assure their safety. We also give examples of practical realisation of chaotic cryptosystems, using a generalisation of the method presented in [7]. The proposed algorithm of encryption and decryption is based on multiple iteration of a certain dynamical chaotic system. We assume that some part of the initial condition is a plain message. As the secret key we assume the system parameter(s) and additionally another part of the initial condition.
National INFOSEC technical baseline: multi-level secure systems
Anderson, J P
1998-09-28
The purpose of this report is to provide a baseline description of the state of multilevel processing to the INFOSEC Research Council and, at their discretion, to the R&D community at large. From the information in the report, it is hoped that the members of the IRC will be aware of gaps in MLS research. A primary purpose is to bring IRC and research community members up to date on what is happening in the MLS arena. The review attempts to cover what MLS products are still available, and to identify companies who still offer MLS products. We have also attempted to identify requirements for MLS by interviewing senior officers of the Intelligence community as well as those elements of DoD and DOE who are or may be interested in procuring MLS products for various applications. The balance of the report consists of the following sections: a background review of the highlights of the development of MLS; a quick summary of where we are today in terms of products, installations, and companies who are still in the business of supplying MLS systems [or who are developing MLS systems]; the requirements as expressed by senior members of the Intelligence community and DoD and DOE; issues and unmet R&D challenges surrounding MLS; and finally a set of recommended research topics.
NASA Technical Reports Server (NTRS)
Bates, J. R.; Moorthi, S.; Higgins, R. W.
1993-01-01
An adiabatic global multilevel primitive equation model using a two time-level, semi-Lagrangian semi-implicit finite-difference integration scheme is presented. A Lorenz grid is used for vertical discretization and a C grid for the horizontal discretization. The momentum equation is discretized in vector form, thus avoiding problems near the poles. The 3D model equations are reduced by a linear transformation to a set of 2D elliptic equations, whose solution is found by means of an efficient direct solver. The model (with minimal physics) is integrated for 10 days starting from an initialized state derived from real data. A resolution of 16 levels in the vertical is used, with various horizontal resolutions. The model is found to be stable and efficient, and to give realistic output fields. Integrations with time steps of 10 min, 30 min, and 1 h are compared, and the differences are found to be acceptable.
Pulse-modulated multilevel data storage in an organic ferroelectric resistive memory diode
NASA Astrophysics Data System (ADS)
Lee, Jiyoul; van Breemen, Albert J. J. M.; Khikhlovskyi, Vsevolod; Kemerink, Martijn; Janssen, Rene A. J.; Gelinck, Gerwin H.
2016-04-01
We demonstrate multilevel data storage in organic ferroelectric resistive memory diodes consisting of a phase-separated blend of P(VDF-TrFE) and a semiconducting polymer. The dynamic behaviour of the organic ferroelectric memory diode can be described in terms of the inhomogeneous field mechanism (IFM) model where the ferroelectric components are regarded as an assembly of randomly distributed regions with independent polarisation kinetics governed by a time-dependent local field. This allows us to write and non-destructively read stable multilevel polarisation states in the organic memory diode using controlled programming pulses. The resulting 2-bit data storage per memory element doubles the storage density of the organic ferroelectric resistive memory diode without increasing its technological complexity, thus reducing the cost per bit.
Pulse-modulated multilevel data storage in an organic ferroelectric resistive memory diode
Lee, Jiyoul; van Breemen, Albert J. J. M.; Khikhlovskyi, Vsevolod; Kemerink, Martijn; Janssen, Rene A. J.; Gelinck, Gerwin H.
2016-01-01
We demonstrate multilevel data storage in organic ferroelectric resistive memory diodes consisting of a phase-separated blend of P(VDF-TrFE) and a semiconducting polymer. The dynamic behaviour of the organic ferroelectric memory diode can be described in terms of the inhomogeneous field mechanism (IFM) model where the ferroelectric components are regarded as an assembly of randomly distributed regions with independent polarisation kinetics governed by a time-dependent local field. This allows us to write and non-destructively read stable multilevel polarisation states in the organic memory diode using controlled programming pulses. The resulting 2-bit data storage per memory element doubles the storage density of the organic ferroelectric resistive memory diode without increasing its technological complexity, thus reducing the cost per bit. PMID:27080264
Williams, Nathaniel J
2016-09-01
A step toward the development of optimally effective, efficient, and feasible implementation strategies that increase evidence-based treatment integration in mental health services involves identification of the multilevel mechanisms through which these strategies influence implementation outcomes. This article (a) provides an orientation to, and rationale for, consideration of multilevel mediating mechanisms in implementation trials, and (b) systematically reviews randomized controlled trials that examined mediators of implementation strategies in mental health. Nine trials were located. Mediation-related methodological deficiencies were prevalent and no trials supported a hypothesized mediator. The most common reason was failure to engage the mediation target. Discussion focuses on directions to accelerate implementation strategy development in mental health. PMID:26474761
NASA Astrophysics Data System (ADS)
Calogero, Francesco
2011-08-01
The original continuous-time "goldfish" dynamical system is characterized by two neat formulas, the first of which provides the N Newtonian equations of motion of this dynamical system, while the second provides the solution of the corresponding initial-value problem. Several other, more general, solvable dynamical systems "of goldfish type" have been identified over time, featuring, in the right-hand ("forces") side of their Newtonian equations of motion, in addition to other contributions, a velocity-dependent term such as that appearing in the right-hand side of the first formula mentioned above. The solvable character of these models allows detailed analyses of their behavior, which in some cases is quite remarkable (for instance isochronous or asymptotically isochronous). In this paper we introduce and discuss various discrete-time dynamical systems, which are as well solvable, which also display interesting behaviors (including isochrony and asymptotic isochrony) and which reduce to dynamical systems of goldfish type in the limit when the discrete-time independent variable l=0,1,2,... becomes the standard continuous-time independent variable t, 0≤t<∞.
Discrete Pathophysiology is Uncommon in Patients with Nonspecific Arm Pain
Kortlever, Joost T.P.; Janssen, Stein J.; Molleman, Jeroen; Hageman, Michiel G.J.S.; Ring, David
2016-01-01
Background: Nonspecific symptoms are common in all areas of medicine. Patients and caregivers can be frustrated when an illness cannot be reduced to a discrete pathophysiological process that corresponds with the symptoms. We therefore asked the following questions: 1) Which demographic factors and psychological comorbidities are associated with change from an initial diagnosis of nonspecific arm pain to eventual identification of discrete pathophysiology that corresponds with symptoms? 2) What is the percentage of patients eventually diagnosed with discrete pathophysiology, what are those pathologies, and do they account for the symptoms? Methods: We evaluated 634 patients with an isolated diagnosis of nonspecific upper extremity pain to see if discrete pathophysiology was diagnosed on subsequent visits to the same hand surgeon, a different hand surgeon, or any physician within our health system for the same pain. Results: There were too few patients with discrete pathophysiology at follow-up to address the primary study question. Definite discrete pathophysiology that corresponded with the symptoms was identified in subsequent evaluations by the index surgeon in one patient (0.16% of all patients) and cured with surgery (nodular fasciitis). Subsequent doctors identified possible discrete pathophysiology in one patient and speculative pathophysiology in four patients and the index surgeon identified possible discrete pathophysiology in four patients, but the five discrete diagnoses accounted for only a fraction of the symptoms. Conclusion: Nonspecific diagnoses are not harmful. Prospective randomized research is merited to determine if nonspecific, descriptive diagnoses are better for patients than specific diagnoses that imply pathophysiology in the absence of discrete verifiable pathophysiology. PMID:27517064
Noyes, H.P.; Starson, S.
1991-03-01
Discrete physics, because it replaces time evolution generated by the energy operator with a global bit-string generator (program universe) and replaces "fields" with the relativistic Wheeler-Feynman "action at a distance," allows the consistent formulation of the concept of signed gravitational charge for massive particles. The resulting prediction made by this version of the theory is that free anti-particles near the surface of the earth will "fall" up with the same acceleration that the corresponding particles fall down. So far as we can see, no current experimental information is in conflict with this prediction of our theory. The experimentum crucis will be one of the anti-proton or anti-hydrogen experiments at CERN. Our prediction should be much easier to test than the small effects which those experiments are currently designed to detect or bound. 23 refs.
Discrete Sibson interpolation.
Park, Sung W; Linsen, Lars; Kreylos, Oliver; Owens, John D; Hamann, Bernd
2006-01-01
Natural-neighbor interpolation methods, such as Sibson's method, are well-known schemes for multivariate data fitting and reconstruction. Despite its many desirable properties, Sibson's method is computationally expensive and difficult to implement, especially when applied to higher-dimensional data. The main reason for both problems is the method's implementation based on a Voronoi diagram of all data points. We describe a discrete approach to evaluating Sibson's interpolant on a regular grid, based solely on finding nearest neighbors and rendering and blending d-dimensional spheres. Our approach does not require us to construct an explicit Voronoi diagram, is easily implemented using commodity three-dimensional graphics hardware, leads to a significant speed increase compared to traditional approaches, and generalizes easily to higher dimensions. For large scattered data sets, we achieve two-dimensional (2D) interpolation at interactive rates and three-dimensional (3D) interpolation with computation times of a few seconds. PMID:16509383
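The discrete evaluation scheme the abstract describes can be sketched in a few lines. Below is a minimal NumPy illustration of the scatter formulation, assuming 2D sites in the unit square, with brute-force nearest-neighbor search and per-pixel disc splatting standing in for the paper's hardware-rendered spheres; all function and variable names are our own.

```python
import numpy as np

def discrete_sibson(sites, values, grid_n):
    """Discrete Sibson interpolation on a grid_n x grid_n grid over [0,1]^2.

    Scatter formulation: each pixel p, whose nearest site lies at distance
    r(p), splats that site's value onto every pixel within distance r(p)
    of p; accumulated values are then normalized by the splat counts.
    """
    ys, xs = np.mgrid[0:grid_n, 0:grid_n]
    px = (xs + 0.5) / grid_n
    py = (ys + 0.5) / grid_n
    pts = np.stack([px, py], axis=-1)                        # (n, n, 2)
    d = np.linalg.norm(pts[..., None, :] - sites[None, None, :, :], axis=-1)
    r = d.min(axis=-1)                                       # nearest-site distance
    v = values[d.argmin(axis=-1)]                            # nearest-site value
    acc = np.zeros((grid_n, grid_n))
    cnt = np.zeros((grid_n, grid_n))
    for i in range(grid_n):
        for j in range(grid_n):
            disc = np.hypot(px - px[i, j], py - py[i, j]) <= r[i, j]
            acc[disc] += v[i, j]
            cnt[disc] += 1.0
    return acc / cnt                                         # cnt >= 1 everywhere
```

Because every pixel splats onto at least itself, the output at each pixel is a convex combination of data values and so stays within their range; the GPU version in the paper replaces the inner loops with rendered and blended spheres.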
Immigration and Prosecutorial Discretion
Apollonio, Dorie; Lochner, Todd; Heddens, Myriah
2015-01-01
Immigration has become an increasingly salient national issue in the US, and the Department of Justice recently increased federal efforts to prosecute immigration offenses. This shift, however, relies on the cooperation of US attorneys and their assistants. Traditionally federal prosecutors have enjoyed enormous discretion and have been responsive to local concerns. To consider how the centralized goal of immigration enforcement may have influenced federal prosecutors in regional offices, we review their prosecution of immigration offenses in California using over a decade's worth of data. Our findings suggest that although centralizing forces influence immigration prosecutions, individual US attorneys' offices retain distinct characteristics. Local factors influence federal prosecutors' behavior in different ways depending on the office. Contrary to expectations, unemployment rates did not affect prosecutors' willingness to pursue immigration offenses, nor did local popular opinion about illegal immigration. PMID:26146530
Discrete Pearson distributions
Bowman, K.O.; Shenton, L.R.; Kastenbaum, M.A.
1991-11-01
These distributions are generated by a first order recursive scheme which equates the ratio of successive probabilities to the ratio of two corresponding quadratics. The use of a linearized form of this model will produce equations in the unknowns matched by an appropriate set of moments (assumed to exist). Given the moments, we may find valid solutions. There are two cases: (1) distributions defined on the non-negative integers (finite or infinite) and (2) distributions defined on negative integers as well. For (1), given the first four moments, it is possible to set this up as equations of finite or infinite degree in the probability of a zero occurrence, the sth component being, in general, a product of s ratios of linear forms in this probability. For (2) the equation for the zero probability is purely linear but may involve slowly converging series; here a particular case is the discrete normal. Regions of validity are being studied. 11 refs.
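The recursive scheme can be illustrated directly. The short sketch below, with our own function names, builds probabilities from the ratio p_{n+1}/p_n and recovers the Poisson distribution as the degenerate case in which the ratio of quadratics reduces to lam/(n+1).

```python
import math

def discrete_pearson(num, den, n_max):
    """Probabilities from the first-order recursion
    p_{n+1} / p_n = num(n) / den(n), truncated at n_max and normalized."""
    p = [1.0]
    for n in range(n_max):
        p.append(p[-1] * num(n) / den(n))
    total = sum(p)
    return [x / total for x in p]

# Degenerate case: ratio lam / (n + 1) yields the Poisson(lam) distribution.
lam = 3.0
pmf = discrete_pearson(lambda n: lam, lambda n: n + 1.0, 60)
```

Other members of the family (negative binomial, hypergeometric, and so on) arise from other choices of the two polynomials, which is what makes the moment-matching program in the abstract possible.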
Discrete stability in stochastic programming
Lepp, R.
1994-12-31
In this lecture we study stability properties of stochastic programs with recourse in which the probability measure is approximated by a sequence of weakly convergent discrete measures. Such a discrete approximation approach makes it possible to analyze explicitly the behavior of the second stage correction function. The approach is based on modern functional analytic methods for the approximation of extremum problems in function spaces, especially on the notion of discrete convergence of vectors to an essentially bounded measurable function.
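To make the discrete approximation idea concrete, here is a toy two-stage example of our own devising (a newsvendor problem, not taken from the lecture): the continuous demand distribution is replaced by successively finer discrete measures, and the approximate optimal first-stage decisions approach the analytic one.

```python
import numpy as np

def expected_profit(x, demand, probs, price=2.0, cost=1.0):
    # first stage: order x; second stage (recourse): sell min(x, demand)
    return price * np.sum(probs * np.minimum(x, demand)) - cost * x

def solve_discretized(n, price=2.0, cost=1.0):
    """Replace Uniform(0,1) demand by n equally weighted midpoints and
    maximize the discretized expected profit over a fixed candidate grid."""
    demand = (np.arange(n) + 0.5) / n
    probs = np.full(n, 1.0 / n)
    candidates = np.linspace(0.0, 1.0, 2001)
    vals = [expected_profit(x, demand, probs, price, cost) for x in candidates]
    return float(candidates[int(np.argmax(vals))])
```

For price 2 and cost 1 the critical fractile is (2 - 1)/2 = 0.5, so the true optimum under Uniform(0,1) demand is x* = 0.5; refining the discrete measure drives the approximate optimum toward it, which is the stability behavior the lecture studies in far greater generality.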
Dunn, Erin C.; Masyn, Katherine E.; Yudron, Monica; Jones, Stephanie M.; Subramanian, S.V.
2014-01-01
The observation that features of the social environment, including family, school, and neighborhood characteristics, are associated with individual-level outcomes has spurred the development of dozens of multilevel or ecological theoretical frameworks in epidemiology, public health, psychology, and sociology, among other disciplines. Despite the widespread use of such theories in etiological, intervention, and policy studies, challenges remain in bridging multilevel theory and empirical research. This paper set out to synthesize these challenges and provide specific examples of methodological and analytical strategies researchers are using to gain a more nuanced understanding of the social determinants of psychiatric disorders, with a focus on children’s mental health. To accomplish this goal, we begin by describing multilevel theories, defining their core elements, and discussing what these theories suggest is needed in empirical work. In the second part, we outline the main challenges researchers face in translating multilevel theory into research. These challenges are presented for each stage of the research process. In the third section, we describe two methods being used as alternatives to traditional multilevel modeling techniques to better bridge multilevel theory and multilevel research. These are: (1) multilevel factor analysis and multilevel structural equation modeling; and (2) dynamic systems approaches. Through its review of multilevel theory, assessment of existing strategies, and examination of emerging methodologies, this paper offers a framework to evaluate and guide empirical studies on the social determinants of child psychiatric disorders as well as health across the lifecourse. PMID:24469555
Extended digital image correlation method for analysis of discrete discontinuity
NASA Astrophysics Data System (ADS)
Deb, Debasis; Bhattacharjee, Sudipta
2015-11-01
A finite element based multilevel extended digital image correlation (X-DIC) method is applied to obtain the displacement distribution of an object containing a discrete discontinuity. The principle of the multilevel X-DIC method is described in the paper and the results are verified using numerically generated images. The deformed images are generated with a pre-existing discontinuity surface across the image for tensile or shear displacements or rotations. Several cubical rock samples are also compressed under uniaxial loading until fractures develop in the post-failure region. Images of a speckled face from this experiment are analyzed using the proposed X-DIC method at each load increment to determine displacements before and after cracks develop in the sample. The results show that the X-DIC technique is capable of capturing damaged zone(s) and the displacement jump across the discontinuity plane, as well as indicating the onset of failure of the rock sample. The method demonstrates the applicability of investigating object failure mechanisms over the entire sample surface in a non-contact manner.
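For readers unfamiliar with DIC itself, the basic subset-matching step (ordinary DIC, not the extended multilevel X-DIC of the paper) can be sketched as an integer-pixel search maximizing zero-normalized cross-correlation; all names and parameters below are illustrative.

```python
import numpy as np

def zncc(a, b):
    # zero-normalized cross-correlation of two equal-size image subsets
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def best_integer_shift(ref, defo, search=5):
    """Exhaustive integer-pixel search for the displacement (dy, dx)
    that best aligns the deformed image with the reference."""
    best, best_c = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            c = zncc(ref, np.roll(defo, (-dy, -dx), axis=(0, 1)))
            if c > best_c:
                best, best_c = (dy, dx), c
    return best
```

Real DIC refines such an integer estimate to sub-pixel accuracy with interpolation and subset shape functions; the extended method additionally enriches the displacement field so that a subset may straddle a discontinuity, which is what lets it capture the displacement jump described above.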
Computational modeling and multilevel cancer control interventions.
Morrissey, Joseph P; Lich, Kristen Hassmiller; Price, Rebecca Anhang; Mandelblatt, Jeanne
2012-05-01
This chapter presents an overview of computational modeling as a tool for multilevel cancer care and intervention research. Model-based analyses have been conducted at various "beneath the skin" or biological scales as well as at various "above the skin" or socioecological levels of cancer care delivery. We review the basic elements of computational modeling and illustrate its applications in four cancer control intervention areas: tobacco use, colorectal cancer screening, cervical cancer screening, and racial disparities in access to breast cancer care. Most of these models have examined cancer processes and outcomes at only one or two levels. We suggest ways these models can be expanded to consider interactions involving three or more levels. Looking forward, a number of methodological, structural, and communication barriers must be overcome to create useful computational models of multilevel cancer interventions and population health. PMID:22623597
Multilevel Inverters for Electric Vehicle Applications
Habetler, T.G.; Peng, F.Z.; Tolbert, L.M.
1998-10-22
This paper presents multilevel inverters as an application for all-electric vehicle (EV) and hybrid-electric vehicle (HEV) motor drives. Diode-clamped inverters and cascaded H-bridge inverters (1) can generate near-sinusoidal voltages with only fundamental frequency switching; (2) have almost no electromagnetic interference (EMI) and common-mode voltage; and (3) make an EV more accessible/safer and open wiring possible for most of an EV's power system. This paper explores the benefits and discusses control schemes of the cascade inverter for use as an EV motor drive or a parallel HEV drive and the diode-clamped inverter as a series HEV motor drive. Analytical, simulated, and experimental results show the superiority of these multilevel inverters for this new niche.
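The fundamental-frequency switching claim can be visualized with nearest-level synthesis: a sketch, with illustrative names and normalized amplitudes, of the staircase phase voltage an m-level inverter can produce while each device switches only once per cycle.

```python
import numpy as np

def staircase_phase_voltage(t, levels):
    """Nearest-level approximation of a unit sine using the given
    number of equally spaced output levels (normalized amplitude)."""
    step = 2.0 / (levels - 1)          # spacing between adjacent levels
    return np.round(np.sin(t) / step) * step

t = np.linspace(0.0, 2.0 * np.pi, 1000)
v5 = staircase_phase_voltage(t, 5)     # 5-level staircase
v11 = staircase_phase_voltage(t, 11)   # 11-level staircase
```

The pointwise error is bounded by half the level spacing, so adding levels drives the staircase toward a sinusoid; this is why harmonic content and EMI fall as the level count grows, without any high-frequency PWM switching.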
On the effectiveness of multilevel selection.
Goodnight, Charles J
2016-01-01
Experimental studies of group selection show that higher levels of selection act on indirect genetic effects, making the response to group and community selection qualitatively different from that of individual selection. This suggests that multilevel selection plays a key role in the evolution of supersocial societies. Experiments showing the effectiveness of community selection indicate that we should consider the possibility that selection among communities may be important in the evolution of supersocial species. PMID:27562604
Automatic Multilevel Parallelization Using OpenMP
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)
2002-01-01
In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.
Voltage balanced multilevel voltage source converter system
Peng, F.Z.; Lai, J.S.
1997-07-01
Disclosed is a voltage balanced multilevel converter for high power AC applications such as adjustable speed motor drives and back-to-back DC intertie of adjacent power systems. This converter provides a multilevel rectifier, a multilevel inverter, and a DC link between the rectifier and the inverter allowing voltage balancing between each of the voltage levels within the multilevel converter. The rectifier is equipped with at least one phase leg and a source input node for each of the phases. The rectifier is further equipped with a plurality of rectifier DC output nodes. The inverter is equipped with at least one phase leg and a load output node for each of the phases. The inverter is further equipped with a plurality of inverter DC input nodes. The DC link is equipped with a plurality of rectifier charging means and a plurality of inverter discharging means. The plurality of rectifier charging means are connected in series with one of the rectifier charging means disposed between and connected in an operable relationship with each adjacent pair of rectifier DC output nodes. The plurality of inverter discharging means are connected in series with one of the inverter discharging means disposed between and connected in an operable relationship with each adjacent pair of inverter DC input nodes. Each of said rectifier DC output nodes are individually electrically connected to the respective inverter DC input nodes. By this means, each of the rectifier DC output nodes and each of the inverter DC input nodes are voltage balanced by the respective charging and discharging of the rectifier charging means and the inverter discharging means. 15 figs.
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597
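The between/within decomposition at the heart of MFPCA can be sketched with simulated data: a method-of-moments estimate, assuming zero-mean curves, in which the between-subject covariance is estimated from pairs of distinct visits of the same subject and the within-subject covariance is obtained by subtraction. The simulation design (phi, psi, variances) is our own illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_vis, n_grid = 60, 3, 21
t = np.linspace(0.0, 1.0, n_grid)
phi = np.sqrt(2.0) * np.sin(np.pi * t)        # between-subject mode
psi = np.sqrt(2.0) * np.cos(2.0 * np.pi * t)  # within-subject mode
a = rng.normal(0.0, np.sqrt(2.0), n_sub)            # subject-level scores
b = rng.normal(0.0, np.sqrt(0.5), (n_sub, n_vis))   # visit-level scores
Y = (a[:, None, None] * phi + b[:, :, None] * psi
     + rng.normal(0.0, 0.05, (n_sub, n_vis, n_grid)))

# Between-subject covariance: average outer products over pairs of
# DISTINCT visits of the same subject (visit-level terms cancel).
Kb = np.zeros((n_grid, n_grid))
pairs = 0
for j in range(n_vis):
    for k in range(n_vis):
        if j != k:
            Kb += Y[:, j, :].T @ Y[:, k, :] / n_sub
            pairs += 1
Kb /= pairs
Kt = np.einsum('ijs,ijt->st', Y, Y) / (n_sub * n_vis)  # total covariance
Kw = Kt - Kb                                           # within-subject covariance

lam_b = np.linalg.eigvalsh(Kb)[::-1]   # between-level eigenvalues, descending
lam_w = np.linalg.eigvalsh(Kw)[::-1]   # within-level eigenvalues, descending
```

Eigendecomposing Kb and Kw separately yields the two sets of principal components; the sparse-sampling case treated in the paper replaces these dense-grid averages with estimators that pool information across subjects when each curve has only a few observations.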
Fluctuations and discreteness in diffusion limited growth
NASA Astrophysics Data System (ADS)
Devita, Jason P.
This thesis explores the effects of fluctuations and discreteness on the growth of physical systems where diffusion plays an important role. It focuses on three related problems, all dependent on diffusion in a fundamental way, but each with its own unique challenges. With diffusion-limited aggregation (DLA), the relationship between noisy and noise-free Laplacian growth is probed by averaging the results of noisy growth. By doing so in a channel geometry, we are able to compare to known solutions of the noise-free problem. We see that while the two are comparable, there are discrepancies which are not well understood. In molecular beam epitaxy (MBE), we create efficient computational algorithms, by replacing random walkers (diffusing atoms) with approximately equivalent processes. In one case, the atoms are replaced by a continuum field. Solving for the dynamics of the field yields---in an average sense---the dynamics of the atoms. In the other case, the atoms are treated as individual random-walking particles, but the details of the dynamics are changed to an (approximately) equivalent set of dynamics. This approach involves allowing adatoms to take long hops. We see approximately an order of magnitude speed up for simulating island dynamics, mound growth, and Ostwald ripening. Some ideas from the study of MBE are carried over to the study of front propagation in reaction-diffusion systems. Many of the analytic results about front propagation are derived from continuum models. It is unclear, however, that these results accurately describe the properties of a discrete system. It is reasonable to think that discrete systems will converge to the continuum results when sufficiently many particles are included. However, computational evidence of this is difficult to obtain, since the interesting properties tend to depend on a power law of the logarithm of the number of particles. Thus, the number of particles included in simulations must be exceedingly large. By