NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan
2018-03-01
We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing a complex network independently and concurrently. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles: the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law describes how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate that these two universal principles hold across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes such as synchronization and spreading.
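The many-searchers setting is easy to probe numerically. The following toy simulation (not the paper's framework; the ring graph, common start node, and single target are illustrative assumptions) estimates the mean search time of k independent walkers and shows it dropping as k grows:

```python
import random

# Toy sketch: k independent random walkers on a ring graph; the search time
# is the first time ANY walker steps onto the target node. Graph, start node
# and target are arbitrary illustrative choices, not the paper's setup.
def build_ring(n=20):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def search_time(adj, k, target, rng):
    walkers = [0] * k                      # all walkers start at node 0
    t = 0
    while True:
        t += 1
        for i in range(k):
            walkers[i] = rng.choice(adj[walkers[i]])
        if target in walkers:
            return t

def mean_search_time(k, trials=400, seed=1):
    rng = random.Random(seed)
    adj = build_ring()
    return sum(search_time(adj, k, 10, rng) for _ in range(trials)) / trials
```

Comparing mean_search_time(1) with mean_search_time(4) exhibits the speed-up from concurrent searchers that the harmonic law quantifies.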
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Hancock, Laura M; Bruce, Jared M; Bruce, Amanda S; Lynch, Sharon G
2015-01-01
Between 40% and 65% of multiple sclerosis patients experience cognitive deficits, with processing speed and working memory most commonly affected. This pilot study investigated the effect of computerized cognitive training focused on improving processing speed and working memory. Participants were randomized into either an active or a sham training group and engaged in six weeks of training. The active training group improved on a measure of processing speed and attention following cognitive training, and data on measures of other domains trended toward significance. Results provide preliminary evidence that cognitive training with multiple sclerosis patients may produce moderate improvement in select areas of cognitive functioning.
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity, we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm.
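A minimal numerical sketch of such a reset process (the 2 × 2 Gaussian matrix ensemble, the entry scale, and the reset value eps are assumptions for illustration; the paper's precise model may differ):

```python
import numpy as np

# Multiply i.i.d. Gaussian 2x2 matrices (positive Lyapunov exponent at this
# entry scale); whenever the product's spectral norm reaches unity, reset the
# product to eps * I and keep going, recording the norm at every step.
def norm_samples(steps=20000, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    prod = eps * np.eye(2)
    norms = []
    for _ in range(steps):
        prod = rng.normal(0.0, 0.8, size=(2, 2)) @ prod
        n = np.linalg.norm(prod, 2)
        if n >= 1.0:
            prod = eps * np.eye(2)         # reset to a multiple of the identity
            n = eps
        norms.append(n)
    return np.array(norms)
```

A histogram of norm_samples() then approximates the stationary density of the norm that the paper studies analytically.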
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint (densely) defined on the space of square-integrable functions over the (lower) half-space, where η is a stationary measure. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
Multiple filters affect tree species assembly in mid-latitude forest communities.
Kubota, Y; Kusumoto, B; Shiono, T; Ulrich, W
2018-05-01
Species assembly patterns of local communities are shaped by the balance between multiple abiotic/biotic filters and dispersal, both of which select individuals from species pools at the regional scale. Knowledge regarding functional assembly can provide insight into the relative importance of the deterministic and stochastic processes that shape species assembly. We evaluated the hierarchical roles of the α and β niches by analyzing the influence of environmental filtering relative to functional traits on geographical patterns of tree species assembly in mid-latitude forests. Using forest plot datasets, we examined the α niche traits (leaf and wood traits) and β niche properties (cold/drought tolerance) of tree species, and tested the non-randomness (clustering/over-dispersion) of trait assembly based on null models that assumed two types of species pools related to biogeographical regions. For most plots, species assembly patterns fell within the range of random expectation. However, particularly for cold/drought tolerance-related β niche properties, deviation from randomness was frequently found; non-random clustering was predominant at higher latitudes with harsh climates. Our findings demonstrate that both randomness and non-randomness in trait assembly emerged as a result of the α and β niches, although we suggest a potential role of dispersal processes and/or species equalization through trait similarities in generating the prevalence of randomness. Clustering of β niche traits along latitudinal climatic gradients provides clear evidence of species sorting by the filtering of particular traits. Our results reveal that multiple filters acting through functional niches and stochastic processes jointly shape geographical patterns of species assembly across mid-latitude forests.
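The null-model logic can be illustrated with a toy randomization test (hypothetical traits and pool; real analyses use dedicated trait-dispersion metrics rather than the simple trait range used here):

```python
import random

# Compare the observed trait range of a local community against communities
# drawn at random from the regional species pool; a quantile near 0 signals
# trait clustering (filtering), a quantile near 1 signals over-dispersion.
def clustering_quantile(pool_traits, community, n_null=2000, seed=9):
    rng = random.Random(seed)
    obs = max(community) - min(community)
    k = len(community)
    null = [max(s) - min(s)
            for s in (rng.sample(pool_traits, k) for _ in range(n_null))]
    return sum(1 for r in null if r < obs) / n_null
```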
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
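The claim is easy to probe numerically. In this hedged sketch (log-normal summands with an assumed sigma; the paper's class of positive variables is more general), sums of many positive summands retain a clearly positive skew, i.e. they remain closer to log-normal than to Gaussian at finite n:

```python
import random, math

# Sample skewness of sums of n_summands i.i.d. log-normal variables; a
# Gaussian limit would give skewness near 0, while at moderate n the sums
# stay markedly right-skewed, as a log-normal would be.
def skew_of_sums(n_summands, n_sums=4000, sigma=1.5, seed=2):
    rng = random.Random(seed)
    sums = [sum(math.exp(rng.gauss(0.0, sigma)) for _ in range(n_summands))
            for _ in range(n_sums)]
    m = sum(sums) / n_sums
    var = sum((x - m) ** 2 for x in sums) / n_sums
    return sum((x - m) ** 3 for x in sums) / n_sums / var ** 1.5
```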
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded an axonal output that was a random but non-Poisson process. While smaller-amplitude stimuli elicited a type of short-interval conditioning, larger-amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
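The qualitative effect of refractoriness on a Poisson input train can be sketched without the full Hodgkin-Huxley machinery (the input rate and the absolute refractory period below are arbitrary illustrative values):

```python
import random

# Thin a Poisson input train with an absolute refractory period: an input
# arrival produces an output only if at least `refractory` time has elapsed
# since the last output. Output intervals are then refractory + exponential,
# so their coefficient of variation falls below the Poisson value of 1.
def output_intervals(rate=1.0, refractory=0.5, n_events=20000, seed=3):
    rng = random.Random(seed)
    t, last_out, intervals = 0.0, None, []
    for _ in range(n_events):
        t += rng.expovariate(rate)          # Poisson input arrivals
        if last_out is None or t - last_out >= refractory:
            if last_out is not None:
                intervals.append(t - last_out)
            last_out = t
    return intervals

def cv(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / m
```

The output train is random but non-Poisson: its interval coefficient of variation sits well below 1.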
Multiplicative processes in visual cognition
NASA Astrophysics Data System (ADS)
Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.
2014-03-01
The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock-crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Specifically, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
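The multiplicative analogue of the CLT behind this signature can be demonstrated in a few lines (uniform factors on an assumed interval; any positive i.i.d. factors would do):

```python
import random, math

# Multiplicative CLT sketch: the log of a product of many i.i.d. positive
# factors is a sum of i.i.d. terms, hence approximately Gaussian, so the
# product itself is approximately log-normal.
def log_products(n_factors=200, n_samples=3000, seed=4):
    rng = random.Random(seed)
    return [sum(math.log(rng.uniform(0.5, 1.5)) for _ in range(n_factors))
            for _ in range(n_samples)]

def skewness(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs) / var ** 1.5
```

Near-zero skewness of the log-products is the log-normal signature reported for the fixation-time distributions.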
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open source desktop application for characterizing spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster shows how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the single-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a processing time of 60 non-continuous hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
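The single-machine parallel profile amounts to farming forward-model evaluations out to workers. A rough stand-in sketch (this is not MAD# code; the quadratic forward_model is a placeholder for an expensive simulation such as a HYDRUS run, and threads stand in for HTCondor job slots):

```python
from concurrent.futures import ThreadPoolExecutor

def forward_model(theta):
    # placeholder for an expensive forward simulation
    return theta ** 2 - theta

def run_batch(thetas, workers=4):
    # fan the parameter draws out to a pool of workers; a real deployment
    # would use processes or a cluster scheduler for CPU-bound models
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(forward_model, thetas))
```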
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-04-01
Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power-law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second-order moment, which suggests that its probability density function (PDF) has a power-law tail. It is verified that the PDF of the ACD(1) has a power-law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of random multiplicative processes, a relation between the model parameter and the power-law exponent is derived theoretically. The relation is confirmed by numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
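An illustrative simulation of an ACD(1)-type process (the specification x_t = ψ_t ε_t with ψ_t = ω + a·x_{t−1}, unit-mean exponential innovations, and the parameter values are assumptions chosen to produce a visibly heavy tail):

```python
import random

# ACD(1) sketch: durations x_t = psi_t * eps_t with psi_t = omega + a*x_{t-1}
# and unit-mean exponential innovations. For a close to 1 the stationary
# density develops a power-law tail, so extreme durations dwarf the median.
def simulate_acd(n=50000, omega=0.1, a=0.9, seed=5):
    rng = random.Random(seed)
    x, out = 1.0, []
    for _ in range(n):
        x = (omega + a * x) * rng.expovariate(1.0)
        out.append(x)
    return out
```

The max-to-median ratio of a long sample is orders of magnitude larger than for an exponential duration model, the numerical fingerprint of the power-law tail.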
A multiple imputation strategy for sequential multiple assignment randomized trials
Shortreed, Susan M.; Laber, Eric; Stroup, T. Scott; Pineau, Joelle; Murphy, Susan A.
2014-01-01
Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient’s health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges, and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trial of Intervention and Effectiveness (CATIE), one of the most well-known SMARTs to date. PMID:24919867
Common Randomness Principles of Secrecy
ERIC Educational Resources Information Center
Tyagi, Himanshu
2013-01-01
This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…
Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, David O.
2007-01-01
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.
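A small numerical sketch of the phase effect (the 2 × 2 frequency-response matrix H and the unit autospectra below are arbitrary illustrative values): with the input autospectra fixed, the trace of the output spectral density S_yy = H S_xx H^H varies with the relative phase of the inputs through the cross-spectral term.

```python
import numpy as np

# Output spectral density trace for a 2-input/2-output linear system at one
# frequency, as a function of the inputs' relative phase (full coherence).
def output_trace(h, s11, s22, phase, coherence=1.0):
    s12 = coherence * np.sqrt(s11 * s22) * np.exp(1j * phase)
    s_xx = np.array([[s11, s12], [np.conj(s12), s22]])
    s_yy = h @ s_xx @ h.conj().T
    return float(np.real(np.trace(s_yy)))
```

For H = [[1, 1], [0, 1]] the trace is maximized at zero relative phase and minimized at π, and intermediate phases give intermediate traces, matching the abstract's statement.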
NASA Astrophysics Data System (ADS)
Rafiq Abuturab, Muhammad
2018-01-01
A new asymmetric multiple-information cryptosystem based on chaotic spiral phase masks (CSPMs) and random spectrum decomposition is put forward. In the proposed system, each channel of a secret color image is first modulated with a CSPM and then gyrator transformed. The gyrator spectrum is randomly divided into two complex-valued masks. The same procedure is applied to multiple secret images to obtain their corresponding first and second complex-valued masks. Finally, the first and second masks of each channel are independently added to produce the first and second complex ciphertexts, respectively. The main feature of the proposed method is that different secret images are encrypted by different CSPMs, whose distinct parameters serve as sensitive decryption/private keys that are completely unknown to unauthorized users. Consequently, the proposed system is resistant to potential attacks. Moreover, the CSPMs are easier to position in the decoding process owing to their own centering mark on the on-axis focal ring. The retrieved secret images are free from cross-talk noise effects. The decryption process can be implemented by optical experiment. Numerical simulation results demonstrate the viability and security of the proposed method.
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
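For the classical (Boltzmann-Gibbs-Shannon) case the MEP reduces to a textbook computation, sketched below for a finite state space with a mean constraint (the states 0..9 and the target means are arbitrary; the paper's contribution is precisely how this picture generalizes beyond such independent-multiplicity systems):

```python
import math

# Maximum-entropy distribution over states 0..9 subject to a fixed mean:
# the solution is the Gibbs form p_i proportional to exp(-beta * i), with
# beta fixed by the constraint. beta is found by bisection, using the fact
# that the Gibbs mean decreases monotonically in beta.
def gibbs_mean(beta, n=10):
    w = [math.exp(-beta * i) for i in range(n)]
    z = sum(w)
    return sum(i * wi for i, wi in enumerate(w)) / z

def max_entropy_dist(target_mean, n=10, tol=1e-10):
    lo, hi = -5.0, 5.0        # bracket (covers means ~0.007..8.99 for n=10)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gibbs_mean(mid, n) > target_mean:
            lo = mid          # mean too high: increase beta
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * i) for i in range(n)]
    z = sum(w)
    return [wi / z for wi in w]
```

A target mean of 4.5 recovers the uniform distribution (beta = 0), as the MEP predicts when the constraint is uninformative.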
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it, and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network holds both its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network forms steady scale-free distributions under our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also develops an exponential edge-weight distribution; that is, a scale-free strength distribution and an exponential edge-weight distribution coexist.
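A loose sketch of strength-preferential non-growth dynamics in this spirit (the specific rewiring move, relocating one single-edge endpoint at a time, is an assumption; the paper's evolution rule may differ in detail). The vertex count and the total number of single-edges are conserved throughout:

```python
import random

# Non-growth rewiring: at each step pick a random single-edge endpoint and
# move it to a vertex chosen with probability proportional to its strength
# (here, strength = number of attached single-edge ends). The number of
# vertices and the number of single-edges never change.
def evolve(n=50, m=400, steps=20000, seed=6):
    rng = random.Random(seed)
    edges = [[rng.randrange(n), rng.randrange(n)] for _ in range(m)]
    strength = [0] * n
    for a, b in edges:
        strength[a] += 1
        strength[b] += 1
    for _ in range(steps):
        e = rng.randrange(m)
        end = rng.randrange(2)
        old = edges[e][end]
        new = rng.choices(range(n), weights=strength)[0]
        edges[e][end] = new
        strength[old] -= 1
        strength[new] += 1
    return strength
```

After many steps the strength distribution broadens markedly while both conserved totals remain fixed.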
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing stochasticity in material behavior that arises from multiple types of uncertainty associated with the structure, design, synthesis, and processing of materials.
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and the time when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
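The regression step itself is ordinary weighted (generalized) least squares. A hedged sketch for a linear radiometer response counts = g·T + c (the variable names and variances are illustrative; the technique described above additionally makes the sample variances depend on the calibration interval):

```python
import numpy as np

# Weighted least-squares fit of gain and offset from calibration samples,
# with per-sample variances entering as inverse weights; the inverse normal
# matrix provides the parameter covariance used for uncertainty estimates.
def lsr_calibration(temps, counts, variances):
    x = np.column_stack([temps, np.ones_like(temps)])
    w = np.diag(1.0 / np.asarray(variances))
    lhs = x.T @ w @ x                 # normal equations: (X^T W X) p = X^T W y
    rhs = x.T @ w @ np.asarray(counts)
    gain, offset = np.linalg.solve(lhs, rhs)
    cov = np.linalg.inv(lhs)          # parameter covariance estimate
    return gain, offset, cov
```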
Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory
NASA Astrophysics Data System (ADS)
Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick
2018-05-01
For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L¹-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
Joint Waveform Optimization and Adaptive Processing for Random-Phase Radar Signals
2014-01-01
There has been much recent interest in waveform design for multiple-input, multiple-output (MIMO) radar. When the resolution capability of the MIMO radar system is of interest, the transmit waveform can be designed to sharpen the radar ambiguity function.
Multiple-predators-based capture process on complex networks
NASA Astrophysics Data System (ADS)
Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi
2017-03-01
The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled by a free parameter α. We derive the distribution of the lamb's lifetime and the expected lifetime ⟨T⟩. Through simulation, we find that the expected lifetime drops substantially with an increasing number of lions. We also study how the underlying topological structure affects the capture process, and find that locating the lamb on small-degree nodes prolongs its lifetime more than locating it on large-degree nodes. Moreover, dense or homogeneous network structures work against the survival of the lamb.
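The unbiased special case (α = 0) of the capture process is straightforward to simulate (the ring graph, static lamb, and uniformly random lion starting nodes are illustrative assumptions):

```python
import random

# Several lions perform independent unbiased random walks until one of them
# steps onto the static lamb's node; the lamb's lifetime is that first time.
def lifetime(adj, lamb, n_lions, rng):
    nodes = [v for v in adj if v != lamb]
    lions = [rng.choice(nodes) for _ in range(n_lions)]
    t = 0
    while True:
        t += 1
        for i in range(n_lions):
            lions[i] = rng.choice(adj[lions[i]])
        if lamb in lions:
            return t

def mean_lifetime(n_lions, trials=300, n=30, seed=7):
    rng = random.Random(seed)
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    return sum(lifetime(adj, 0, n_lions, rng) for _ in range(trials)) / trials
```

Comparing mean_lifetime(5) with mean_lifetime(1) reproduces the reported drop in expected lifetime as lions are added.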
Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.
Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang
2017-01-01
Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations may combat this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting the graph structure of data samples, where an input similarity matrix can be improved through the propagation of a graph random walk. In particular, we construct multiple graphs, with each one corresponding to an individual view, and present a cross-view fusion approach based on graph random walk to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to large amounts of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged in the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric, where affinities between dissimilar samples are enlarged by excessive cross-view fusion. Thus, we devise a heuristic approach to control the number of iterations in the fusion process in order to avoid over-smoothing. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
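A rough sketch of cross-view diffusion on two toy similarity matrices (illustrative only: the actual method additionally uses anchor graphs, learned view-specific weights, and the iteration-stopping heuristic):

```python
import numpy as np

# Each view's similarity matrix is propagated through the other view's
# row-stochastic transition matrix for a few iterations, then the two
# diffused similarities are averaged into a fused metric.
def row_normalize(s):
    return s / s.sum(axis=1, keepdims=True)

def cross_view_fusion(s1, s2, iters=5):
    p1, p2 = row_normalize(s1), row_normalize(s2)
    a, b = s1.copy(), s2.copy()
    for _ in range(iters):
        a, b = p1 @ b @ p1.T, p2 @ a @ p2.T    # cross-view propagation
    return (a + b) / 2
```

With symmetric inputs the fused similarity stays symmetric and positive, and block (cluster) structure shared by both views is reinforced; too many iterations would over-smooth it, which is what the stopping heuristic guards against.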
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
NASA Astrophysics Data System (ADS)
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
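The bootstrap idea itself is independent of the GPU machinery and can be sketched in pure Python (the two-region event labels and the SUVR-like count ratio are drastic simplifications of actual list-mode data):

```python
import random

# Resample a toy event stream with replacement, recompute a target-to-
# reference count ratio per bootstrap realisation, and report the spread of
# the realisations as the statistic's uncertainty.
def bootstrap_ratio(events, n_boot=200, seed=8):
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        sample = rng.choices(events, k=len(events))
        target = sum(1 for r in sample if r == "target")
        ref = sum(1 for r in sample if r == "reference")
        ratios.append(target / ref)
    m = sum(ratios) / n_boot
    sd = (sum((r - m) ** 2 for r in ratios) / n_boot) ** 0.5
    return m, sd
```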
Kosmidis, Mary H.; Zampakis, Petros; Malefaki, Sonia; Ntoskou, Katerina; Nousia, Anastasia; Bakirtzis, Christos; Papathanasopoulos, Panagiotis
2017-01-01
Cognitive impairment is frequently encountered in multiple sclerosis (MS), affecting between 40–65% of individuals, irrespective of disease duration and severity of physical disability. In the present multicenter randomized controlled trial, fifty-eight clinically stable relapsing-remitting MS (RRMS) patients with mild to moderate cognitive impairment and relatively low disability status were randomized to receive either computer-assisted (RehaCom) functional cognitive training with an emphasis on episodic memory, information processing speed/attention, and executive functions for 10 weeks (IG; n = 32) or standard clinical care (CG; n = 26). Outcome measures included a flexible comprehensive neuropsychological battery of tests sensitive to MS patient deficits and feedback regarding personal benefit gained from the intervention on four verbal questions. Only the IG showed significant improvements in verbal and visuospatial episodic memory, processing speed/attention, and executive functioning from pre- to post-assessment. Moreover, the improvement obtained on attention was retained over 6 months, providing evidence of the long-term benefits of this intervention. Group-by-time interactions revealed significant improvements in composite cognitive domain scores in the IG relative to the demographically and clinically matched CG for verbal episodic memory, processing speed, verbal fluency, and attention. Treated patients rated the intervention positively and were more confident about their cognitive abilities following treatment. PMID:29463950
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions that currently prevent scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates, which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
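A minimal sketch of a weighted least squares comparison of this kind, on simulated (not trial) data; the design matrix, weight values, and effect sizes below are illustrative assumptions, not the paper's estimator specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated patient-level data: baseline covariate X, regimen indicator A,
# outcome Y; W are illustrative inverse-probability weights.
n = 200
X = rng.normal(size=n)
A = rng.integers(0, 2, size=n)
Y = 1.0 + 0.5 * X + 2.0 * A + rng.normal(size=n)
W = np.where(A == 1, 2.0, 4.0)   # e.g. 1 / P(observed treatment sequence)

def wls(Y, D, W):
    """Weighted least squares: solve (D' W D) beta = D' W Y."""
    Dw = D * W[:, None]
    return np.linalg.solve(Dw.T @ D, Dw.T @ Y)

D = np.column_stack([np.ones(n), X, A])
beta = wls(Y, D, W)   # beta[2] estimates the between-regimen mean difference
```

In the actual design the weights account for the sequential cluster-level randomizations, and standard errors must respect within-cluster correlation; neither is modeled in this toy version.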
Moore, Sarah J; Herst, Patries M; Louwe, Robert J W
2018-05-01
A remarkable improvement in patient positioning was observed after the implementation of various process changes aiming to increase the consistency of patient positioning throughout the radiotherapy treatment chain. However, no tool was available to describe these changes over time in a standardised way. This study reports on the feasibility of Statistical Process Control (SPC) to highlight changes in patient positioning accuracy and facilitate correlation of these changes with the underlying process changes. Metrics were designed to quantify the systematic and random patient deformation as input for the SPC charts. These metrics were based on data obtained from multiple local ROI matches for 191 patients who were treated for head-and-neck cancer during the period 2011-2016. SPC highlighted a significant improvement in patient positioning that coincided with multiple intentional process changes. The observed improvements could be described as a combination of a reduction in outliers and a systematic improvement in the patient positioning accuracy of all patients. SPC is able to track changes in the reproducibility of patient positioning in head-and-neck radiation oncology, and distinguish between systematic and random process changes. Identification of process changes underlying these trends requires additional statistical analysis and seems only possible when the changes do not overlap in time. Copyright © 2018 Elsevier B.V. All rights reserved.
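An individuals-type SPC chart of the kind used to track such metrics can be sketched as follows, assuming a synthetic one-dimensional positioning metric with a process change partway through (the study itself derived systematic and random deformation metrics from multiple local ROI matches).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-patient positioning metric, in treatment order: an assumed
# process change after patient 100 shifts the mean (values are illustrative).
metric = np.concatenate([rng.normal(3.0, 0.5, 100),
                         rng.normal(1.0, 0.5, 91)])

def individuals_chart(x):
    """Individuals (I) chart: centre line and 3-sigma limits, with sigma
    estimated from the average moving range (MR-bar / 1.128)."""
    sigma = np.abs(np.diff(x)).mean() / 1.128
    centre = x.mean()
    return centre, centre - 3 * sigma, centre + 3 * sigma

# Limits from the baseline period; later points below the LCL signal a
# systematic improvement rather than random variation.
centre, lcl, ucl = individuals_chart(metric[:100])
out_of_control = metric[100:] < lcl
```

Plotting the metric against these limits is what lets a sustained shift (systematic change) be distinguished from isolated outliers (random change), as the abstract describes.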
Should multiple imputation be the method of choice for handling missing data in randomized trials?
Sullivan, Thomas R; White, Ian R; Salter, Amy B; Ryan, Philip; Lee, Katherine J
2016-01-01
The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However, in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation, we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group. PMID:28034175
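The recommendation to impute separately by randomized group can be illustrated with a deliberately simple normal imputation model; the paper's simulations used richer (proper) imputation models, and all distributions below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated trial: randomized group (0/1) and an outcome with ~20% of values
# missing at random. The true treatment effect is 2.0.
group = rng.integers(0, 2, 400)
y = 1.0 + 2.0 * group + rng.normal(size=400)
y[rng.random(400) < 0.2] = np.nan

def impute_by_group(y, group, m=20, rng=rng):
    """Draw m imputed datasets, imputing within each randomized group so that
    group-specific distributions (and any group interactions) are preserved."""
    datasets = []
    for _ in range(m):
        yi = y.copy()
        for g in (0, 1):
            obs = y[(group == g) & ~np.isnan(y)]
            miss = (group == g) & np.isnan(y)
            yi[miss] = rng.normal(obs.mean(), obs.std(ddof=1), miss.sum())
        datasets.append(yi)
    return datasets

# Pool the complete-data effect estimates across imputations (Rubin's rules,
# point estimate only).
effects = [yi[group == 1].mean() - yi[group == 0].mean()
           for yi in impute_by_group(y, group)]
pooled_effect = float(np.mean(effects))
```

Imputing from a single pooled model instead would shrink imputed outcomes toward the overall mean and, with a group interaction present, bias the treatment effect, which is the failure mode the abstract highlights.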
Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise
NASA Astrophysics Data System (ADS)
Morita, Satoru
2018-05-01
Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.
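For the discrete-time case mentioned above, a minimal random multiplicative (Kesten-type) process shows how a power-law tail emerges; the drift, noise scale, and additive floor below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Kesten process: x <- a*x + b with random multiplier a, E[log a] < 0.
# The stationary distribution has a power-law tail whose exponent solves
# E[a**alpha] = 1 (about 1.1 for the drift and noise scale chosen here).
n, T = 20_000, 2_000
x = np.ones(n)
for _ in range(T):
    a = np.exp(rng.normal(-0.05, 0.3, n))
    x = a * x + 1e-3

# Hill estimator of the tail exponent from the largest k values.
k = 500
tail = np.sort(x)[-k:]
alpha = 1.0 / np.mean(np.log(tail / tail[0]))
```

Introducing autocorrelation in the multiplier sequence is exactly the modification whose effect on the exponent the paper analyzes in the continuous-time (colored-noise Langevin) setting.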
System and method for cognitive processing for data fusion
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor)
2012-01-01
A system and method for cognitive processing of sensor data. A processor array receiving analog sensor data and having programmable interconnects, multiplication weights, and filters provides for adaptive learning in real-time. A static random access memory contains the programmable data for the processor array and the stored data is modified to provide for adaptive learning.
Brookes, Sara T; Macefield, Rhiannon C; Williamson, Paula R; McNair, Angus G; Potter, Shelley; Blencowe, Natalie S; Strong, Sean; Blazeby, Jane M
2016-08-17
Methods for developing a core outcome or information set require involvement of key stakeholders to prioritise many items and achieve agreement as to the core set. The Delphi technique requires participants to rate the importance of items in sequential questionnaires (or rounds) with feedback provided in each subsequent round such that participants are able to consider the views of others. This study examines the impact of receiving feedback from different stakeholder groups, on the subsequent rating of items and the level of agreement between stakeholders. Randomized controlled trials were nested within the development of three core sets each including a Delphi process with two rounds of questionnaires, completed by patients and health professionals. Participants rated items from 1 (not essential) to 9 (absolutely essential). For round 2, participants were randomized to receive feedback from their peer stakeholder group only (peer) or both stakeholder groups separately (multiple). Decisions as to which items to retain following each round were determined by pre-specified criteria. Whilst type of feedback did not impact on the percentage of items for which a participant subsequently changed their rating, or the magnitude of change, it did impact on items retained at the end of round 2. Each core set contained discordant items retained by one feedback group but not the other (3-22 % discordant items). Consensus between patients and professionals in items to retain was greater amongst those receiving multiple group feedback in each core set (65-82 % agreement for peer-only feedback versus 74-94 % for multiple feedback). In addition, differences in round 2 scores were smaller between stakeholder groups receiving multiple feedback than between those receiving peer group feedback only. Variability in item scores across stakeholders was reduced following any feedback but this reduction was consistently greater amongst the multiple feedback group. 
In the development of a core outcome or information set, providing feedback within Delphi questionnaires from all stakeholder groups separately may influence the final core set and improve consensus between the groups. Further work is needed to better understand how participants rate and re-rate items within a Delphi process. The three randomized controlled trials reported here were each nested within the development of a core information or outcome set to investigate processes in core outcome and information set development. Outcomes were not health-related and therefore trial registration was not applicable.
Sugavanam, S; Yan, Z; Kamynin, V; Kurkov, A S; Zhang, L; Churkin, D V
2014-02-10
Multiwavelength lasing in the random distributed feedback fiber laser is demonstrated by employing an all fiber Lyot filter. Stable multiwavelength generation is obtained, with each line exhibiting sub-nanometer line-widths. A flat power distribution over multiple lines is obtained, which indicates that the power between lines is redistributed in nonlinear mixing processes. The multiwavelength generation is observed both in first and second Stokes waves.
Implications of clinical trial design on sample size requirements.
Leon, Andrew C
2008-07-01
The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
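A bare-bones Wang-Landau sketch on a tiny 1D Ising chain illustrates the random walk in energy space that the constrained scheme runs once per order-parameter subspace. This single-walker version keeps the modification factor fixed and omits the flatness-check schedule; it is not the authors' macroscopically constrained implementation.

```python
import numpy as np

rng = np.random.default_rng(9)

L = 8                                   # tiny periodic 1D Ising chain
def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))

spins = rng.choice([-1, 1], L)
levels = list(range(-L, L + 1, 4))      # attainable energies: -8, -4, 0, 4, 8
log_g = {e: 0.0 for e in levels}        # running log density of states
hist = {e: 0 for e in levels}
f = 1.0                                 # modification factor (kept fixed here)
E = energy(spins)
for _ in range(100_000):
    i = rng.integers(L)
    spins[i] *= -1                      # propose a single spin flip
    E_new = energy(spins)
    # Wang-Landau acceptance: min(1, g(E) / g(E_new))
    if np.log(rng.random()) < log_g[E] - log_g[E_new]:
        E = E_new
    else:
        spins[i] *= -1                  # reject: restore the spin
    log_g[E] += f
    hist[E] += 1
```

In the macroscopically constrained variant, many such one-dimensional walks run independently, each restricted to a fixed set of order-parameter values, and their densities of states are stitched together afterwards.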
Quantum random bit generation using energy fluctuations in stimulated Raman scattering.
Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J
2013-12-02
Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.
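The idea of extracting multiple unbiased bits from one continuous measurement can be sketched with quantile-rank extraction. The exponential "energies" below merely stand in for measured pulse energies, and this post-processing is an assumed illustration, not the authors' conversion scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for measured Stokes pulse energies (the real distribution comes
# from the Raman process; exponential is just a continuous placeholder).
energies = rng.exponential(1.0, 100_000)

def energies_to_bits(e, bits_per_sample=4):
    """Map each continuous measurement to its quantile rank, then take the
    rank's low-order bits: several unbiased bits per sample."""
    ranks = np.argsort(np.argsort(e))        # 0..N-1, uniform by construction
    symbols = ranks % 2 ** bits_per_sample   # uniform small integers
    return ((symbols[:, None] >> np.arange(bits_per_sample)) & 1).ravel()

bits = energies_to_bits(energies)
```

Because rank transformation makes any continuous distribution uniform, the extracted bit stream is unbiased regardless of the detailed shape of the pulse-energy fluctuations.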
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. 
daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19–30, 1998.
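The microcanonical branching step can be sketched as a binary cascade that conserves the event total exactly; the weight distribution and zero-branch probability below are placeholders, not the sigmoid-parameterised probabilities fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(6)

def microcanonical_cascade(total, n_levels, p_zero=0.2, rng=rng):
    """Disaggregate an event total down a binary multiplicative cascade.
    Each box splits its mass as (w, 1 - w); with probability p_zero all mass
    goes to one side. Microcanonical: mass is conserved exactly at each level."""
    series = np.array([total], dtype=float)
    for _ in range(n_levels):
        w = rng.uniform(0.3, 0.7, series.size)
        zero = rng.random(series.size) < p_zero
        w[zero] = np.round(w[zero])        # snap to 0 or 1: one-sided split
        series = np.column_stack([series * w, series * (1 - w)]).ravel()
    return series

# A 16 mm event disaggregated into 2**5 = 32 fine-resolution intervals;
# the total rainfall depth is preserved exactly by construction.
fine = microcanonical_cascade(16.0, n_levels=5)
```

Exact mass conservation at every level is what distinguishes the microcanonical cascade from canonical variants, and it is why the coupled Poisson-cascade generator preserves event depths.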
NASA Technical Reports Server (NTRS)
Manning, Robert M.
2004-01-01
The systems engineering description of a wideband communications channel is provided which is based upon the fundamental propagation aspects of the problem. In particular, the well known time variant description of a channel is formulated from the basic multiple scattering processes that occur in a random propagation medium. Such a connection is required if optimal processing methods are to be applied to mitigate the deleterious random fading and multipathing of the channel. An example is given which demonstrates how the effective bandwidth of the channel is diminished due to atmospheric propagation impairments.
Social Noise: Generating Random Numbers from Twitter Streams
NASA Astrophysics Data System (ADS)
Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús
2015-12-01
Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.) it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomena, etc.) play this role in state-of-the-art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
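A toy version of the idea is to hash each message in the stream into bits, with the hash acting as a randomness extractor. The SHA-256 choice and the placeholder strings are assumptions for illustration; with fixed strings the output is of course deterministic, so in a real generator the entropy must come from the unpredictable content and timing of the live stream.

```python
import hashlib

# Placeholder for the public message stream; a real implementation would read
# live tweets, whose content and arrival order supply the actual entropy.
messages = [f"message {i} with unpredictable user content" for i in range(64)]

def stream_to_bits(msgs):
    """Hash each message and concatenate the digest bits; the hash whitens
    the non-uniform social source into a balanced bit stream."""
    bits = []
    for m in msgs:
        digest = hashlib.sha256(m.encode()).digest()
        bits.extend((byte >> k) & 1 for byte in digest for k in range(8))
    return bits

bits = stream_to_bits(messages)   # 64 messages x 256 bits each
```

Output of any such construction should still be validated with a battery like the NIST suite, as the paper does, since hashing cannot create entropy that the source lacks.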
Rich or poor: Who should pay higher tax rates?
NASA Astrophysics Data System (ADS)
Murilo Castro de Oliveira, Paulo
2017-08-01
A dynamic agent model is introduced with an annual random wealth multiplicative process followed by taxes paid according to a linear wealth-dependent tax rate. If poor agents pay higher tax rates than rich agents, eventually all wealth becomes concentrated in the hands of a single agent. By contrast, if poor agents are subject to lower tax rates, the economic collective process continues forever.
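The model can be sketched as follows. The shock distribution, the tax schedule, and the equal-redistribution rule are one plausible reading for illustration, not necessarily the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(slope, n_agents=1_000, years=500, base=0.3, rng=rng):
    """Annual multiplicative wealth shock followed by a linear wealth-dependent
    tax; slope > 0 taxes the rich at higher rates, slope < 0 taxes the poor
    more. Collected taxes are redistributed equally (an assumed rule)."""
    w = np.ones(n_agents)
    for _ in range(years):
        w *= rng.lognormal(0.0, 0.1, n_agents)       # random annual shock
        rank = w / w.max()                           # 0 (poorest) .. 1 (richest)
        rate = np.clip(base + slope * (rank - 0.5), 0.0, 0.95)
        tax = rate * w
        w = w - tax + tax.sum() / n_agents           # equal redistribution
    return w

w_prog = simulate(slope=+0.4)    # progressive: wealth stays dispersed
w_regr = simulate(slope=-0.4)    # regressive: wealth concentrates
```

Comparing the two runs reproduces the abstract's qualitative conclusion: under the regressive schedule the top agent's wealth share grows much larger than under the progressive one.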
Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph
2014-07-01
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
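The log-normal limit from multiplicative processes follows directly from applying the CLT to log X; a quick numerical check, with an arbitrary symmetric multiplier distribution chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# X_n = product of n i.i.d. positive multipliers, so log X_n is a sum of
# i.i.d. terms and the CLT drives X_n towards a log-normal distribution.
n_steps, n_samples = 200, 20_000
log_multipliers = rng.uniform(-0.1, 0.1, (n_samples, n_steps))
log_x = log_multipliers.sum(axis=1)

# Normality check on log X: the standardised third moment (skewness) of a
# normal sample should be close to zero.
z = (log_x - log_x.mean()) / log_x.std()
skewness = float((z ** 3).mean())
```

Replacing the finite-variance multipliers with heavy-tailed ones takes the sum of logs out of the Gaussian CLT basin and into the Lévy one, which is precisely the route to the log-Lévy distributions discussed above.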
NASA Astrophysics Data System (ADS)
Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele
2016-07-01
This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law {m}). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law {m} . In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on {mathbf{R}+}, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
Cascade rainfall disaggregation application in U.S. Central Plains
USDA-ARS?s Scientific Manuscript database
Hourly rainfall data are increasingly used in complex, process-based simulations of the environment. Long records of daily rainfall are common, but long continuous records of hourly rainfall are rare and must be developed. A Multiplicative Random Cascade (MRC) model is proposed to disaggregate observed daily rainfall records to an hourly resolution.
Packet communications in satellites with multiple-beam antennas and signal processing
NASA Technical Reports Server (NTRS)
Davies, R.; Chethik, F.; Penick, M.
1980-01-01
A communication satellite with a multiple-beam antenna and onboard signal processing is considered for use in a 'message-switched' data relay system. The signal processor may incorporate demodulation, routing, storage, and remodulation of the data. A system user model is established and key functional elements for the signal processing are identified. With the throughput and delay requirements as the controlled variables, the hardware complexity, operational discipline, occupied bandwidth, and overall user end-to-end cost are estimated for (1) random-access packet switching; and (2) reservation-access packet switching. Other aspects of this network (e.g., the adaptability to channel-switched traffic requirements) are examined. For the given requirements and constraints, the reservation system appears to be the most attractive protocol.
How did you guess? Or, what do multiple-choice questions measure?
Cox, K R
1976-06-05
Multiple-choice questions classified as requiring problem-solving skills have been interpreted as measuring problem-solving skills within students, with the implicit hypothesis that questions needing an increasingly complex intellectual process should present increasing difficulty to the student. This hypothesis was tested in a 150-question paper taken by 721 students in seven Australian medical schools. No correlation was observed between difficulty and assigned process. Consequently, the question-answering process was explored with a group of final-year students. Anecdotal recall by students gave heavy weight to knowledge rather than problem solving in answering these questions. Assignment of the 150 questions to the classification by three teachers and six students showed their congruence to be a little above random probability.
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.
NASA Astrophysics Data System (ADS)
Stossel, Bryan Joseph
1995-01-01
Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. 
Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
2016-06-30
processed the data to reduce short-term variability and normalize diurnal variations, then provided these to a supervised random forest...complementary hypothesis concerning the pathogenesis of multiple organ dysfunction syndrome. Crit Care Med 24: 1107-1116. 61. Goldberger AL, Peng CK
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
Data Processing for NASA's TDRSS DAMA Channel
NASA Technical Reports Server (NTRS)
Long, Christopher C.; Horan, Stephen
1996-01-01
A concept for the addition of a Demand Assignment Multiple Access (DAMA) service to NASA's current Space Network (SN) is developed. Specifically, the design of a receiver for the DAMA channel is outlined. Also, an outline of the procedures taken to process the received service request is presented. The modifications to the SN system are minimal. The post-reception processing is accomplished using standard commercial off-the-shelf (COTS) packages. The result is a random access system capable of receiving requests for service.
2014-09-01
optimal diagonal loading which minimizes the MSE. The behavior of optimal diagonal loading when the arrival process is composed of plane waves embedded...observation vectors. The examples of the ensemble correlation matrix corresponding to the input process consisting of a single or multiple plane waves...Y*ij is the complex conjugate of Yij. This result is used in order to evaluate the expectations of different quadratic forms. The Poincaré-Nash
A stochastic-geometric model of soil variation in Pleistocene patterned ground
NASA Astrophysics Data System (ADS)
Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc
2013-04-01
In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
ARTS: automated randomization of multiple traits for study design.
Maienschein-Cline, Mark; Lei, Zhengdeng; Gardeux, Vincent; Abbasi, Taimur; Machado, Roberto F; Gordeuk, Victor; Desai, Ankit A; Saraf, Santosh; Bahroos, Neil; Lussier, Yves
2014-06-01
Collecting data from large studies on high-throughput platforms, such as microarray or next-generation sequencing, typically requires processing samples in batches. There are often systematic but unpredictable biases from batch-to-batch, so proper randomization of biologically relevant traits across batches is crucial for distinguishing true biological differences from experimental artifacts. When a large number of traits are biologically relevant, as is common for clinical studies of patients with varying sex, age, genotype and medical background, proper randomization can be extremely difficult to prepare by hand, especially because traits may affect biological inferences, such as differential expression, in a combinatorial manner. Here we present ARTS (automated randomization of multiple traits for study design), which aids researchers in study design by automatically optimizing batch assignment for any number of samples, any number of traits and any batch size. ARTS is implemented in Perl and is available at github.com/mmaiensc/ARTS. ARTS is also available in the Galaxy Tool Shed, and can be used at the Galaxy installation hosted by the UIC Center for Research Informatics (CRI) at galaxy.cri.uic.edu.
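The batch-balancing idea behind such a tool can be sketched as a simple randomized search. This is an illustrative toy, not the actual ARTS algorithm; `randomize_batches` and `imbalance` are hypothetical names. It repeatedly shuffles samples into batches and keeps the assignment whose per-batch trait counts deviate least from their proportional shares.

```python
import random
from collections import Counter

def imbalance(batches, trait):
    # Sum over batches of the squared deviation of each trait level's
    # observed count from its expected (proportional) share.
    total = Counter(trait[s] for b in batches for s in b)
    n = sum(len(b) for b in batches)
    score = 0.0
    for b in batches:
        expected = {lvl: cnt * len(b) / n for lvl, cnt in total.items()}
        observed = Counter(trait[s] for s in b)
        score += sum((observed[lvl] - expected[lvl]) ** 2 for lvl in total)
    return score

def randomize_batches(samples, traits, batch_size, n_iter=2000, seed=0):
    # Keep the best of n_iter random shuffles, scored across all traits.
    rng = random.Random(seed)
    order = list(samples)
    best, best_score = None, float("inf")
    for _ in range(n_iter):
        rng.shuffle(order)
        batches = [order[i:i + batch_size]
                   for i in range(0, len(order), batch_size)]
        score = sum(imbalance(batches, t) for t in traits)
        if score < best_score:
            best, best_score = [list(b) for b in batches], score
    return best, best_score
```

For eight samples with a single binary trait and two batches of four, the search quickly finds a perfectly balanced assignment (score 0). A real tool would optimize jointly over many traits and handle uneven batch sizes.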
Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.
Sosnoff, Jacob J; Moon, Yaejin; Wajda, Douglas A; Finlayson, Marcia L; McAuley, Edward; Peterson, Elizabeth W; Morrison, Steve; Motl, Robert W
2015-10-01
To determine the feasibility of three fall prevention programs delivered over 12 weeks among individuals with multiple sclerosis: (A) a home-based exercise program targeting physiological risk factors; (B) an educational program targeting behavioral risk factors; and (C) a combined exercise-and-education program targeting both factors. Randomized controlled trial. Home-based training with assessments at a research laboratory. A total of 103 individuals inquired about the investigation. After screening, 37 individuals with multiple sclerosis who had fallen in the last year and ranged in age from 45-75 years volunteered for the investigation. A total of 34 participants completed postassessment following the 12-week intervention. Participants were randomly assigned into one of four conditions: (1) wait-list control (n = 9); (2) home-based exercise (n = 11); (3) education (n = 9); or (4) a combined exercise and education (n = 8) group. Before and after the 12-week interventions, participants underwent a fall risk assessment as determined by the physiological profile assessment and provided information on their fall prevention behaviors as indexed by the Falls Prevention Strategy Survey. Participants completed falls diaries during the three months postintervention. A total of 34 participants completed postintervention testing. Procedures and processes were found to be feasible. Overall, fall risk scores were lower in the exercise groups (1.15 SD 1.31) compared with the non-exercise groups (2.04 SD 1.04) following the intervention (p < 0.01). There was no group difference in fall prevention behaviors (p > 0.05). Further examination of home-based exercise/education programs for reducing falls in individuals with multiple sclerosis is warranted. A total of 108 participants would be needed in a larger randomized controlled trial. ClinicalTrials.gov #NCT01956227.
Calculating with light using a chip-scale all-optical abacus.
Feldmann, J; Stegmaier, M; Gruhler, N; Ríos, C; Bhaskaran, H; Wright, C D; Pernice, W H P
2017-11-02
Machines that simultaneously process and store multistate data at one and the same location can provide a new class of fast, powerful and efficient general-purpose computers. We demonstrate the central element of an all-optical calculator, a photonic abacus, which provides multistate compute-and-store operation by integrating functional phase-change materials with nanophotonic chips. With picosecond optical pulses we perform the fundamental arithmetic operations of addition, subtraction, multiplication, and division, including a carryover into multiple cells. This basic processing unit is embedded into a scalable phase-change photonic network and addressed optically through a two-pulse random access scheme. Our framework provides first steps towards light-based non-von Neumann arithmetic.
Disassortativity of random critical branching trees
NASA Astrophysics Data System (ADS)
Kim, J. S.; Kahng, B.; Kim, D.
2009-06-01
Random critical branching trees (CBTs) are generated by the multiplicative branching process, where the branching number is determined stochastically, independent of the degree of their ancestor. Here we show analytically that despite this stochastic independence, a degree-degree correlation (DDC) exists in the CBT and it is disassortative. Moreover, the skeletons of fractal networks, the maximum spanning trees formed by the edge betweenness centrality, behave similarly to the CBT in the DDC. This analytic solution and observation support the argument that the fractal scaling in complex networks originates from the disassortativity in the DDC.
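A tree of this kind can be generated by drawing each node's offspring count i.i.d. from a distribution with mean one. The sketch below is our own illustration (the function name `critical_branching_tree` is invented); it uses a geometric offspring law on {0, 1, 2, ...} with success probability 1/2, which is one convenient critical (mean-one) choice, truncated at a maximum tree size.

```python
import random

def critical_branching_tree(max_nodes=2000, seed=1):
    """Grow a tree where each node's offspring count is drawn i.i.d.
    from a critical (mean-one) distribution, here geometric on
    {0, 1, 2, ...} with P(k) = (1/2)**(k + 1)."""
    rng = random.Random(seed)
    edges = []        # (parent, child) pairs
    frontier = [0]    # nodes whose offspring are not yet drawn
    next_id = 1
    while frontier and next_id < max_nodes:
        node = frontier.pop()
        k = 0
        while rng.random() < 0.5:  # geometric(1/2) offspring count
            k += 1
        for _ in range(k):
            if next_id >= max_nodes:
                break
            edges.append((node, next_id))
            frontier.append(next_id)
            next_id += 1
    return edges
```

From the edge list one can then tabulate the degree of each node against the mean degree of its neighbours to observe the disassortative correlation the abstract describes.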
Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers
NASA Astrophysics Data System (ADS)
Hu, Jun; Xu, Hebing; Li, Chao
2018-03-01
Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method which has many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material’s microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays change randomly in the laser spot. Meanwhile, the average absorptivity fluctuation is obvious during movement of the laser. The experimental measurements agree well with the values predicted by the ray tracing model.
Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys
NASA Astrophysics Data System (ADS)
Han, Chao; Shen, Yuzhen; Ma, Wenlin
2017-12-01
An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. Multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and then encrypted by different random phases, respectively. Each encrypted phase-only image undergoes an inverse Fourier transform, generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region over different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied by a random phase in the frequency domain, after which the phase part of the frequency spectrum is truncated and the amplitude information is retained. The random phases, propagation distances and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iterative processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images. The superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and enable image transmission at a highly desired security level.
Efficient search of multiple types of targets
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-12-01
Random searches often take place in fragmented landscapes. Also, in many instances like animal foraging, significant benefits to the searcher arise from visits to a large diversity of patches with a well-balanced distribution of targets found. To date, such aspects have been widely ignored in the usual single-objective analysis of search efficiency, in which one seeks to maximize just the number of targets found per distance traversed. Here we address the problem of determining the best strategies for the random search when these multiple-objective factors play a key role in the process. We consider a figure of merit (efficiency function), which properly "scores" the mentioned tasks. By considering random walk searchers with a power-law asymptotic Lévy distribution of step lengths, p(ℓ) ~ ℓ^(-μ), with 1 < μ ≤ 3, we show that the standard optimal strategy with μ_opt ≈ 2 no longer holds universally. Instead, optimal searches with enhanced superdiffusivity emerge, including values as low as μ_opt ≈ 1.3 (i.e., tending to the ballistic limit). For the general theory of random search optimization, our findings emphasize the necessity to correctly characterize the multitude of aims in any concrete metric to compare among possible candidates for efficient strategies. In the context of animal foraging, our results might explain some empirical data pointing to stronger superdiffusion (μ < 2) in the search behavior of different animal species, conceivably associated with multiple goals to be achieved in fragmented landscapes.
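Step lengths with the power-law tail p(ℓ) ∝ ℓ^(-μ) for ℓ ≥ ℓ0 can be drawn by inverse-transform sampling; a minimal sketch (the name `levy_step` is ours):

```python
import random

def levy_step(mu, ell0=1.0, rng=random):
    """Inverse-transform sample from p(l) ∝ l**(-mu) for l >= ell0,
    valid for mu > 1 (the abstract considers 1 < mu <= 3)."""
    u = rng.random()
    # CDF: F(l) = 1 - (l / ell0)**(1 - mu)
    # =>   l = ell0 * (1 - u)**(-1 / (mu - 1))
    return ell0 * (1.0 - u) ** (-1.0 / (mu - 1.0))
```

For μ = 2 the distribution satisfies F(2ℓ0) = 1/2, so half of all sampled steps fall below twice the minimum length; smaller μ produces heavier tails and more ballistic-like searches.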
Brief Daily Writing Activities and Performance on Major Multiple-Choice Exams
ERIC Educational Resources Information Center
Turner, Haley C.; Bliss, Stacy L.; Hautau, Briana; Carroll, Erin; Jaspers, Kathryn E.; Williams, Robert L.
2006-01-01
Although past research indicates that giving brief quizzes, administered either regularly or randomly, may lead to improvement in students' performance on major exams, negligible research has targeted daily writing activities that require the processing of course information at a deeper level than might result from simply reading course materials…
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
Coherent backscattering of light by complex random media of spherical scatterers: numerical solution
NASA Astrophysics Data System (ADS)
Muinonen, Karri
2004-07-01
Novel Monte Carlo techniques are described for the computation of reflection coefficient matrices for multiple scattering of light in plane-parallel random media of spherical scatterers. The present multiple scattering theory is composed of coherent backscattering and radiative transfer. In the radiative transfer part, the Stokes parameters of light escaping from the medium are updated at each scattering process in predefined angles of emergence. The scattering directions at each process are randomized using probability densities for the polar and azimuthal scattering angles: the former angle is generated using the single-scattering phase function, whereafter the latter follows from Kepler's equation. For spherical scatterers in the Rayleigh regime, randomization proceeds semi-analytically whereas, beyond that regime, cubic spline presentation of the scattering matrix is used for numerical computations. In the coherent backscattering part, the reciprocity of electromagnetic waves in the backscattering direction allows the renormalization of the reversely propagating waves, whereafter the scattering characteristics are computed in other directions. High orders of scattering (~10 000) can be treated because of the peculiar polarization characteristics of the reverse wave: after a number of scatterings, the polarization state of the reverse wave becomes independent of that of the incident wave, that is, it becomes fully dictated by the scatterings at the end of the reverse path. The coherent backscattering part depends on the single-scattering albedo in a non-monotonous way, the most pronounced signatures showing up for absorbing scatterers. The numerical results compare favourably to the literature results for nonabsorbing spherical scatterers both in and beyond the Rayleigh regime.
Effect of multiplicative noise on stationary stochastic process
NASA Astrophysics Data System (ADS)
Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.
2018-03-01
An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.
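A Langevin equation with multiplicative noise of this general form can be integrated numerically with the Euler-Maruyama scheme. The sketch below is a generic illustration for dx = -γx dt + σx dW; it does not reproduce the paper's specific damping and periodic-pumping terms, and the function name is ours.

```python
import math
import random

def euler_maruyama(gamma, sigma, x0=1.0, dt=1e-3, n_steps=5000, seed=2):
    """Euler-Maruyama integration of the Langevin equation
    dx = -gamma * x * dt + sigma * x * dW (multiplicative noise)."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        x = x + (-gamma * x) * dt + sigma * x * dW
        path.append(x)
    return path
```

Because the noise multiplies the state, the trajectory stays positive and its logarithm performs an additive random walk with drift, which is why statistical moments of such systems can depend non-trivially on the noise parameters.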
Random intermittent search and the tug-of-war model of motor-driven transport
NASA Astrophysics Data System (ADS)
Newby, Jay; Bressloff, Paul C.
2010-04-01
We formulate the 'tug-of-war' model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off of the microtubule so that the state of the motor complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman-Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target at an unknown location along the microtubule track. The target represents some subcellular compartment such as a synapse in a neuron's dendrites, and target delivery is modeled as a simple absorption process. Using a quasi-steady-state (QSS) reduction technique we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model) and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model).
Light scattering and random lasing in aqueous suspensions of hexagonal boron nitride nanoflakes
NASA Astrophysics Data System (ADS)
O'Brien, S. A.; Harvey, A.; Griffin, A.; Donnelly, T.; Mulcahy, D.; Coleman, J. N.; Donegan, J. F.; McCloskey, D.
2017-11-01
Liquid phase exfoliation allows large scale production of 2D materials in solution. The particles are highly anisotropic and strongly scatter light. While spherical particles can be accurately and precisely described by a single parameter, the radius, 2D nanoflakes cannot be so easily described. We investigate light scattering in aqueous solutions of 2D hexagonal boron nitride nanoflakes in the single and multiple scattering regimes. In the single scattering regime, the anisotropic 2D materials show a much stronger depolarization of light when compared to spherical particles of similar size. In the multiple scattering regime, the scattering as a function of optical path for hexagonal boron nitride nanoflakes of a given lateral length was found to be qualitatively equivalent to scattering from spheres with the same diameter. We also report the presence of random lasing in high concentration suspensions of aqueous h-BN mixed with Rhodamine B dye. The h-BN works as a scattering agent and Rhodamine B as a gain medium for the process. We observed random lasing at 587 nm with a threshold energy of 0.8 mJ.
On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model
NASA Astrophysics Data System (ADS)
Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin
2018-01-01
We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
Iterative dip-steering median filter
NASA Astrophysics Data System (ADS)
Huo, Shoudong; Zhu, Weihong; Shi, Taikun
2017-09-01
Seismic data are always contaminated with high noise components, which present processing challenges, especially for signal preservation and true-amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies a median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in a descending sequence, until all signals have been retained. The method is tested on both synthetic and field data gathers and also compared with the commonly used f-k least-squares de-noising and f-x deconvolution.
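The core operation, a median taken along an assumed dominant dip rather than strictly across traces, can be sketched as follows. This toy (the name `dip_steered_median` is ours) handles a single integer dip per pass and omits the Fourier-radial dip scan and the iteration over residual dips.

```python
def dip_steered_median(gather, dip, half_width=2):
    """Median-filter a 2D gather (list of traces, each a list of
    samples) along a constant dip: for each sample, take the median
    over neighbouring traces after shifting them by `dip` samples
    per trace, so the window follows the event rather than cutting
    across it."""
    n_traces = len(gather)
    n_samples = len(gather[0])
    out = [[0.0] * n_samples for _ in range(n_traces)]
    for i in range(n_traces):
        for t in range(n_samples):
            vals = []
            lo, hi = max(0, i - half_width), min(n_traces, i + half_width + 1)
            for j in range(lo, hi):
                ts = t + dip * (j - i)   # follow the dip across traces
                if 0 <= ts < n_samples:
                    vals.append(gather[j][ts])
            vals.sort()
            out[i][t] = vals[len(vals) // 2]
    return out
```

On a flat event (dip 0) an isolated spike is replaced by the median of its along-event neighbours while the coherent amplitude survives, which is the signal-preservation property the abstract relies on.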
Garvin-Doxas, Kathy
2008-01-01
While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500 open-ended (primarily) college student responses, submitted online and analyzed through our Ed's Tools system, together with 28 thematic and think-aloud interviews with students, and the responses of students in introductory and advanced courses to questions on the BCI. Students believe that random processes are inefficient, whereas biological systems are very efficient. They are therefore quick to propose their own rational explanations for various processes, from diffusion to evolution. These rational explanations almost always make recourse to a driver, e.g., natural selection in evolution or concentration gradients in molecular biology, with the process taking place only when the driver is present, and ceasing when the driver is absent. For example, most students believe that diffusion only takes place when there is a concentration gradient, and that the mutational processes that change organisms occur only in response to natural selection pressures. An understanding that random processes take place all the time and can give rise to complex and often counterintuitive behaviors is almost totally absent. Even students who have had advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes, and passing through multiple conventional biology courses appears to have little effect on their underlying beliefs. PMID:18519614
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well-established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains, which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new, more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics, namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
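The multiplicative drift setting can be made concrete with Random Local Search on the classic OneMax benchmark, where the expected distance to the optimum shrinks by a factor (1 - 1/n) per step, yielding O(n log n) expected runtime. The sketch below is our own toy, not code from the thesis.

```python
import random

def rls_onemax(n, seed=3, max_iters=100000):
    """Random Local Search on OneMax: flip one uniformly random bit
    and accept the flip if the number of ones does not decrease.
    Returns the number of iterations until the all-ones optimum."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    for t in range(1, max_iters + 1):
        i = rng.randrange(n)
        new_fitness = fitness + (1 if x[i] == 0 else -1)
        if new_fitness >= fitness:   # elitist acceptance
            x[i] ^= 1
            fitness = new_fitness
        if fitness == n:
            return t
    return None
```

With d_t zero-bits remaining, a step improves with probability d_t/n, so E[d_{t+1}] = d_t (1 - 1/n): exactly the multiplicative drift condition the drift theorems formalize.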
Unconscious Priming According to Multiple S-R Rules
ERIC Educational Resources Information Center
Kiesel, Andrea; Kunde, Wilfried; Hoffmann, Joachim
2007-01-01
The present study investigated if unconscious primes can be processed according to different stimulus-response (S-R) rules simultaneously. Participants performed two different S-R rules, such as judging a digit as smaller or larger than five and judging a letter as vowel or consonant. These S-R rules were administered in random order and announced…
Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample
ERIC Educational Resources Information Center
Lehrer, Richard
2017-01-01
Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…
Cooperation evolution in random multiplicative environments
NASA Astrophysics Data System (ADS)
Yaari, G.; Solomon, S.
2010-02-01
Most real-life systems have a random component: the multitude of endogenous and exogenous factors influencing them results in stochastic fluctuations of the parameters determining their dynamics. These empirical systems are in many cases subject to noise of a multiplicative nature. The special properties of multiplicative noise, as opposed to additive noise, have been recognized for a long time. Although formally the difference between free additive and multiplicative random walks amounts to a move from normal to log-normal distributions, in practice the implications are far more far-reaching. While in an additive context the emergence and survival of cooperation requires special conditions (especially some level of reward, punishment, or reciprocity), we find that in the multiplicative random context the emergence of cooperation is much more natural and effective. We study the various implications of this observation and its applications in various contexts.
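The additive-vs-multiplicative distinction can be checked directly with a short simulation (parameters are illustrative): the additive sum of i.i.d. steps is approximately Normal, while the multiplicative product of i.i.d. factors is approximately log-Normal, since its logarithm is itself an additive walk.

```python
import math
import random

rng = random.Random(0)
N, T = 5000, 100  # number of walkers, number of steps

# Additive free walk: x_T = sum of i.i.d. steps -> approximately Normal.
additive = [sum(rng.gauss(0.0, 1.0) for _ in range(T)) for _ in range(N)]

# Multiplicative free walk: x_T = product of i.i.d. factors -> log-Normal,
# because log(x_T) is a sum of i.i.d. log-factors.
multiplicative = [math.prod(math.exp(rng.gauss(0.0, 0.1)) for _ in range(T))
                  for _ in range(N)]

mean_add = sum(additive) / N                              # ~ 0
mean_log = sum(math.log(x) for x in multiplicative) / N   # ~ 0 as well
```

The multiplicative walk stays strictly positive and develops the heavy right tail characteristic of log-normal statistics, which is where the "far more far-reaching" implications come from.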
Resolvent-Techniques for Multiple Exercise Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no
2015-02-15
We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
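The mechanism can be sketched in a few lines. This is a toy version under stated assumptions (sizes, rates, and the linear teacher task are invented, not taken from the paper): a two-layer network is trained on random data, but the backward pass multiplies the output error by a fixed random matrix B instead of the transpose of the forward weights W2, exactly the substitution described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n = 10, 20, 3, 200
X = rng.normal(size=(n, n_in))
Y = X @ rng.normal(size=(n_in, n_out))     # linear teacher targets

W1 = 0.1 * rng.normal(size=(n_in, n_hid))
W2 = 0.1 * rng.normal(size=(n_hid, n_out))
B = 0.1 * rng.normal(size=(n_out, n_hid))  # FIXED random feedback weights

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))

initial = loss()
lr = 0.02
for _ in range(3000):
    H = np.tanh(X @ W1)
    E = H @ W2 - Y                         # output-layer error
    dW2 = H.T @ E / n
    dH = (E @ B) * (1.0 - H ** 2)          # random feedback instead of E @ W2.T
    dW1 = X.T @ dH / n
    W1 -= lr * dW1
    W2 -= lr * dW2

final = loss()  # training error drops despite the random backward weights
```

Despite the backward pathway never matching the forward weights, the error still decreases, which is the qualitative point of feedback alignment.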
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables, and the structural mass, fatigue damage, and negative of the natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming a proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic processes, and multiple objectives.
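One of the scalarization approaches listed, the global criterion method, can be sketched on a deliberately tiny two-objective problem. The objectives below are hypothetical stand-ins, not the cantilever-beam design: the method minimizes the distance of the objective vector from the ideal point formed by the individual minima.

```python
# Global criterion method: minimize the (squared) distance of the
# objective vector from the ideal point (f1*, f2*).
def f1(x):
    return x ** 2           # hypothetical objective 1, minimum at x = 0

def f2(x):
    return (x - 2) ** 2     # hypothetical objective 2, minimum at x = 2

f1_star, f2_star = 0.0, 0.0  # individual minima -> the ideal point

# Simple grid search over the design variable.
xs = [i / 1000 for i in range(-1000, 3001)]
best_x = min(xs, key=lambda x: (f1(x) - f1_star) ** 2
                               + (f2(x) - f2_star) ** 2)
# Symmetric objectives give the compromise solution x = 1.
```

Each of the other methods (goal programming, goal attainment, etc.) is a different scalarization of the same vector-valued problem; only the combining rule changes.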
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, when these simulations must account for the coherence properties of light, they can become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow the behavior of coherent light to be predicted, but only for an averaged realization of the scattering medium. This limits their application in studying physical phenomena tied to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and finite-difference-type simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
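A minimal numerical sketch of this class of solvers, with a kernel and grid invented for illustration rather than taken from the paper: discretize a Fredholm equation of the second kind, u = u0 + K u, and compare a direct linear solve with a fixed-point (Neumann-series) iteration, the analogue of summing successive scattering orders.

```python
import numpy as np

# Discretized Fredholm equation of the second kind: u = u0 + K u.
n = 64
x = np.linspace(0.0, 1.0, n)
h = 1.0 / n
# Illustrative exponential kernel, scaled so the operator is a contraction.
K = 0.3 * np.exp(-np.abs(x[:, None] - x[None, :])) * h
u0 = np.sin(np.pi * x)  # "incident field" source term

# Direct solve of (I - K) u = u0.
u_direct = np.linalg.solve(np.eye(n) - K, u0)

# Fixed-point iteration: u <- u0 + K u (the multiple-scattering series).
u = u0.copy()
for _ in range(200):
    u = u0 + K @ u

err = float(np.max(np.abs(u - u_direct)))  # geometric convergence
```

Because the kernel's norm is below one, each iteration adds one scattering order and the series converges geometrically to the direct solution.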
Self-organization of maze-like structures via guided wrinkling.
Bae, Hyung Jong; Bae, Sangwook; Yoon, Jinsik; Park, Cheolheon; Kim, Kibeom; Kwon, Sunghoon; Park, Wook
2017-06-01
Sophisticated three-dimensional (3D) structures found in nature are self-organized by bottom-up natural processes. To artificially construct such complex systems, various bottom-up fabrication methods, designed to transform 2D structures into 3D structures, have been developed as alternatives to conventional top-down lithography. We present a different self-organization approach, in which we construct microstructures that are periodic and ordered yet random in architecture, like mazes. For this purpose, we transformed planar surfaces by wrinkling, directly using the randomly generated ridges as maze walls. Highly regular maze structures, consisting of several tessellations with customized designs, were fabricated by precisely controlling wrinkling with a ridge-guiding structure, analogous to the creases in origami. The method presented here could find widespread application in material systems spanning multiple length scales.
Knowledge translation interventions for critically ill patients: a systematic review*.
Sinuff, Tasnim; Muscedere, John; Adhikari, Neill K J; Stelfox, Henry T; Dodek, Peter; Heyland, Daren K; Rubenfeld, Gordon D; Cook, Deborah J; Pinto, Ruxandra; Manoharan, Venika; Currie, Jan; Cahill, Naomi; Friedrich, Jan O; Amaral, Andre; Piquette, Dominique; Scales, Damon C; Dhanani, Sonny; Garland, Allan
2013-11-01
We systematically reviewed ICU-based knowledge translation studies to assess the impact of knowledge translation interventions on processes and outcomes of care. We searched electronic databases (to July, 2010) without language restrictions and hand-searched reference lists of relevant studies and reviews. Two reviewers independently identified randomized controlled trials and observational studies comparing any ICU-based knowledge translation intervention (e.g., protocols, guidelines, and audit and feedback) to management without a knowledge translation intervention. We focused on clinical topics that were addressed in greater than or equal to five studies. Pairs of reviewers abstracted data on the clinical topic, knowledge translation intervention(s), process of care measures, and patient outcomes. For each individual or combination of knowledge translation intervention(s) addressed in greater than or equal to three studies, we summarized each study using median risk ratio for dichotomous and standardized mean difference for continuous process measures. We used random-effects models. Anticipating a small number of randomized controlled trials, our primary meta-analyses included randomized controlled trials and observational studies. In separate sensitivity analyses, we excluded randomized controlled trials and collapsed protocols, guidelines, and bundles into one category of intervention. We conducted meta-analyses for clinical outcomes (ICU and hospital mortality, ventilator-associated pneumonia, duration of mechanical ventilation, and ICU length of stay) related to interventions that were associated with improvements in processes of care. From 11,742 publications, we included 119 investigations (seven randomized controlled trials, 112 observational studies) on nine clinical topics. 
Interventions that included protocols with or without education improved continuous process measures (seven observational studies and one randomized controlled trial; standardized mean difference [95% CI]: 0.26 [0.1, 0.42]; p = 0.001 and four observational studies and one randomized controlled trial; 0.83 [0.37, 1.29]; p = 0.0004, respectively). Heterogeneity among studies within topics ranged from low to extreme. The exclusion of randomized controlled trials did not change our results. Single-intervention and lower-quality studies had higher standardized mean differences compared to multiple-intervention and higher-quality studies (p = 0.013 and 0.016, respectively). There were no associated improvements in clinical outcomes. Knowledge translation interventions in the ICU that include protocols with or without education are associated with the greatest improvements in processes of critical care.
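The random-effects pooling used for the continuous process measures can be sketched with the DerSimonian-Laird estimator, the standard random-effects model family. The per-study effects and variances below are illustrative placeholders, not data from the review.

```python
import math

# Hypothetical standardized mean differences and their variances.
effects = [0.20, 0.35, 0.15, 0.40, 0.30]
variances = [0.02, 0.03, 0.025, 0.04, 0.03]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1.0 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights, pooled SMD, and a 95% confidence interval.
w_re = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

When between-study heterogeneity (tau^2) is zero the model collapses to the fixed-effect estimate; as heterogeneity grows, the weights flatten toward equality across studies.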
2017-01-01
Localization in wireless sensor networks is a vital research area that continues to expand with the growth of its applications. As localization gains prominence in wireless sensor networks, it becomes vulnerable to jamming attacks. Jamming attacks disrupt communication between sender and receiver and deeply affect the localization process, leading to large errors in the estimated sensor node positions. Detection and elimination of jamming influence are therefore indispensable. Range-based techniques, especially those using Received Signal Strength (RSS), are severely affected by these attacks. This paper proposes algorithms based on Combination Multiple Frequency Multiple Power Localization (C-MFMPL) and Step Function Multiple Frequency Multiple Power Localization (SF-MFMPL). The algorithms have been tested in the presence of multiple types of jamming attacks, including capture-and-replay, random, and constant jammers, over a log-normal shadow fading propagation model. To overcome the impact of random and constant jammers, the proposed method uses two sets of frequencies shared by the implemented anchor nodes to obtain RSS readings averaged over all transmitted frequencies. In addition, three stages of filters are used to cope with beacons replayed by capture-and-replay jammers. The localization performance of the proposed algorithms in the ideal case, defined as the absence of jamming attacks, is compared with the case of jamming attacks. The main contribution of this paper is robust localization performance in the presence of multiple jamming attacks under a log-normal shadow fading environment across different simulation conditions and scenarios. PMID:28493977
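The averaging idea behind the multiple-frequency schemes can be sketched with the log-normal shadowing model itself. All parameter values here are illustrative assumptions, not those of the paper: RSS(d) = P0 - 10*n*log10(d/d0) + X with X ~ N(0, sigma), and averaging readings taken over several frequencies shrinks the shadowing noise before the distance is inverted from the path-loss model.

```python
import math
import random

# Illustrative log-normal shadowing parameters (not from the paper).
P0, n_exp, d0, sigma = -40.0, 2.7, 1.0, 4.0
rng = random.Random(7)

def rss(d, n_freqs):
    """Average RSS over n_freqs independent frequency readings."""
    readings = [P0 - 10 * n_exp * math.log10(d / d0) + rng.gauss(0, sigma)
                for _ in range(n_freqs)]
    return sum(readings) / len(readings)

def estimate_distance(rss_avg):
    """Invert the deterministic path-loss model."""
    return d0 * 10 ** ((P0 - rss_avg) / (10 * n_exp))

true_d = 20.0
err1 = [abs(estimate_distance(rss(true_d, 1)) - true_d) for _ in range(500)]
err8 = [abs(estimate_distance(rss(true_d, 8)) - true_d) for _ in range(500)]
# Averaging over 8 frequencies gives a smaller mean distance error.
```

The same averaging also dilutes the influence of a jammer that corrupts only some of the frequencies, which is the motivation for sharing two frequency sets among the anchors.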
Controllable lasing performance in solution-processed organic-inorganic hybrid perovskites.
Kao, Tsung Sheng; Chou, Yu-Hsun; Hong, Kuo-Bin; Huang, Jiong-Fu; Chou, Chun-Hsien; Kuo, Hao-Chung; Chen, Fang-Chung; Lu, Tien-Chang
2016-11-03
Solution-processed organic-inorganic perovskites are fascinating due to their remarkable photo-conversion efficiency and great potential for cost-effective, versatile, and large-scale manufacturing of optoelectronic devices. In this paper, we demonstrate that perovskite nanocrystal sizes can be controlled simply by manipulating the precursor solution concentrations in a two-step sequential deposition process, achieving feasible tunability of the excitonic properties and lasing performance of hybrid metal-halide perovskites. The lasing threshold is around 230 μJ cm⁻² in this solution-processed organic-inorganic lead-halide material, comparable to that of colloidal quantum dot lasers. The efficient stimulated emission originates from multiple random scattering provided by the micrometer-scale rugged morphology and polycrystalline grain boundaries; the excitonic properties of the perovskites thus correlate strongly with the morphology of the perovskite nanocrystals. Compared to conventional lasers, which normally serve as coherent light sources, perovskite random lasers are promising for low-cost thin-film lasing devices in flexible and speckle-free imaging applications.
ERIC Educational Resources Information Center
What Works Clearinghouse, 2015
2015-01-01
This study measured the impact of the "Fitness Improves Thinking in Kids" ("FITKids") afterschool program on the executive control (i.e., maintaining focus, performing multiple cognitive processes) and physical fitness of preadolescent students. The "FITKids" program was held at the University of Illinois' campus and…
Multiple hypothesis tracking for cluttered biological image sequences.
Chenouard, Nicolas; Bloch, Isabelle; Olivo-Marin, Jean-Christophe
2013-11-01
In this paper, we present a method for simultaneously tracking thousands of targets in biological image sequences, which is of major importance in modern biology. The complexity and inherent randomness of the problem lead us to propose a unified probabilistic framework for tracking biological particles in microscope images. The framework includes realistic models of particle motion and existence and of fluorescence image features. For the track extraction process per se, the very cluttered conditions motivate the adoption of a multiframe approach that enforces tracking decision robustness to poor imaging conditions and to random target movements. We tackle the large-scale nature of the problem by adapting the multiple hypothesis tracking algorithm to the proposed framework, resulting in a method with a favorable tradeoff between the model complexity and the computational cost of the tracking procedure. When compared to the state-of-the-art tracking techniques for bioimaging, the proposed algorithm is shown to be the only method providing high-quality results despite the critically poor imaging conditions and the dense target presence. We thus demonstrate the benefits of advanced Bayesian tracking techniques for the accurate computational modeling of dynamical biological processes, which is promising for further developments in this domain.
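A drastically simplified stand-in for the data-association step can make the problem concrete. Real MHT maintains a tree of association hypotheses over several frames and scores them probabilistically; the sketch below keeps only a single hypothesis, greedy nearest-neighbour matching with a gating radius, and all coordinates are invented for illustration.

```python
import math

def associate(tracks, detections, gate=2.0):
    """Greedily match each track to its nearest unclaimed detection
    within the gating radius; unmatched tracks get no assignment."""
    assignments, used = {}, set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, gate
        for j, (dx, dy) in enumerate(detections):
            d = math.hypot(tx - dx, ty - dy)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

tracks = {0: (0.0, 0.0), 1: (5.0, 5.0)}
detections = [(5.2, 4.9), (0.1, -0.2), (9.0, 9.0)]
# Track 0 -> detection 1, track 1 -> detection 0; detection 2 is left
# unassigned (a candidate new target or clutter).
```

In cluttered biological images this single-hypothesis rule fails exactly where MHT shines: when several associations are nearly equally plausible, MHT defers the decision across multiple frames instead of committing immediately.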
Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevrekidis, Ioannis
2017-03-22
The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media -- both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).
NASA Astrophysics Data System (ADS)
Duarte Queirós, Sílvio M.
2012-07-01
We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable upon analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
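The q-product underlying the generalised Kapteyn process can be written out directly: x ⊗_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) for q ≠ 1, recovering the ordinary product as q → 1. The sketch below iterates random factors through the q-product; the factor distribution and parameters are illustrative, and for q = 1 the ordinary multiplicative process, and hence the log-Normal, is recovered.

```python
import math
import random

def q_product(x, y, q):
    """Borges q-product; reduces to the ordinary product as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return x * y
    base = x ** (1 - q) + y ** (1 - q) - 1.0
    return base ** (1.0 / (1 - q)) if base > 0 else 0.0

def kapteyn(q, steps, rng):
    """q-generalised Kapteyn multiplicative process with random factors."""
    x = 1.0
    for _ in range(steps):
        x = q_product(x, math.exp(rng.gauss(0.0, 0.05)), q)
    return x

rng = random.Random(3)
samples = [kapteyn(1.0, 50, rng) for _ in range(2000)]
mean_log = sum(math.log(s) for s in samples) / len(samples)
# q = 1: log(x) is a sum of Gaussians, so the samples are log-Normal
# with mean log close to 0.
```

Choosing q below or above 1 deforms the tail of the resulting distribution for small or large values respectively, which is the generalisation the paper analyses.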
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by a factor of one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
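The noise-reduction claim rests on a standard statistical fact that is easy to check in simulation: averaging N independent readings of the same value reduces random noise by about 1/sqrt(N). The numbers below are a generic demonstration, not the sensor's measured figures.

```python
import random
import statistics

rng = random.Random(42)
sigma, N, trials = 1.0, 16, 4000

# Single readings vs averages of N readings of the same (zero) signal.
single = [rng.gauss(0.0, sigma) for _ in range(trials)]
averaged = [statistics.fmean(rng.gauss(0.0, sigma) for _ in range(N))
            for _ in range(trials)]

ratio = statistics.stdev(averaged) / statistics.stdev(single)
# ratio should be close to 1 / sqrt(16) = 0.25
```

The ADC's trick is making the N extra conversions cheap: after the first full 12-bit conversion, each additional sample needs only a 4-bit re-conversion around the known value.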
Using circuit theory to model connectivity in ecology, evolution, and conservation.
McRae, Brad H; Dickson, Brett G; Keitt, Timothy H; Shah, Viral B
2008-10-01
Connectivity among populations and habitats is important for a wide range of ecological processes. Understanding, preserving, and restoring connectivity in complex landscapes requires connectivity models and metrics that are reliable, efficient, and process based. We introduce a new class of ecological connectivity models based in electrical circuit theory. Although they have been applied in other disciplines, circuit-theoretic connectivity models are new to ecology. They offer distinct advantages over common analytic connectivity models, including a theoretical basis in random walk theory and an ability to evaluate contributions of multiple dispersal pathways. Resistance, current, and voltage calculated across graphs or raster grids can be related to ecological processes (such as individual movement and gene flow) that occur across large population networks or landscapes. Efficient algorithms can quickly solve networks with millions of nodes, or landscapes with millions of raster cells. Here we review basic circuit theory, discuss relationships between circuit and random walk theories, and describe applications in ecology, evolution, and conservation. We provide examples of how circuit models can be used to predict movement patterns and fates of random walkers in complex landscapes and to identify important habitat patches and movement corridors for conservation planning.
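The circuit quantities the authors describe can be computed from the graph Laplacian: the effective resistance between two nodes, which circuit theory links to random-walk commute times, is obtained from the Laplacian pseudoinverse. The 4-node "landscape" graph below, with unit conductances, is an invented toy example.

```python
import numpy as np

# Toy landscape graph: nodes are habitat patches, edges are unit
# conductances (illustrative, not an ecological data set).
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian

def effective_resistance(a, b):
    return float(Lp[a, a] + Lp[b, b] - 2 * Lp[a, b])

# Nodes 0 and 2 are joined by two parallel paths (the direct edge, R = 1,
# and the path 0-1-2, R = 2), so R_eff = (1 * 2) / (1 + 2) = 2/3.
```

The parallel-path behaviour is exactly the advantage cited over least-cost-path models: every additional dispersal route lowers the effective resistance, rather than being ignored.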
An improved sampling method of complex network
NASA Astrophysics Data System (ADS)
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Sampling subnets is an important topic in complex network research, as the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate, and average shortest path. The method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
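The two ingredients being combined can be sketched separately: random seed selection (global exploration) followed by snowball waves that expand each seed's neighbourhood (local structure). The graph, seed count, and wave count below are illustrative assumptions, not the RMSC procedure's exact parameters.

```python
import random

def snowball(adj, seeds, waves):
    """Expand a seed set by repeatedly adding all neighbours of the
    current frontier; returns the sampled node set."""
    sampled = set(seeds)
    frontier = set(seeds)
    for _ in range(waves):
        frontier = {v for u in frontier for v in adj[u]} - sampled
        sampled |= frontier
    return sampled

# Small ring-with-chords test graph (illustrative).
adj = {i: {(i - 1) % 10, (i + 1) % 10, (i + 3) % 10, (i - 3) % 10}
       for i in range(10)}

rng = random.Random(0)
seeds = rng.sample(range(10), 2)  # the "random sampling" component
sub = snowball(adj, seeds, 2)     # the "snowball" component
```

Pure snowball sampling from one seed over-represents that seed's neighbourhood; drawing multiple random seeds first is what restores coverage of the network's global structure.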
Novel method of extracting motion from natural movies.
Suzuki, Wataru; Ichinohe, Noritaka; Tani, Toshiki; Hayami, Taku; Miyakawa, Naohisa; Watanabe, Satoshi; Takeichi, Hiroshige
2017-11-01
The visual system in primates can be segregated into motion and shape pathways, with interaction occurring at multiple stages along them. Processing of shape-from-motion and biological motion is considered a higher-order integration process involving motion and shape information. However, relatively limited types of stimuli have been used in previous studies of these integration processes. We propose a new algorithm to extract object motion information from natural movies and to move random dots in accordance with that information. The object motion information is extracted by estimating the dynamics of local normal vectors of the image intensity projected onto the x-y plane of the movie. An electrophysiological experiment on two adult common marmoset monkeys (Callithrix jacchus) showed that natural and random dot movies generated with this new algorithm yielded comparable neural responses in the middle temporal visual area. In principle, the algorithm provides random dot motion stimuli containing shape information for arbitrary natural movies. This new method is expected to expand the neurophysiological and psychophysical experimental protocols used to elucidate the integrated processing of motion and shape information in biological systems. The novel algorithm proposed here was effective in extracting object motion information from natural movies and provides new motion stimuli for investigating higher-order motion information processing.
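The spatial part of a local intensity normal is closely related to the (negated) image gradient, so the core estimation step can be sketched with finite differences. This is a simplified stand-in for the paper's estimator, computed on a synthetic frame rather than a natural movie.

```python
import numpy as np

# Synthetic 32x32 "frame": a horizontal sinusoid plus a vertical ramp.
frame = np.fromfunction(lambda y, x: np.sin(0.2 * x) + 0.1 * y, (32, 32))

# Finite-difference intensity gradients (np.gradient returns the
# derivative along axis 0 first, i.e. y before x).
gy, gx = np.gradient(frame)

# Per-pixel unit gradient direction, the x-y projection of the local
# normal vector of the intensity surface.
norms = np.hypot(gx, gy)
nx = np.where(norms > 0, gx / np.maximum(norms, 1e-12), 0.0)
ny = np.where(norms > 0, gy / np.maximum(norms, 1e-12), 0.0)
```

Tracking how these per-pixel directions change from frame to frame is what turns a static normal field into the motion signal the dots are then driven by.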
Rimmer, James H; Johnson, George; Wilroy, Jereme; Young, Hui-Ju; Mehta, Tapan; Lai, Byron
2018-01-01
Background: People with multiple sclerosis face varying levels of disability and symptoms, thus requiring highly trained therapists and/or exercise trainers to design personalized exercise programs. However, for people living in geographically isolated communities, access to such trained professionals can be challenging due to a number of barriers associated with cost, access to transportation, and travel distance. Generic mobile health exercise apps often fall short of what people with multiple sclerosis need to become physically active (ie, exercise content that has been adapted to accommodate a wide range of functional limitations). Objective: This usability study describes the development process of the TEAMS (Tele-Exercise and Multiple Sclerosis) app, which is being used by people with multiple sclerosis in a large randomized controlled trial to engage in home-based telerehabilitation. Methods: Twenty-one participants with disabilities (10 people with multiple sclerosis) were involved in the double iterative design, which included the simultaneous development of the app features and exercise content (exercise videos and articles). Framed within a user-centered design approach, the development process included 2 stages: ground-level creation (focus group followed by early stage evaluations and developments), and proof of concept through 2 usability tests. Usability (effectiveness, usefulness, and satisfaction) was evaluated using a mixed-methods approach. Results: During testing of the app’s effectiveness, the second usability test resulted in an average of 1 problem per participant, a decrease of 53% compared to the initial usability test. Five themes were constructed from the qualitative data that related to app usefulness and satisfaction, namely: high perceived confidence for app usability, positive perceptions of exercise videos, viable exercise option at home, orientation and familiarity required for successful participation, and app issues.
Participants acknowledged that the final app was ready to be delivered to the public after minor revisions. After including these revisions, the project team released the final app that is being used in the randomized controlled trial. Conclusions A multi-level user-centered development process resulted in the development of an inclusive exercise program for people with multiple sclerosis operated through an easy-to-use app. The promotion of exercise through self-regulated mHealth programs requires a stakeholder-driven approach to app development. This ensures that app and content match the preferences and functional abilities of the end user (ie, people with varying levels of multiple sclerosis). PMID:29798832
Zhu, Ye; Sun, Jianhua; Yi, Chenglin; Wei, Wei; Liu, Xiaoya
2016-09-13
In this study, a one-step generation of stable multiple Pickering emulsions using pH-responsive polymeric nanoparticles as the only emulsifier was reported. The polymeric nanoparticles were self-assembled from an amphiphilic random copolymer poly(dodecyl acrylate-co-acrylic acid) (PDAA), and the effect of the copolymer content on the size and morphology of PDAA nanoparticles was determined by dynamic light scattering (DLS) and transmission electron microscopy (TEM). The emulsification study of PDAA nanoparticles revealed that multiple Pickering emulsions could be generated through a one-step phase inversion process by using PDAA nanoparticles as the stabilizer. Moreover, the emulsification performance of PDAA nanoparticles at different pH values demonstrated that multiple emulsions with long-time stability could only be stabilized by PDAA nanoparticles at pH 5.5, indicating that the surface wettability of PDAA nanoparticles plays a crucial role in determining the type and stability of the prepared Pickering emulsions. Additionally, the polarity of oil does not affect the emulsification performance of PDAA nanoparticles, and a wide range of oils could be used as the oil phase to prepare multiple emulsions. These results demonstrated that multiple Pickering emulsions could be generated via the one-step emulsification process using self-assembled polymeric nanoparticles as the stabilizer, and the prepared multiple emulsions have promising potential to be applied in the cosmetic, medical, and food industries.
Gich, Jordi; Freixanet, Jordi; García, Rafael; Vilanova, Joan Carles; Genís, David; Silva, Yolanda; Montalban, Xavier; Ramió-Torrentà, Lluís
2015-09-01
MS-Line! was created to provide an effective treatment for cognitive impairment in multiple sclerosis (MS) patients. The objective was to assess the efficacy of MS-Line! in a randomized, controlled, single-blind, 6-month pilot study. Patients were randomly assigned to an experimental group (cognitive rehabilitation with the programme) or to a control group (no cognitive rehabilitation). Randomization was stratified by cognitive impairment level. Cognitive assessment included: selective reminding test, 10/36 spatial recall test (10/36 SPART), symbol digit modalities test, paced auditory serial addition test, word list generation (WLG), FAS test, letter-number sequencing (LNS) and other subtests of the WAIS-III, Boston naming test (BNT), and trail making test (TMT). Forty-three patients (22 in the experimental group, 21 in the control group) were analyzed. Covariance analysis showed significant differences between groups in 10/36 SPART (P=0.0002), 10/36 SPART delayed recall (P=0.0021), WLG (P=0.0123), LNS (P=0.0413), BNT (P=0.0007) and TMT-A (P=0.010) scores. The study showed a significant improvement in learning and visual memory, executive functions, attention and information processing speed, and naming ability in those patients who received cognitive rehabilitation. The results suggest that MS-Line! is effective in improving cognitive impairment in MS patients. © The Author(s), 2015.
Trial of Minocycline in a Clinically Isolated Syndrome of Multiple Sclerosis.
Metz, Luanne M; Li, David K B; Traboulsee, Anthony L; Duquette, Pierre; Eliasziw, Misha; Cerchiaro, Graziela; Greenfield, Jamie; Riddehough, Andrew; Yeung, Michael; Kremenchutzky, Marcelo; Vorobeychik, Galina; Freedman, Mark S; Bhan, Virender; Blevins, Gregg; Marriott, James J; Grand'Maison, Francois; Lee, Liesly; Thibault, Manon; Hill, Michael D; Yong, V Wee
2017-06-01
On the basis of encouraging preliminary results, we conducted a randomized, controlled trial to determine whether minocycline reduces the risk of conversion from a first demyelinating event (also known as a clinically isolated syndrome) to multiple sclerosis. During the period from January 2009 through July 2013, we randomly assigned participants who had had their first demyelinating symptoms within the previous 180 days to receive either 100 mg of minocycline, administered orally twice daily, or placebo. Administration of minocycline or placebo was continued until a diagnosis of multiple sclerosis was established or until 24 months after randomization, whichever came first. The primary outcome was conversion to multiple sclerosis (diagnosed on the basis of the 2005 McDonald criteria) within 6 months after randomization. Secondary outcomes included conversion to multiple sclerosis within 24 months after randomization and changes on magnetic resonance imaging (MRI) at 6 months and 24 months (change in lesion volume on T2-weighted MRI, cumulative number of new lesions enhanced on T1-weighted MRI ["enhancing lesions"], and cumulative combined number of unique lesions [new enhancing lesions on T1-weighted MRI plus new and newly enlarged lesions on T2-weighted MRI]). A total of 142 eligible participants underwent randomization at 12 Canadian multiple sclerosis clinics; 72 participants were assigned to the minocycline group and 70 to the placebo group. The mean age of the participants was 35.8 years, and 68.3% were women. The unadjusted risk of conversion to multiple sclerosis within 6 months after randomization was 61.0% in the placebo group and 33.4% in the minocycline group, a difference of 27.6 percentage points (95% confidence interval [CI], 11.4 to 43.9; P=0.001).
After adjustment for the number of enhancing lesions at baseline, the difference in the risk of conversion to multiple sclerosis within 6 months after randomization was 18.5 percentage points (95% CI, 3.7 to 33.3; P=0.01); the unadjusted risk difference was not significant at the 24-month secondary outcome time point (P=0.06). All secondary MRI outcomes favored minocycline over placebo at 6 months but not at 24 months. Trial withdrawals and adverse events of rash, dizziness, and dental discoloration were more frequent among participants who received minocycline than among those who received placebo. The risk of conversion from a clinically isolated syndrome to multiple sclerosis was significantly lower with minocycline than with placebo over 6 months but not over 24 months. (Funded by the Multiple Sclerosis Society of Canada; ClinicalTrials.gov number, NCT00666887.)
Beetz, M Jerome; Kordes, Sebastian; García-Rosales, Francisco; Kössl, Manfred; Hechavarría, Julio C
2017-01-01
For the purpose of orientation, echolocating bats emit highly repetitive and spatially directed sonar calls. Echoes arising from call reflections are used to create an acoustic image of the environment. The inferior colliculus (IC) represents an important auditory stage for initial processing of echolocation signals. The present study addresses the following questions: (1) how does the temporal context of an echolocation sequence mimicking an approach flight of an animal affect neuronal processing of distance information to echo delays? (2) how does the IC process complex echolocation sequences containing echo information from multiple objects (multiobject sequence)? Here, we conducted neurophysiological recordings from the IC of ketamine-anaesthetized bats of the species Carollia perspicillata and compared the results from the IC with the ones from the auditory cortex (AC). Neuronal responses to an echolocation sequence were suppressed when compared to the responses to temporally isolated and randomized segments of the sequence. The neuronal suppression was weaker in the IC than in the AC. In contrast to the cortex, the time course of the acoustic events is reflected by IC activity. In the IC, suppression sharpens the neuronal tuning to specific call-echo elements and increases the signal-to-noise ratio in the units' responses. When presenting multiple-object sequences, despite collicular suppression, the neurons responded to each object-specific echo. The latter allows parallel processing of multiple echolocation streams at the IC level. Altogether, our data suggest that temporally precise neuronal responses in the IC could allow fast and parallel processing of multiple acoustic streams.
Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Kim, Kap-Joong; Kim, Jong-Hoi; Youn, Chun Ju
2017-08-21
Most polarization-based BB84 quantum key distribution (QKD) systems utilize multiple lasers to generate one of four polarization quantum states randomly. However, random bit generation with multiple lasers can potentially open critical side channels that significantly endanger the security of QKD systems. In this paper, we show unnoticed side channels of temporal disparity and intensity fluctuation, which possibly exist in the operation of multiple semiconductor laser diodes. Experimental results show that these side channels can enormously degrade the security performance of QKD systems. An important system issue related to the laser driving conditions, relevant to improving the quantum bit error rate (QBER), is further addressed with experimental results.
Fiber tractography using machine learning.
Neher, Peter F; Côté, Marc-Alexandre; Houde, Jean-Christophe; Descoteaux, Maxime; Maier-Hein, Klaus H
2017-09-01
We present a fiber tractography approach based on a random forest classification and voting process, guiding each step of the streamline progression by directly processing raw diffusion-weighted signal intensities. For comparison to the state-of-the-art, i.e. tractography pipelines that rely on mathematical modeling, we performed a quantitative and qualitative evaluation with multiple phantom and in vivo experiments, including a comparison to the 96 submissions of the ISMRM tractography challenge 2015. The results demonstrate the vast potential of machine learning for fiber tractography. Copyright © 2017 Elsevier Inc. All rights reserved.
Long-range epidemic spreading in a random environment.
Juhász, Róbert; Kovács, István A; Iglói, Ferenc
2015-03-01
Modeling long-range epidemic spreading in a random environment, we consider a quenched, disordered, d-dimensional contact process with infection rates decaying with distance as 1/r^(d+σ). We study the dynamical behavior of the model at and below the epidemic threshold by a variant of the strong-disorder renormalization-group method and by Monte Carlo simulations in one and two spatial dimensions. Starting from a single infected site, the average survival probability is found to decay as P(t) ~ t^(-d/z) up to multiplicative logarithmic corrections. Below the epidemic threshold, a Griffiths phase emerges, where the dynamical exponent z varies continuously with the control parameter and tends to z_c = d+σ as the threshold is approached. At the threshold, the spatial extension of the infected cluster (in surviving trials) is found to grow as R(t) ~ t^(1/z_c) with a multiplicative logarithmic correction, and the average number of infected sites in surviving trials is found to increase as N_s(t) ~ (ln t)^χ with χ=2 in one dimension.
Non-stationary least-squares complex decomposition for microseismic noise attenuation
NASA Astrophysics Data System (ADS)
Chen, Yangkang
2018-06-01
Microseismic data processing and imaging are crucial for subsurface real-time monitoring during the hydraulic fracturing process. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The main difficulty with microseismic data is its low signal-to-noise ratio. Because of the small energy difference between the effective microseismic signal and the ambient noise, the effective signals are usually buried in strong random noise. I propose a microseismic denoising algorithm that decomposes a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of the useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damping exponential components. The method is flexible and almost automated, since the only parameter that needs to be defined is the decomposition number. I use synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic data sets.
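The decomposition idea above can be sketched in a few lines: fit the noisy trace with a small dictionary of damped (co)sinusoids by linear least squares and keep the fitted sum as the denoised signal. The damping constants, frequencies, and the synthetic trace below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 500, 0.002
t = np.arange(n) * dt

# Synthetic "microseismic event": a damped oscillation buried in noise.
clean = np.exp(-8.0 * t) * np.sin(2 * np.pi * 30.0 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

# Dictionary of damped exponential components (the damping and frequency
# grids are illustrative choices).
cols = []
for d in (2.0, 8.0, 32.0):
    for f in (10.0, 30.0, 50.0):
        cols.append(np.exp(-d * t) * np.cos(2 * np.pi * f * t))
        cols.append(np.exp(-d * t) * np.sin(2 * np.pi * f * t))
A = np.stack(cols, axis=1)

# Least-squares fit of the component ensemble; the fitted sum is the
# denoised trace and the residual is treated as random noise.
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
denoised = A @ coef

err_noisy = np.linalg.norm(noisy - clean)
err_fit = np.linalg.norm(denoised - clean)
print(err_fit < err_noisy)  # the fit is closer to the clean signal
```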
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Naim, Eli; Krapivsky, Paul
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
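The merge rule described above is simple to simulate; the cluster count and step count below are arbitrary choices for illustration.

```python
import random

# Aggregation with choice: a target cluster and two candidate clusters
# are drawn at random, and the target merges with the LARGER candidate.
def aggregate_with_choice(n_clusters, n_steps, seed=0):
    rng = random.Random(seed)
    sizes = [1] * n_clusters          # start from monomers
    for _ in range(n_steps):
        i, j, k = rng.sample(range(len(sizes)), 3)
        cand = j if sizes[j] >= sizes[k] else k   # larger candidate wins
        sizes[i] += sizes[cand]
        sizes.pop(cand)               # the merged cluster disappears
    return sizes

sizes = aggregate_with_choice(2000, 1500)
print(len(sizes), sum(sizes))  # 500 clusters remain; total mass 2000
```

Each step removes exactly one cluster and conserves total mass, so the cluster count and mass after any number of steps are known exactly.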
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated, in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of these fiducial points, such as the P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated from these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis of real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with schemes that rely solely on IPIs to generate BSes, the MFBSG algorithm uses five feature values from one heartbeat cycle and can be up to five times faster than the solely IPI-based methods, thus achieving the design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
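The interval-to-bits step can be illustrated as follows. The five R-relative intervals per beat, the 4-bit quantization of each interval, and the simulated fiducial times are all assumptions of this sketch, not the paper's exact encoding.

```python
import numpy as np

rng = np.random.default_rng(1)

def intervals_to_bits(fiducials, bits_per_interval=4):
    """Turn per-beat P, Q, R, S, T arrival times (ms) into a bit string.

    The low-order bits of each R-relative interval supply the entropy
    (an illustrative choice; the paper's encoding may differ).
    """
    p, q, r, s, t = fiducials.T
    feats = np.stack([np.diff(r),                   # RR intervals
                      (r - q)[1:], (s - r)[1:],     # RQ, RS intervals
                      (r - p)[1:], (t - r)[1:]],    # RP, RT intervals
                     axis=1)
    ints = feats.astype(np.int64) & ((1 << bits_per_interval) - 1)
    return ((ints[..., None] >> np.arange(bits_per_interval)) & 1).reshape(-1)

# Simulated beats: ~800 ms RR period with jittered peak offsets.
n_beats = 40
r_times = np.cumsum(800 + rng.normal(0, 30, n_beats))
offsets = np.array([-180.0, -40.0, 0.0, 40.0, 300.0])   # P, Q, R, S, T
fid = r_times[:, None] + offsets + rng.normal(0, 5, (n_beats, 5))
bs = intervals_to_bits(fid)
print(bs.size)  # 39 beat pairs x 5 intervals x 4 bits = 780 bits
```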
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.; ,
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Both average stacking and weighted stacking based on the conventional correlation function can produce false events caused by such noise. Wavelet transforms and higher-order statistics are very useful tools in modern signal processing. The multiresolution analysis in wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise, against which the conventional correlation function is of no use. Based on the theory of wavelet transforms and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction by weights that are calculated through high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
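A simplified numeric illustration of correlative weighted stacking follows. It uses conventional correlation weights on a toy NMO-corrected gather, not the high-order wavelet-domain statistics of HOCWS, and all signal and noise parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_traces, n_samp = 12, 400
t = np.arange(n_samp)
signal = np.exp(-0.5 * ((t - 200) / 12.0) ** 2)   # common reflection event

# NMO-corrected CMP gather: identical event, trace-dependent noise level.
noise_lvl = rng.uniform(0.2, 1.5, n_traces)
gather = signal + noise_lvl[:, None] * rng.standard_normal((n_traces, n_samp))

# Pilot trace and correlation-based weights: noisier traces correlate
# less with the pilot and therefore receive smaller weights.
pilot = gather.mean(axis=0)
w = np.array([np.corrcoef(tr, pilot)[0, 1] for tr in gather])
w = np.clip(w, 0.0, None)
w /= w.sum()

stack_weighted = w @ gather
worst = gather[np.argmax(noise_lvl)]
print(np.linalg.norm(stack_weighted - signal)
      < np.linalg.norm(worst - signal))  # stacking suppresses the noise
```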
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
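The contrast between random and systematic sample points can be illustrated on a smooth two-dimensional integrand. Here a Fibonacci lattice stands in for Conroy's closed symmetric point pattern (an assumption of this sketch, not Conroy's actual construction).

```python
import math
import numpy as np

rng = np.random.default_rng(3)
f = lambda x, y: np.exp(-(x ** 2 + y ** 2))

# Exact value of the double integral of f over the unit square.
exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 2

N = 610  # number of sample points (610 is a Fibonacci number)

# Monte Carlo: points distributed randomly over the square.
mc = f(rng.random(N), rng.random(N)).mean()

# Systematic sampling: a Fibonacci lattice whose points fill the square
# in a regular, closed pattern (377 is the preceding Fibonacci number).
k = np.arange(N)
lattice = f(k / N, (377 * k % N) / N).mean()

print(abs(mc - exact), abs(lattice - exact))
```

With the same number of points, the systematically placed samples typically integrate a smooth function far more accurately than the random ones, which is the motivation the abstract describes.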
Single-shot secure quantum network coding on butterfly network with free public communication
NASA Astrophysics Data System (ADS)
Owari, Masaki; Kato, Go; Hayashi, Masahito
2018-01-01
Quantum network coding on the butterfly network has been studied as a typical example of a quantum multiple-cast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple unicast setting, under restrictions on the eavesdropper's power. This protocol certainly transmits quantum states when there is no attack. We also show the secrecy, with shared randomness as an additional resource, when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through the public classical communication. Our protocol does not require a verification process, which ensures single-shot security.
ERIC Educational Resources Information Center
Kolovelonis, Athanasios; Goudas, Marios; Dermitzaki, Irini
2011-01-01
This study examined the effect of different goals (process, performance outcome, and multiple goals) and self-recording on self-regulation of learning a dart-throwing skill. Participants were 105 fifth and sixth graders who were randomly assigned to six (3 Goal type x 2 self-recording) experimental and one control group. Results showed a positive…
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.
Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen
2018-01-19
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can sharply reduce the coding and sampling times. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden of a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
Extreme values and fat tails of multifractal fluctuations
NASA Astrophysics Data System (ADS)
Muzy, J. F.; Bacry, E.; Kozhemyak, A.
2006-06-01
In this paper we discuss the problem of estimating the occurrence probability of extreme events for data drawn from some multifractal process. We also study the heavy (power-law) tail behavior of the probability density function associated with such data. We show that because of strong correlations, the standard extreme value approach is not valid and classical tail exponent estimators should be interpreted cautiously. Extreme statistics associated with multifractal random processes turn out to be characterized by non-self-averaging properties. Our considerations rely upon an analogy between random multiplicative cascades and the physics of disordered systems, and also on recent mathematical results about the so-called multifractal formalism. Applied to financial time series, our findings allow us to propose a unified framework that accounts for the observed multiscaling properties of return fluctuations, the volatility clustering phenomenon, and the observed “inverse cubic law” of the return pdf tails.
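The fat tails produced by random multiplicative cascades can be seen in a toy construction; the branching depth and log-normal weight parameters below are illustrative, and this is not the estimator discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Random multiplicative cascade: each of 2**n leaf values is the product
# of n independent log-normal weights along its branch of a binary tree.
def cascade(n_levels, sigma=0.3):
    w = np.ones(1)
    for _ in range(n_levels):
        w = np.repeat(w, 2)
        # Mean-one log-normal multipliers at this cascade level.
        w = w * np.exp(sigma * rng.standard_normal(w.size) - 0.5 * sigma ** 2)
    return w

w = cascade(14)                       # 16384 leaf values
gauss = rng.standard_normal(w.size)   # Gaussian sample for comparison
kurt = lambda x: ((x - x.mean()) ** 4).mean() / x.var() ** 2
print(kurt(w) > kurt(gauss))  # cascade tails are far heavier than Gaussian
```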
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-01-01
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
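The one-over-square-root noise reduction from averaging multiple samples is easy to verify numerically; the pixel level, noise level, and sample counts below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(4)
true_level = 0.73        # pixel output level (arbitrary units)
read_noise = 0.05        # per-conversion random noise, assumed Gaussian
n_frames, n_samp = 20000, 16

samples = true_level + read_noise * rng.standard_normal((n_frames, n_samp))
single = samples[:, 0]            # one A/D conversion per pixel read
averaged = samples.mean(axis=1)   # sixteen conversions averaged

# Averaging N independent samples reduces the noise std by sqrt(N).
ratio = single.std() / averaged.std()
print(f"{ratio:.1f}")  # prints 4.0, i.e. sqrt(16)
```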
Spatio-temporal Hotelling observer for signal detection from image sequences
Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.
2010-01-01
Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.
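A minimal sketch of the Hotelling observer on stacked spatio-temporal data vectors follows. The known static signal and the synthetic correlated-noise model are both illustrative assumptions, and the covariance is estimated from signal-absent training data as the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(5)
npix, nframes = 16, 4
d = npix * nframes                  # length of spatio-temporal data vector

# Known weak signal: a static bright pixel present in every frame.
s = np.zeros((nframes, npix))
s[:, 7] = 0.4
s = s.ravel()

# Spatio-temporally correlated noise via a random mixing matrix.
A = np.eye(d) + 0.2 * rng.standard_normal((d, d))
draw = lambda n: rng.standard_normal((n, d)) @ A.T

# Hotelling template w = K^{-1} s: a prewhitening matched filter built
# from the covariance K of signal-absent training frames.
K = np.cov(draw(4000), rowvar=False)
w = np.linalg.solve(K, s)

t_absent = draw(2000) @ w
t_present = (draw(2000) + s) @ w

# Detectability index d' of the linear test statistic.
dprime = (t_present.mean() - t_absent.mean()) / np.sqrt(
    0.5 * (t_present.var() + t_absent.var()))
print(dprime > 0.1)  # the template separates signal-present from absent
```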
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
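The process sensitivity index can be illustrated on a toy two-process system. The models, equal weights, and parameter ranges below are invented for illustration and are not those of the synthetic study; conditioning on a process fixes its (model, parameter) pair, so the index pools model and parametric uncertainty as described above.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy system: output = recharge * conductivity. Each process has two
# competing models with equal prior weight; each model has its own
# random parameter.
def sample_recharge(n):
    model = rng.integers(0, 2, n)
    a = rng.uniform(0.8, 1.2, n)
    return np.where(model == 0, 2.0 * a, 3.5 * a)

def sample_geology(n):
    model = rng.integers(0, 2, n)
    k = rng.uniform(0.9, 1.1, n)
    return np.where(model == 0, 1.0 * k, 1.2 * k)

# Process sensitivity index: PS = Var(E[y | process]) / Var(y), with the
# conditional mean estimated by Monte Carlo over the other process.
def cond_mean_var(fixed, other_sampler):
    return np.var([(v * other_sampler(200)).mean() for v in fixed])

var_y = np.var(sample_recharge(20000) * sample_geology(20000))
ps_recharge = cond_mean_var(sample_recharge(300), sample_geology) / var_y
ps_geology = cond_mean_var(sample_geology(300), sample_recharge) / var_y
print(ps_recharge > ps_geology)  # the recharge models disagree more
```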
Kuang, Zheng; Ji, Zhicheng
2018-01-01
Biological processes are usually associated with genome-wide remodeling of transcription driven by transcription factors (TFs). Identifying key TFs and their spatiotemporal binding patterns is indispensable to understanding how dynamic processes are programmed. However, most methods are designed to predict TF binding sites only. We present a computational method, dynamic motif occupancy analysis (DynaMO), to infer important TFs and their spatiotemporal binding activities in dynamic biological processes using chromatin profiling data from multiple biological conditions, such as time-course histone modification ChIP-seq data. In the first step, DynaMO predicts TF binding sites with a random forests approach. Next, and uniquely, DynaMO infers dynamic TF binding activities at predicted binding sites using their local chromatin profiles from multiple biological conditions. Another distinctive feature of DynaMO is its ability to identify key TFs in a dynamic process using clustering and enrichment analysis of dynamic TF binding patterns. Application of DynaMO to the yeast ultradian cycle, the mouse circadian clock, and human neural differentiation demonstrates its accuracy and versatility. We anticipate that DynaMO will be generally useful for elucidating transcriptional programs in dynamic processes.
CR-Calculus and adaptive array theory applied to MIMO random vibration control tests
NASA Astrophysics Data System (ADS)
Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.
2016-09-01
Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing the Jacobian matrix to be updated step by step. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
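The core of such gradient-based drive updates can be illustrated with a minimal sketch, not the paper's controller. For the cost J(d) = ||t - Hd||², CR-calculus gives the gradient with respect to the conjugate of d as -Hᴴ(t - Hd), so the adaptive update is d ← d + μ·Hᴴ(t - Hd). The 2x2 frequency-response matrix, target vector and step size below are illustrative values.

```python
def matvec(H, d):
    """Matrix-vector product for a list-of-lists complex matrix."""
    return [sum(H[i][j] * d[j] for j in range(len(d))) for i in range(len(H))]

def herm_vec(H, r):
    """Multiply by the Hermitian transpose H^H."""
    n = len(H[0])
    return [sum(H[i][j].conjugate() * r[i] for i in range(len(H)))
            for j in range(n)]

H = [[1.0 + 0.2j, 0.3 - 0.1j],
     [0.1 + 0.4j, 0.8 + 0.0j]]        # illustrative 2x2 frequency response
target = [1.0 + 0.0j, 0.0 + 0.5j]     # desired response at the control points

d = [0.0 + 0.0j, 0.0 + 0.0j]          # drive vector, updated iteratively
mu = 0.5                              # step size (small enough to converge)
for _ in range(200):
    r = [t - y for t, y in zip(target, matvec(H, d))]   # residual
    g = herm_vec(H, r)                                  # -grad wrt conj(d)
    d = [di + mu * gi for di, gi in zip(d, g)]

residual = sum(abs(t - y) ** 2 for t, y in zip(target, matvec(H, d)))
```

For this well-conditioned toy system the residual decays geometrically; in a real MIMO random test the same structure operates per frequency line on estimated spectral quantities.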
NASA Astrophysics Data System (ADS)
Marshall, Jonathan A.
1992-12-01
A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning implements a uniqueness constraint yet permits coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such a network would also be able to effectively represent the disparities of a cloud of points at random depths, as human observers can and unlike Prazdny's method.
Pu, Juan; Komvopoulos, Kyriakos
2014-06-01
Bilayer fibrous membranes of poly(l-lactic acid) (PLLA) were fabricated by electrospinning, using a parallel-disk mandrel configuration that resulted in the sequential deposition of a layer with fibers aligned across the two parallel disks and a layer with randomly oriented fibers, both layers deposited in a single process step. Membrane structure and fiber alignment were characterized by scanning electron microscopy and two-dimensional fast Fourier transform. Because of the intricacies of the generated electric field, bilayer membranes exhibited higher porosity than single-layer membranes consisting of randomly oriented fibers fabricated with a solid-drum collector. However, despite their higher porosity, bilayer membranes demonstrated generally higher elastic modulus, yield strength and toughness than single-layer membranes with random fibers. Bilayer membrane deformation at relatively high strain rates comprised multiple abrupt microfracture events characterized by discontinuous fiber breakage. Bilayer membrane elongation yielded excessive necking of the layer with random fibers and remarkable fiber stretching (on the order of 400%) in the layer with fibers aligned in the stress direction. In addition, fibers in both layers exhibited multiple localized necking, attributed to the nonuniform distribution of crystalline phases in the fibrillar structure. The high membrane porosity, good mechanical properties, and good biocompatibility and biodegradability of PLLA (demonstrated in previous studies) make the present bilayer membranes good scaffold candidates for a wide range of tissue engineering applications. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Mathematics of gravitational lensing: multiple imaging and magnification
NASA Astrophysics Data System (ADS)
Petters, A. O.; Werner, M. C.
2010-09-01
The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and a spacetime setting are treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.
A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps
NASA Astrophysics Data System (ADS)
Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.
2017-06-01
Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computations, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash functions. The message is divided into four parts, and each part is processed by a different 1D chaotic map unit yielding an intermediate hash code. The four codes are concatenated into two blocks, then each block is processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as the distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple, efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
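The split-and-iterate structure can be sketched with a toy (cryptographically weak) digest built on the logistic map, which is fully chaotic at r = 4. This is an illustration of the message-partitioning idea, not the paper's scheme: the seeding constants, iteration count and 32-bit truncation are arbitrary choices.

```python
def logistic(x, n):
    """Iterate the logistic map x -> 4x(1-x), chaotic at r = 4."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

def chunk_hash(chunk):
    """Absorb each byte into the map state, then truncate to 32 bits."""
    x = 0.25
    for b in chunk:
        # fold the byte into the state; 'or 0.1' avoids the fixed point 0
        x = logistic((x + (b + 1) / 257.0) % 1.0 or 0.1, 16)
    return int(x * (1 << 32)) & 0xFFFFFFFF

def chaotic_hash(message: bytes) -> str:
    """Toy 128-bit digest: split the message into 4 parts, hash each with a
    chaotic-map unit, and concatenate the intermediate codes."""
    q = max(1, -(-len(message) // 4))           # ceil(len / 4)
    parts = [message[i * q:(i + 1) * q] for i in range(4)]
    return "".join(f"{chunk_hash(p):08x}" for p in parts)
```

A one-byte change in the message reseeds one map unit and, through the chaotic iterations, changes that unit's 32-bit code with overwhelming probability, which is the sensitivity property the abstract tests.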
Stochastic scheduling on a repairable manufacturing system
NASA Astrophysics Data System (ADS)
Li, Wei; Cao, Jinhua
1995-08-01
In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later, when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs with constant due dates; (3) the weighted number of late jobs with exponentially distributed random due dates. These results generalize some previous ones.
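For objective (1), the classical policy in many single-machine stochastic models is a ratio rule: sequence jobs in decreasing order of weight over expected processing time (WSEPT). The sketch below shows that rule in its simplest form, without breakdown effects, as an illustration of the flavor of such policies rather than the paper's exact result; the job data are made up.

```python
def wsept_order(jobs):
    """jobs: list of (weight, expected_processing_time).
    Return job indices in WSEPT order: decreasing w / E[P]."""
    return sorted(range(len(jobs)), key=lambda i: -jobs[i][0] / jobs[i][1])

def expected_weighted_completion(jobs, order):
    """Expected weighted sum of completion times under a fixed sequence."""
    t, total = 0.0, 0.0
    for i in order:
        w, ep = jobs[i]
        t += ep                  # expected completion time accumulates E[P]
        total += w * t
    return total

# (weight, expected processing time) -- illustrative jobs
jobs = [(3.0, 2.0), (1.0, 4.0), (5.0, 1.0), (2.0, 3.0)]
best = expected_weighted_completion(jobs, wsept_order(jobs))
```

Any pairwise interchange argument shows a sequence violating the ratio order can be improved by swapping adjacent jobs, which is why the rule is optimal in the basic model.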
Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim
2009-01-01
This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078
Single-molecule dilution and multiple displacement amplification for molecular haplotyping.
Paul, Philip; Apgar, Josh
2005-04-01
Separate haploid analysis is frequently required for heterozygous genotyping to resolve phase ambiguity or confirm allelic sequence. We demonstrate a technique of single-molecule dilution followed by multiple strand displacement amplification to haplotype polymorphic alleles. Dilution of DNA to haploid equivalency, or a single molecule, is a simple method for separating di-allelic DNA. Strand displacement amplification is a robust method for non-specific DNA expansion that employs random hexamers and phage polymerase Phi29 for double-stranded DNA displacement and primer extension, resulting in high processivity and exceptional product length. Single-molecule dilution was followed by strand displacement amplification to expand separated alleles to microgram quantities of DNA for more efficient haplotype analysis of heterozygous genes.
ASSISTments Dataset from Multiple Randomized Controlled Experiments
ERIC Educational Resources Information Center
Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil
2016-01-01
In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.
Random matrices and condensation into multiple states
NASA Astrophysics Data System (ADS)
Sadeghi, Sina; Engel, Andreas
2018-03-01
In the present work, we employ methods from statistical mechanics of disordered systems to investigate static properties of condensation into multiple states in a general framework. We aim at showing how typical properties of random interaction matrices play a vital role in manifesting the statistics of condensate states. In particular, an analytical expression for the fraction of condensate states in the thermodynamic limit is provided that confirms the result of the mean number of coexisting species in a random tournament game. We also study the interplay between the condensation problem and zero-sum games with correlated random payoff matrices.
Sandroff, Brian M; Bollaert, Rachel E; Pilutti, Lara A; Peterson, Melissa L; Baynard, Tracy; Fernhall, Bo; McAuley, Edward; Motl, Robert W
2017-10-01
Mobility disability is a common, debilitating feature of multiple sclerosis (MS). Exercise training has been identified as an approach to improve MS-related mobility disability. However, exercise randomized controlled trials (RCTs) on mobility in MS have generally not selectively targeted those with the onset of irreversible mobility disability. The current multi-site RCT compared the efficacy of 6 months of supervised, multimodal exercise training with an active control condition for improving mobility, gait, physical fitness, and cognitive outcomes in persons with substantial MS-related mobility disability. 83 participants with substantial MS-related mobility disability underwent initial mobility, gait, fitness, and cognitive processing speed assessments and were randomly assigned to 6 months of supervised multimodal (progressive aerobic, resistance, and balance) exercise training (intervention condition) or stretching-and-toning activities (control condition). Participants completed the same outcome assessments halfway through and immediately following the 6-month study period. There were statistically significant improvements in six-minute walk performance (F(2,158) = 3.12, p = 0.05, ηp² = 0.04), peak power output (F(2,150) = 8.16, p < 0.01, ηp² = 0.10), and Paced Auditory Serial Addition Test performance (F(2,162) = 4.67, p = 0.01, ηp² = 0.05), but not gait outcomes, for those who underwent the intervention compared with those who underwent the control condition. This RCT provides novel, preliminary evidence that multimodal exercise training may improve endurance walking performance and cognitive processing speed, perhaps based on improvements in cardiorespiratory capacity, in persons with MS with substantial mobility disability. This is critical for informing the development of multi-site exercise rehabilitation programs in larger samples of persons with MS-related mobility disability. Copyright © 2017 Elsevier Inc. All rights reserved.
Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.
Anisimov, Vladimir V
2011-01-01
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
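The Poisson-gamma recruitment model and the centre-stratified block-permuted randomization it analyzes are both easy to simulate. The sketch below is an illustrative simulation, not the paper's analytic derivation; the number of centres, gamma parameters, recruitment window and block size are made-up values. It also exhibits the key property the paper exploits: under stratified block randomization, the imbalance contributed by each centre is bounded by half the block size.

```python
import random
import math

def poisson(lam):
    """Knuth's Poisson sampler -- fine for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_trial(n_centres=50, alpha=2.0, beta=1.5, months=12, block=4):
    """Centre rates ~ Gamma(alpha, 1/beta) patients/month; arrivals are
    Poisson; centre-stratified block-permuted randomization to 2 arms.
    'block' must be even."""
    arm_counts = [0, 0]
    for _ in range(n_centres):
        rate = random.gammavariate(alpha, 1.0 / beta)
        n = poisson(rate * months)           # patients recruited at the centre
        assigned = 0
        while assigned < n:
            blk = [0, 1] * (block // 2)      # balanced block...
            random.shuffle(blk)              # ...in permuted order
            for arm in blk[: n - assigned]:  # final block may be incomplete
                arm_counts[arm] += 1
            assigned += min(block, n - assigned)
    return arm_counts
```

Summing the per-centre imbalances (each at most block/2 from one incomplete block) gives a total imbalance bound of n_centres * block / 2, and the paper shows the typical imbalance is far smaller, approximately normal for many centres.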
Kinetics of Aggregation with Choice
Ben-Naim, Eli; Krapivsky, Paul
2016-12-01
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
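The merge rule described above is straightforward to simulate. The sketch below implements the basic "choose the larger candidate" dynamics on a monomer initial condition; the system size is illustrative, and mass conservation provides a simple sanity check.

```python
import random

def aggregate_with_choice(n=10000, steps=None):
    """Start from n monomers.  At each step pick a target cluster and two
    candidate clusters at random; the target merges with the LARGER of the
    two candidates (the 'choice').  Each step reduces the cluster count
    by one; by default run until two clusters remain."""
    clusters = [1] * n
    if steps is None:
        steps = n - 2
    for _ in range(steps):
        i, j, k = random.sample(range(len(clusters)), 3)
        target = i
        cand = j if clusters[j] >= clusters[k] else k
        clusters[target] += clusters[cand]   # merge target with larger candidate
        clusters.pop(cand)
    return clusters
```

Recording the size distribution at intermediate times (rather than running to completion) is how one would probe the anomalous small-size scaling and overpopulated tails reported in the abstract; swapping the comparison to pick the smaller candidate gives the complementary case.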
A random approach of test macro generation for early detection of hotspots
NASA Astrophysics Data System (ADS)
Lee, Jong-hyun; Kim, Chin; Kang, Minsoo; Hwang, Sungwook; Yang, Jae-seok; Harb, Mohammed; Al-Imam, Mohamed; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe
2016-03-01
Multiple-Patterning Technology (MPT) is still the preferred choice over EUV for advanced technology nodes, starting from the 20 nm node. On the way down to the 7 nm and 5 nm nodes, Self-Aligned Multiple Patterning (SAMP) appears to be one of the most effective multiple patterning techniques in terms of achieving a small pitch of printed lines on wafer, yet its yield is in question. Predicting and enhancing the yield in the early stages of technology development are some of the main objectives for creating test macros on test masks. While conventional yield ramp techniques for a new technology node have relied on using designs from previous technology nodes as a starting point to identify patterns for Design of Experiment (DoE) creation, these techniques are challenging to apply when introducing an MPT technique like SAMP that did not exist in previous nodes. This paper presents a new strategy for generating test structures based on random placement of unit patterns that can be combined into larger, more meaningful patterns. Specifications governing the relationships between those unit patterns can be adjusted to generate layout clips that look like realistic SAMP designs. A via chain can be constructed to connect the random DoE of SAMP structures through a routing layer to external pads for electrical measurement. These clips are decomposed according to the decomposition rules of the technology into the appropriate mandrel and cut masks. The decomposed clips can be tested through simulations, or electrically on silicon, to discover hotspots. The hotspots can be used to optimize the fabrication process and models, and can also serve as learning patterns for DFM deck development. By expanding the size of the randomly generated test structures, more hotspots can be detected. This should provide a faster way to enhance the yield of a new technology node.
Random walk in nonhomogeneous environments: A possible approach to human and animal mobility
NASA Astrophysics Data System (ADS)
Srokowski, Tomasz
2017-03-01
The random walk process in a nonhomogeneous medium, characterized by a Lévy stable distribution of jump length, is discussed. The width of this distribution depends on position: either the position before the jump or the one after it. In the latter case, the density slope is affected by the variable width and the variance may be finite; then all kinds of anomalous diffusion are predicted. In the former case, only the time characteristics are sensitive to the variable width. The corresponding Langevin equation with different interpretations of the multiplicative noise is discussed. The dependence of the distribution width on the position after the jump is interpreted in terms of cognitive abilities and related to such problems as migration in a human population and the foraging habits of animals.
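The distinction between evaluating the jump width before or after the jump can be sketched numerically. The sketch below uses Gaussian jumps as a stand-in for the paper's Lévy-stable jumps (the standard library has no stable sampler), and an illustrative width function; the post-point case is handled by a few fixed-point iterations of the implicit step.

```python
import random
import math

def g(x):
    """Position-dependent jump width -- an illustrative choice whose small
    slope keeps the post-point fixed-point iteration contractive."""
    return 0.1 / (1.0 + abs(x))

def walk(n_steps, post_point=False, x0=0.0):
    """Multiplicative-noise random walk.
    pre-point:  width evaluated at the position BEFORE the jump,
                x_new = x + g(x) * xi;
    post-point: width evaluated at the position AFTER the jump, obtained by
                iterating x' = x + g(x') * xi to a fixed point."""
    x = x0
    traj = [x]
    for _ in range(n_steps):
        xi = random.gauss(0.0, 1.0)
        if not post_point:
            x = x + g(x) * xi
        else:
            y = x
            for _ in range(20):      # fixed-point solve for the implicit step
                y = x + g(y) * xi
            x = y
        traj.append(x)
    return traj
```

Comparing histograms of pre-point and post-point trajectories illustrates how the interpretation of the multiplicative noise changes the stationary density slope, which is the effect the paper analyzes for the Lévy case.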
A random Q-switched fiber laser
Tang, Yulong; Xu, Jianqiu
2015-01-01
Extensive studies have been performed on random lasers in which multiple-scattering feedback is used to generate coherent emission. Q-switching and mode-locking are well-known routes for achieving high peak power output in conventional lasers. However, in random lasers, the ubiquitous random cavities that are formed by multiple scattering inhibit energy storage, making Q-switching impossible. In this paper, widespread Rayleigh scattering arising from the intrinsic micro-scale refractive-index irregularities of fiber cores is used to form random cavities along the fiber. The Q-factor of the cavity is rapidly increased by stimulated Brillouin scattering just after the spontaneous emission is enhanced by random cavity resonances, resulting in random Q-switched pulses with high brightness and high peak power. This report is the first observation of high-brightness random Q-switched laser emission and is expected to stimulate new areas of scientific research and applications, including encryption, remote three-dimensional random imaging and the simulation of stellar lasing. PMID:25797520
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has been popular for more than a decade in the Earth Sciences, because these methods make it possible to generate random fields that reproduce the highly complex spatial features of a conceptual model, the training image, where classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The latter is filled sequentially by visiting each of its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) results in a lower-resolution image, and by iterating this process a pyramid of images depicting fewer details at each level is built, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest resolution level, and then each level up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at every scale of the training image and thus generates more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited for MPS simulation techniques.
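The pyramid construction step can be sketched directly: smooth with an approximately Gaussian kernel, then downsample by two, and repeat. The 3x3 binomial kernel and edge handling below are illustrative choices, not the authors' exact convolution.

```python
def smooth(img):
    """3x3 binomial (approximately Gaussian) smoothing with edge clamping.
    img is a list of lists of floats."""
    h, w = len(img), len(img[0])
    kern = [(dy, dx, wgt)
            for dy, row in enumerate([[1, 2, 1], [2, 4, 2], [1, 2, 1]])
            for dx, wgt in enumerate(row)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy, dx, wgt in kern:
                yy = min(max(y + dy - 1, 0), h - 1)   # clamp at the borders
                xx = min(max(x + dx - 1, 0), w - 1)
                acc += wgt * img[yy][xx]
            out[y][x] = acc / 16.0                    # kernel weights sum to 16
    return out

def pyramid(img, levels):
    """Gaussian pyramid of a training image: each level is smoothed and
    downsampled by 2, so each level keeps fewer details than the last."""
    out = [img]
    for _ in range(levels - 1):
        s = smooth(out[-1])
        out.append([row[::2] for row in s[::2]])      # drop every other row/col
    return out
```

In the multi-resolution MPS scheme, direct sampling would simulate the coarsest level of such a pyramid first and condition each finer level on the one above it.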
Does hippotherapy effect use of sensory information for balance in people with multiple sclerosis?
Lindroth, Jodi L; Sullivan, Jessica L; Silkwood-Sherer, Debbie
2015-01-01
This case-series study aimed to determine if there were observable changes in sensory processing for postural control in individuals with multiple sclerosis (MS) following physical therapy using hippotherapy (HPOT), or changes in balance and functional gait. This pre-test non-randomized design study, with follow-up assessment at 6 weeks, included two females and one male (age range 37-60 years) with diagnoses of relapse-remitting or progressive MS. The intervention consisted of twelve 40-min physical therapy sessions which included HPOT twice a week for 6 weeks. Sensory organization and balance were assessed by the Sensory Organization Test (SOT) and Berg Balance Scale (BBS). Gait was assessed using the Functional Gait Assessment (FGA). Following the intervention period, all three participants showed improvements in SOT (range 1-8 points), BBS (range 2-6 points), and FGA (average 4 points) scores. These improvements were maintained or continued to improve at follow-up assessment. Two of the three participants no longer over-relied on vision and/or somatosensory information as the primary sensory input for postural control, suggesting improved use of sensory information for balance. The results indicate that HPOT may be a beneficial physical therapy treatment strategy to improve balance, functional gait, and enhance how some individuals with MS process sensory cues for postural control. Randomized clinical trials will be necessary to validate results of this study.
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgements are associated with the limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm of solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
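The "exchange method" for generating all permutations can be sketched with the classic Steinhaus-Johnson-Trotter (plain changes) algorithm, in which each successive permutation is obtained from the previous one by exchanging two adjacent alternatives. This is an illustrative stand-in; the paper's exchange method is not specified in the abstract.

```python
def plain_changes(n):
    """Yield all permutations of 1..n, each differing from the previous one
    by a single adjacent transposition (Steinhaus-Johnson-Trotter)."""
    perm = list(range(1, n + 1))
    dirs = [-1] * n                    # -1: element looks left, +1: right
    yield perm[:]
    while True:
        # find the largest 'mobile' element: one pointing at a smaller neighbour
        mobile = -1
        for i in range(n):
            j = i + dirs[i]
            if 0 <= j < n and perm[i] > perm[j]:
                if mobile == -1 or perm[i] > perm[mobile]:
                    mobile = i
        if mobile == -1:
            return                     # no mobile element: all permutations done
        m = perm[mobile]
        j = mobile + dirs[mobile]
        perm[mobile], perm[j] = perm[j], perm[mobile]   # exchange step
        dirs[mobile], dirs[j] = dirs[j], dirs[mobile]
        for i in range(n):             # reverse direction of larger elements
            if perm[i] > m:
                dirs[i] = -dirs[i]
        yield perm[:]
```

Checking approximation-consistency over all permutations of the alternatives then amounts to iterating this generator and testing the permuted interval comparison matrix at each step.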
SUNPLIN: simulation with uncertainty for phylogenetic investigations.
Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F
2013-11-15
Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.
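The expanded-tree approach described above, attaching missing species at random locations within their clade and then computing pairwise distances, can be sketched on a toy tree with unit branch lengths. The tree encoding, clade specification and BFS distance computation below are illustrative simplifications of what SUNPLIN implements in C++.

```python
import random
from collections import deque

def expand_tree(adj, clade_nodes, missing_species):
    """adj: undirected adjacency dict {node: set(neighbours)}, unit branch
    lengths.  Attach each missing species at a randomly chosen node of its
    clade, mimicking the expanded-tree approach."""
    for sp in missing_species:
        anchor = random.choice(clade_nodes)
        adj.setdefault(sp, set()).add(anchor)
        adj[anchor].add(sp)
    return adj

def distance_matrix(adj, leaves):
    """Pairwise path lengths between leaves via BFS (unit branch lengths)."""
    dist = {}
    for src in leaves:
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[src] = {t: d[t] for t in leaves}
    return dist
```

Repeating the expansion many times and storing one distance matrix per replicate is exactly the simulation procedure the paper accelerates: the spread of a statistic across replicates estimates the phylogenetic uncertainty.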
David, Ingrid; Garreau, Hervé; Balmisse, Elodie; Billon, Yvon; Canario, Laurianne
2017-01-20
Some genetic studies need to take into account correlations between traits that are repeatedly measured over time. Multiple-trait random regression models are commonly used to analyze repeated traits but suffer from several major drawbacks. In the present study, we developed a multiple-trait extension of the structured antedependence model (SAD) to overcome this issue and validated its usefulness by modeling the association between litter size (LS) and average birth weight (ABW) over parities in pigs and rabbits. The single-trait SAD model assumes that a random effect at time [Formula: see text] can be explained by the previous values of the random effect (i.e. at previous times). The proposed multiple-trait extension of the SAD model consists in adding a cross-antedependence parameter to the single-trait SAD model. This model can be easily fitted using ASReml and the OWN Fortran program that we have developed. In comparison with the random regression model, we used our multiple-trait SAD model to analyze the LS and ABW of 4345 litters from 1817 Large White sows and 8706 litters from 2286 L-1777 does over a maximum of five successive parities. For both species, the multiple-trait SAD model fitted the data better than the random regression model: the differences in AIC between the two models (AIC_random regression - AIC_SAD) were equal to 7 and 227 for pigs and rabbits, respectively. A similar pattern of heritability and correlation estimates was obtained for both species. Heritabilities were lower for LS (ranging from 0.09 to 0.29) than for ABW (ranging from 0.23 to 0.39). The general trend was a decrease of the genetic correlation for a given trait between more distant parities. Estimates of genetic correlations between LS and ABW were negative and ranged from -0.03 to -0.52 across parities. No correlation was observed between the permanent environmental effects, except between the permanent environmental effects of LS and ABW of the same parity, for which the estimate of the correlation was strongly negative (ranging from -0.57 to -0.67). We demonstrated that application of our multiple-trait SAD model is feasible for studying several traits with repeated measurements and showed that it provided a better fit to the data than the random regression model.
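The first-order antedependence idea with a cross term can be sketched by simulating a bivariate process in which each trait's random effect at parity t depends on both traits at parity t-1 plus an innovation. All coefficients below are illustrative, not estimates from the paper; the point is that such a process reproduces the qualitative pattern of correlations decaying with parity distance.

```python
import random

def simulate_sad(n_parities=5, n_animals=2000,
                 a=(0.6, 0.5), cross=(-0.2, -0.1), sd=(1.0, 0.4)):
    """Bivariate first-order SAD-style process (illustrative coefficients):
        u1_t = a1*u1_{t-1} + c1*u2_{t-1} + e1_t
        u2_t = a2*u2_{t-1} + c2*u1_{t-1} + e2_t
    Returns, per animal, a list of (u1, u2) pairs over parities."""
    records = []
    for _ in range(n_animals):
        u1 = random.gauss(0.0, sd[0])
        u2 = random.gauss(0.0, sd[1])
        traj = [(u1, u2)]
        for _ in range(n_parities - 1):
            u1, u2 = (a[0] * u1 + cross[0] * u2 + random.gauss(0.0, sd[0]),
                      a[1] * u2 + cross[1] * u1 + random.gauss(0.0, sd[1]))
            traj.append((u1, u2))
        records.append(traj)
    return records

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)
```

Computing corr between the effects of a trait at parity 1 and at parities 2 through 5 shows the decay with parity distance that the paper estimates from real pig and rabbit data.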
Pilot of a computer-based brief multiple-health behavior intervention for college students.
Moore, Michele J; Werch, Chudley E; Bian, Hui
2012-01-01
Given the multiple documented health risk behaviors college students engage in, and the dearth of effective programs addressing them, the authors developed a computer-based brief multiple-health behavior intervention. This study reports immediate outcomes and feasibility of a pilot of this program. Two hundred students attending a midsized university participated. Participants were randomly assigned to the intervention or control program, both delivered via computer. Immediate feedback was collected with the computer program. Results indicate that the intervention had an early positive impact on alcohol and cigarette use intentions, as well as on related constructs underlying the Behavior-Image Model specific to each of the 3 substances measured. Based on the implementation process, the program proved feasible to use and acceptable to the population. Results support the potential efficacy of the intervention to positively impact behavioral intentions and linkages between health-promoting and health-damaging behaviors among college students.
Coday, Mace; Richey, Phyllis; Thomas, Fridtjof; Tran, Quynh T; Terrell, Sarah B; Tylavsky, Fran; Miro, Danielle; Caufield, Margaret; Johnson, Karen C
2016-04-15
Multiple recruitment strategies are often needed to recruit an adequate number of participants, especially from hard to reach groups. Technology-based recruitment methods hold promise as a more robust form of reaching and enrolling historically hard to reach young adults. The TARGIT study is a randomized two-arm clinical trial in young adults using interactive technology, testing an efficacious proactive telephone Quitline versus the Quitline plus a behavioral weight management intervention focusing on smoking cessation and weight change. All randomized participants in the TARGIT study were required to be young adult smokers (18-35 years) who reported smoking at least 10 cigarettes per day, had a BMI < 40 kg/m², and were willing to stop smoking and not gain weight. Traditional recruitment methods were compared to technology-based strategies using standard descriptive statistics based on counts and proportions to describe the recruitment process from initial pre-screening (PS) to randomization into TARGIT. Participants at PS were majority Black (59.80%) and female (52.66%), were of normal weight or overweight (62.42% combined), averaged 29.5 years of age, and smoked 18.4 cigarettes per day on average. There were differences between men and women with respect to reasons for ineligibility during PS (p < 0.001; ignoring gender-specific pregnancy-related ineligibility). TARGIT experienced a disproportionate loss of minorities during recruitment as well as a prolonged recruitment period due to either study ineligibility or non-completion of screening activities. Recruitment into longer-term behavioral change intervention trials can be challenging, and multiple methods are often required to recruit hard to reach groups.
Kuang, Zheng; Ji, Zhicheng; Boeke, Jef D; Ji, Hongkai
2018-01-09
Biological processes are usually associated with genome-wide remodeling of transcription driven by transcription factors (TFs). Identifying key TFs and their spatiotemporal binding patterns is indispensable to understanding how dynamic processes are programmed. However, most methods are designed to predict TF binding sites only. We present a computational method, dynamic motif occupancy analysis (DynaMO), to infer important TFs and their spatiotemporal binding activities in dynamic biological processes using chromatin profiling data from multiple biological conditions, such as time-course histone modification ChIP-seq data. In the first step, DynaMO predicts TF binding sites with a random forests approach. Next, and uniquely, DynaMO infers dynamic TF binding activities at predicted binding sites using their local chromatin profiles from multiple biological conditions. Another distinctive feature of DynaMO is the identification of key TFs in a dynamic process through clustering and enrichment analysis of dynamic TF binding patterns. Application of DynaMO to the yeast ultradian cycle, the mouse circadian clock and human neural differentiation demonstrates its accuracy and versatility. We anticipate DynaMO will be generally useful for elucidating transcriptional programs in dynamic processes. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
ERIC Educational Resources Information Center
Smith, Douglas C.; Lanesskog, Deirdre; Cleeland, Leah; Motl, Robert; Weikert, Madeline; Dlugonski, Deirdre
2012-01-01
People with multiple sclerosis (MS) are likely to benefit from regular exercise, but physical inactivity is more common among people with MS than among the general population. This small randomized study evaluated whether motivational interviewing (MI) affects adherence to and personal experience in an exercise program. Inactive people with MS…
ERIC Educational Resources Information Center
van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas
2007-01-01
The performance of five simple multiple imputation methods for dealing with missing data were compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmark, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…
NASA Astrophysics Data System (ADS)
Biteau, J.; Giebels, B.
2012-12-01
Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability in multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
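The kinematic argument can be sketched numerically. For an emitting region moving with bulk Lorentz factor Gamma at a random viewing angle theta, the Doppler factor is delta = 1 / (Gamma * (1 - beta*cos(theta))), and the observed flux of identical emitters scales roughly as delta**4 (the exact exponent depends on the spectral index, which is ignored here). Isotropic orientations then yield a strongly skewed, Pareto-like flux distribution dominated by rare aligned emitters. The Lorentz factor below is an arbitrary illustrative value, not one fitted to PKS 2155-304.

```python
import numpy as np

rng = np.random.default_rng(1)

gamma = 10.0                              # assumed bulk Lorentz factor
beta = np.sqrt(1 - 1 / gamma**2)
cos_theta = rng.uniform(-1, 1, 200000)    # isotropic viewing angles

delta = 1.0 / (gamma * (1 - beta * cos_theta))  # Doppler factor per emitter
flux = delta**4                            # boosted flux of identical emitters

# Heavy, Pareto-like tail: the mean is dominated by rare aligned emitters
print(np.mean(flux), np.median(flux))
```

The enormous mean-to-median ratio is the signature of the Pareto-like law discussed in the abstract; summing many such contributions produces the near-log-normal total flux.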
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, V P; Komyshan, A O; Smaliuk, A P
2013-12-31
It is shown that reabsorption of the luminescence radiation in the range where it overlaps the absorption spectrum, and the subsequent reemission to a longer-wavelength range, may noticeably affect the process of stimulated Raman scattering (SRS) in polymethine dyes in multiple scattering media (MSM). This is related to the fact that SRS in such media occurs jointly with random lasing (RL), which favors SRS and forms with it a unified nonlinear process. Reemission into the long-wavelength spectral range amplified in the MSM causes the RL spectrum to shift to longer wavelengths and initiates the long-wavelength band of RL, in which the main part of the lasing energy is concentrated. This weakens or completely stops the SRS if the band is beyond the range of possible spectral localisation of Stokes lines. This process depends on the efficiency of light scattering, dye concentration, temperature and pump intensity; hence, there exist optimal values of these parameters for obtaining SRS in MSM.
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde
2015-10-01
This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria, suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural networks are synchronized under mixed time-delays. The considered impulsive effects allow synchronization even when the transition probabilities are only partly known. Besides, a multiple integral approach is also proposed to strengthen the analysis of the Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delays, and the impulsive synchronization criteria are then obtained as a solvable set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
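As a much-simplified illustration of the interplay between Bernoulli random coupling and impulsive control (a scalar toy model, not the paper's LMI-based mixed-delay analysis), consider a synchronization error that contracts whenever the random coupling happens to be active and is additionally corrected by periodic impulses; all gains below are assumed values chosen only to make the error decay visible.

```python
import numpy as np

rng = np.random.default_rng(2)

a, c = 1.02, 0.2      # slightly unstable drift and coupling gain (assumed)
p = 0.7               # Bernoulli probability that the random coupling is active
rho = 0.5             # impulsive correction gain, applied every 10 steps

e = 1.0               # synchronization error between two coupled nodes
errors = [e]
for k in range(1, 201):
    xi = rng.random() < p           # Bernoulli coupling indicator
    e = (a - c * xi) * e            # contractive when coupled, drifting otherwise
    if k % 10 == 0:
        e *= rho                    # impulsive synchronization event
    errors.append(abs(e))

print(errors[-1])
```

Even though the uncoupled dynamics alone would diverge, the combination of frequent-enough random coupling and impulses drives the error to zero, which is the qualitative phenomenon the paper's criteria certify rigorously.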
Optimizing Urine Processing Protocols for Protein and Metabolite Detection.
Siddiqui, Nazema Y; DuBois, Laura G; St John-Williams, Lisa; Thompson, J Will; Grenier, Carole; Burke, Emily; Fraser, Matthew O; Amundsen, Cindy L; Murphy, Susan K
In urine, factors such as timing of voids and duration at room temperature (RT) may affect the quality of recovered protein and metabolite data. Additives may aid with detection, but can add more complexity in sample collection or analysis. We aimed to identify the optimal urine processing protocol for clinically obtained urine samples that allows for the highest protein and metabolite yields with minimal degradation. Healthy women provided multiple urine samples during the same day. Women collected their first morning (1st AM) void and another "random void". Random voids were aliquoted with: 1) no additive; 2) boric acid (BA); 3) protease inhibitor (PI); or 4) both BA + PI. Of these aliquots, some were immediately stored at 4°C, and some were left at RT for 4 hours. Proteins and individual metabolites were quantified, normalized to creatinine concentrations, and compared across processing conditions. Sample pools corresponding to each processing condition were analyzed using mass spectrometry to assess protein degradation. Ten Caucasian women between 35 and 65 years of age provided paired 1st morning and random voided urine samples. Normalized protein concentrations were slightly higher in 1st AM than in random "spot" voids. The addition of BA did not significantly change protein yields, while PI significantly improved normalized protein concentrations, regardless of whether samples were immediately cooled or left at RT for 4 hours. In pooled samples, there were minimal differences in protein degradation under the various conditions we tested. In metabolite analyses, there were significant differences in individual amino acids based on the timing of the void. For comparative translational research using urine, information about void timing should be collected and standardized. For urine samples processed in the same day, BA does not appear to be necessary, while the addition of PI enhances protein yields, regardless of 4°C or RT storage temperature.
NASA Astrophysics Data System (ADS)
Wang, Hongfeng; Fu, Yaping; Huang, Min; Wang, Junwei
2016-03-01
Operation process design is one of the key issues in the manufacturing and service sectors. As a typical operation process, scheduling with consideration of the deteriorating effect has been widely studied; however, the current literature has studied only a single function requirement and has rarely considered the multiple function requirements that are critical for a real-world scheduling process. In this article, two function requirements are involved in the design of a scheduling process with consideration of the deteriorating effect and are formulated as the two objectives of a mathematical programming model. A novel multiobjective evolutionary algorithm is proposed to solve this model, combining three strategies: a multiple population scheme, a rule-based local search method and an elitist preserve strategy. To validate the proposed model and algorithm, a series of randomly generated instances is tested, and the experimental results indicate that the model is effective and that the proposed algorithm achieves satisfactory performance, outperforming other state-of-the-art multiobjective evolutionary algorithms, such as nondominated sorting genetic algorithm II and multiobjective evolutionary algorithm based on decomposition, on all test instances.
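To make the deteriorating effect concrete, here is a minimal single-machine sketch using a positional (aging) deterioration rule, a common form in this literature though not necessarily the one used in the article, which evaluates two objectives (makespan and total completion time) for every job order; the base times and aging rate are invented for illustration.

```python
from itertools import permutations

def objectives(order, base, a=0.3):
    """Single-machine schedule with positional deterioration: a job
    placed in position r takes base[j] * r**a time units, where a > 0
    is an assumed aging rate. Returns (makespan, total completion time)."""
    t, total = 0.0, 0.0
    for r, j in enumerate(order, start=1):
        t += base[j] * r**a
        total += t
    return t, total

base = [3.0, 1.0, 4.0, 2.0]
results = {o: objectives(o, base) for o in permutations(range(len(base)))}
best_mk = min(results, key=lambda o: results[o][0])
print(best_mk, results[best_mk])
```

Because the positional multipliers increase with r, pairing the largest base times with the earliest positions minimizes makespan (a rearrangement-inequality argument), while total completion time generally favors a different order, which is exactly why the two requirements form a genuine multiobjective problem.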
RCT of a Psychological Intervention for Patients With Cancer: I. Mechanisms of Change
Andersen, Barbara L.; Shelby, Rebecca A.; Golden-Kreutz, Deanna M.
2008-01-01
Little is known about the therapeutic processes contributing to the efficacy of psychological interventions for patients with cancer. Data from a randomized clinical trial yielding robust biobehavioral and health effects (B. L. Andersen et al., 2004, 2007) were used to examine associations between process variables, treatment utilization, and outcomes. Novel findings emerged. Patients were highly satisfied with the treatment, but their higher levels of felt support (group cohesion) covaried with lower distress and fewer symptoms. Also, specific treatment strategies were associated with specific outcomes, including lower distress, improved dietary habits, reduced symptomatology, and higher chemotherapy dose intensity. These data provide a comprehensive test of multiple therapeutic processes and mechanisms for biobehavioral change with an intervention including both intensive and maintenance phases. PMID:18085909
Analysis of bacterial migration. 2: Studies with multiple attractant gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, I.; Frymier, P.D.; Hahn, C.M.
1995-02-01
Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances that benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus in order to develop predictive mathematical models describing bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single-stimulus responses. Theoretical predictions with the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and [alpha]-methylaspartate in the stopped-flow diffusion chamber assay. Results eliminate the high-sensitivity model and favor simple additivity over the maximum-signal model. None of the simple models, however, accurately predicts the observed behavior, suggesting that a more complex model with more steps in the signal processing mechanism is required to predict responses to multiple stimuli.
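One plausible toy reading of the three integration rules can be sketched as follows; the exact functional forms in the study differ, so the signal combinations, baseline tumbling probability, and the saturating gain in the high-sensitivity branch are all assumptions made only to show how the rules diverge for co- and antidirectional gradients.

```python
def tumble_probability(signals, model, p0=0.5):
    """Tumbling probability under three hypothetical signal-integration
    rules (toy versions; the paper's exact expressions differ).
    `signals` are per-stimulus reductions in tumbling probability
    (positive when moving up an attractant gradient)."""
    if model == "additive":            # simple additivity: signals sum
        s = sum(signals)
    elif model == "maximum":           # maximum signal: strongest stimulus wins
        s = max(signals, key=abs)
    elif model == "high_sensitivity":  # assumed saturating, near all-or-none response
        s = max(abs(x) for x in signals) * (1 if sum(signals) >= 0 else -1)
        s = min(s * 5, p0)             # assumed gain of 5, capped at p0
    else:
        raise ValueError(model)
    return min(max(p0 - s, 0.0), 1.0)

# Codirectional gradients reinforce; antidirectional gradients compete
codir = [0.10, 0.15]
antidir = [0.10, -0.15]
for m in ("additive", "maximum", "high_sensitivity"):
    print(m, tumble_probability(codir, m), tumble_probability(antidir, m))
```

The point of the comparison is that the three rules make distinguishable predictions precisely in the antidirectional case, which is why the stopped-flow experiments with opposing D-fucose and alpha-methylaspartate gradients can discriminate among them.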
Gandolfi, Marialuisa; Munari, Daniele; Geroin, Christian; Gajofatto, Alberto; Benedetti, Maria Donata; Midiri, Alessandro; Fontana, Carla; Picelli, Alessandro; Waldner, Andreas; Smania, Nicola
2015-10-01
Impaired sensory integration contributes to balance disorders in patients with multiple sclerosis (MS). The objective of this paper is to compare the effects of sensory integration balance training with those of conventional rehabilitation on balance disorders, perceived balance confidence, quality of life, fatigue, frequency of falls, and sensory integration processing in a large sample of patients with MS. This single-blind, randomized, controlled trial involved 80 outpatients with MS (EDSS: 1.5-6.0) and subjective symptoms of balance disorders. The experimental group (n = 39) received specific training to improve central integration of afferent sensory inputs; the control group (n = 41) received conventional rehabilitation (15 treatment sessions of 50 minutes each). Before treatment, after treatment, and at one month post-treatment, patients were evaluated by a blinded rater using the Berg Balance Scale (BBS), Activities-specific Balance Confidence Scale (ABC), Multiple Sclerosis Quality of Life-54, Fatigue Severity Scale (FSS), number of falls and the Sensory Organization Balance Test (SOT). The experimental training program produced greater improvements than the control training on the BBS (p < 0.001), the FSS (p < 0.002), number of falls (p = 0.002) and the SOT (p < 0.05). Specific training to improve central integration of afferent sensory inputs may ameliorate balance disorders in patients with MS. Clinical Trial Registration (NCT01040117). © The Author(s), 2015.
Calibration of Predictor Models Using Multiple Validation Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This paper presents a framework for calibrating computational models using data from several, possibly dissimilar, validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters beyond the validation domain.
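The IPM idea of a minimal-spread interval containing all observations can be sketched in its simplest form: fit a center function, then choose the smallest constant spread s such that the band [f(x) - s, f(x) + s] encloses every data point. The paper optimizes over parameter sets rather than fixing the center by least squares, so this constant-spread version is only a simplified stand-in, on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
y = 2.0 * x + 0.5 + rng.uniform(-0.2, 0.2, x.size)  # synthetic observations

# Least-squares center, then the smallest constant spread that makes
# the interval [f(x)-s, f(x)+s] contain every observation.
theta = np.polyfit(x, y, 1)
f = np.polyval(theta, x)
s = np.max(np.abs(y - f))

lower, upper = f - s, f + s
print(theta, s)
```

By construction every observation lies inside the band, and s is the tightest constant spread achievable for this center, which is the flavor of the minimal-spread objective the IPM formulation optimizes more generally.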
ERIC Educational Resources Information Center
Sunyono; Yuanita, L.; Ibrahim, M.
2015-01-01
The aim of this research is identify the effectiveness of a multiple representation-based learning model, which builds a mental model within the concept of atomic structure. The research sample of 108 students in 3 classes is obtained randomly from among students of Mathematics and Science Education Studies using a stratified random sampling…
ERIC Educational Resources Information Center
Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel
2012-01-01
A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
Robust Tomography using Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas
2013-03-01
Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.
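The core of the RB protocol referenced here is fitting the exponential survival decay F(m) = A p^m + B over sequence length m and converting p into an average error rate; interleaved comparisons then take ratios of such decay constants. Below is a minimal single-qubit sketch of the standard fit on simulated data (the decay parameters and the noise level are assumed; the interleaved ratio step is omitted).

```python
import numpy as np

rng = np.random.default_rng(4)

p_true, A, B = 0.9, 0.5, 0.5            # assumed RB decay parameters
m = np.arange(1, 101)                    # Clifford sequence lengths
survival = A * p_true**m + B + rng.normal(0, 0.002, m.size)

# Estimate the asymptote from long sequences, then fit the log-linear decay
B_est = survival[-10:].mean()
mask = survival - B_est > 0.01           # keep points above the noise floor
slope, _ = np.polyfit(m[mask], np.log(survival[mask] - B_est), 1)
p_est = np.exp(slope)

d = 2                                    # single-qubit Hilbert-space dimension
avg_error = (1 - p_est) * (d - 1) / d    # average error rate per Clifford
print(p_est, avg_error)
```

The fit is insensitive to state-preparation and measurement errors because those are absorbed into A and B, which is the robustness property the abstract emphasizes.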
Afrasiabifar, Ardashir; Mehri, Zahra; Javad Sadat, Saied; Ghaffarian Shirazi, Hamid Reza
2016-01-01
Background Orem’s self-care model is a nursing model that was introduced with the purpose of improving the self-care of individuals, especially patients suffering from chronic diseases. Objectives To determine the effect of Orem’s self-care model on fatigue in multiple sclerosis patients. Patients and Methods This research involved a clinical trial. Sixty-three multiple sclerosis patients at the vice-chancellor in treatment affairs of Yasuj University of Medical Sciences were selected by nonrandom sampling but were allocated to the two groups by random allocation. In the intervention group, Orem’s model was applied during six sessions of 45 - 60 minutes in length, and the process continued for 1 month. The data were collected 1 week before and 7 weeks after the end of the intervention using the Orem’s self-care model-based assessment form and the fatigue severity scale, the validity and reliability of which have been confirmed. Results Before the intervention, 11.11% of the participants had a good knowledge of self-care. In addition, self-care willingness and skills were observed in 76.19% and 4.76% of participants, respectively. The mean difference in fatigue reduced significantly in the intervention group after the intervention (P < 0.05). After the intervention, a statistically significant difference was observed in the mean difference of fatigue between the two groups (P < 0.05). Conclusions Orem’s self-care model is significantly effective in reducing the fatigue of multiple sclerosis patients. PMID:27781119
Edwards, Jerri D.; Ruva, Christine L.; O’Brien, Jennifer L.; Haley, Christine B.; Lister, Jennifer J.
2013-01-01
The purpose of these analyses was to examine mediators of the transfer of cognitive speed of processing training to improved everyday functional performance (Edwards, Wadley, Vance, Roenker, & Ball, 2005). Cognitive speed of processing and visual attention (as measured by the Useful Field of View Test; UFOV) were examined as mediators of training transfer. Secondary data analyses were conducted from the Staying Keen in Later Life (SKILL) study, a randomized cohort study including 126 community dwelling adults 63 to 87 years of age. In the SKILL study, participants were randomized to an active control group or cognitive speed of processing training (SOPT), a non-verbal, computerized intervention involving perceptual practice of visual tasks. Prior analyses found significant effects of training as measured by the UFOV and Timed Instrumental Activities of Daily Living (TIADL) Tests. Results from the present analyses indicate that speed of processing for a divided attention task significantly mediated the effect of SOPT on everyday performance (e.g., TIADL) in a multiple mediation model accounting for 91% of the variance. These findings suggest that everyday functional improvements found from SOPT are directly attributable to improved UFOV performance, speed of processing for divided attention in particular. Targeting divided attention in cognitive interventions may be important to positively affect everyday functioning among older adults. PMID:23066808
Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2016-01-01
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or consider only limit-of-detection censoring. We employ multiple imputation, in conjunction with semiparametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the nonparametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations and compare its operating characteristics to those of the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
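The imputation mechanics can be sketched on simulated data: each right-censored covariate value is replaced by a draw from an estimated conditional distribution above its censoring point, the analysis is repeated M times, and the estimates are pooled. The sketch below uses a crude empirical tail draw as a stand-in for the paper's Kaplan-Meier/Cox-based sampling, and pools a simple mean rather than fitting the logistic regression.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 500
x_true = rng.exponential(10, n)        # covariate, e.g. age-of-onset scale
c = rng.exponential(15, n)             # random censoring times
observed = np.minimum(x_true, c)
censored = x_true > c                  # True where only a lower bound is known

def impute_once(obs, cens, rng):
    """Replace each censored value with a random observed value that
    exceeds it -- an empirical stand-in for sampling from the
    Kaplan-Meier conditional distribution used in the paper."""
    x = obs.copy()
    uncens = obs[~cens]
    for i in np.flatnonzero(cens):
        tail = uncens[uncens > obs[i]]
        if tail.size:
            x[i] = rng.choice(tail)
        # else: leave at the censoring value (largest observation censored)
    return x

M = 20
estimates = [impute_once(observed, censored, rng).mean() for _ in range(M)]
mi_mean = np.mean(estimates)           # pool the M completed-data estimates
naive_mean = observed.mean()           # substitution at the bound; biased low
print(mi_mean, naive_mean, x_true.mean())
```

Even this crude version moves the estimate toward the truth relative to naive substitution, illustrating why multiple imputation reduces the bias that complete case and substitution methods suffer under censoring.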
NASA Astrophysics Data System (ADS)
Starshynov, I.; Paniagua-Diaz, A. M.; Fayard, N.; Goetschy, A.; Pierrat, R.; Carminati, R.; Bertolotti, J.
2018-04-01
The propagation of monochromatic light through a scattering medium produces speckle patterns in reflection and transmission, and the apparent randomness of these patterns prevents direct imaging through thick turbid media. Yet, since elastic multiple scattering is fundamentally a linear and deterministic process, information is not lost but distributed among many degrees of freedom that can be resolved and manipulated. Here, we demonstrate experimentally that the reflected and transmitted speckle patterns are robustly correlated, and we unravel all the complex and unexpected features of this fundamentally non-Gaussian and long-range correlation. In particular, we show that it is preserved even for opaque media with thickness much larger than the scattering mean free path, proving that information survives the multiple scattering process and can be recovered. The existence of correlations between the two sides of a scattering medium opens up new possibilities for the control of transmitted light without any feedback from the target side, but using only information gathered from the reflected speckle.
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of the logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images differ and, together with the original POMs and the logistic map coefficient, serve as the main keys in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
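The POM generation step can be sketched directly: iterate the logistic map x_{n+1} = r*x_n*(1 - x_n) in its chaotic regime and map the sequence to phases in [0, 2*pi). The initial value and map coefficient act as keys; the specific values, burn-in length and mask size below are illustrative assumptions.

```python
import numpy as np

def logistic_pom(shape, x0=0.3141, r=3.9999, burn=1000):
    """Generate a phase-only mask from the logistic map
    x_{n+1} = r * x_n * (1 - x_n); (x0, r) act as the secret keys.
    r close to 4 keeps the map in its chaotic regime."""
    n = int(np.prod(shape))
    x = x0
    for _ in range(burn):              # discard transients
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    phases = 2 * np.pi * seq           # map chaotic sequence to [0, 2*pi)
    return np.exp(1j * phases.reshape(shape))

pom = logistic_pom((64, 64))
print(pom.shape, np.abs(pom).max())
```

Because the map is deterministic, the same keys regenerate the identical mask (so only the keys need storing, not the mask), while a tiny change in x0 yields a completely different mask, which is the sensitivity the scheme's security relies on.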
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level suitable for a first undergraduate exposure to the subject. A signal framework for defining a random process is provided, and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is then given. The important case of defining a white noise random process concludes the paper.
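One of the listed processes, the random telegraph signal, makes a compact worked example: a ±1 signal that switches sign at Poisson rate lambda has autocorrelation R(tau) = exp(-2*lambda*|tau|), whose Fourier transform gives its Lorentzian power spectral density. The discretization step and rate below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

lam, dt, n = 1.0, 0.01, 400000         # switching rate, time step, samples
flips = rng.random(n) < lam * dt       # Poisson switching events per step
s = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)  # +/-1 telegraph signal

def autocorr(sig, lag):
    return np.mean(sig[:-lag] * sig[lag:]) if lag else 1.0

taus = np.array([0, 25, 50, 100, 200])     # lags in samples
r_est = np.array([autocorr(s, k) for k in taus])
r_theory = np.exp(-2 * lam * taus * dt)    # R(tau) = exp(-2*lambda*tau)
print(np.round(r_est, 3), np.round(r_theory, 3))
```

The close match between the empirical and theoretical autocorrelation is the link exploited in the spectral characterisation: the power spectral density follows by the Wiener-Khinchin theorem.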
Negative probability of random multiplier in turbulence
NASA Astrophysics Data System (ADS)
Bai, Xuan; Su, Weidong
2017-11-01
The random multiplicative process (RMP), proposed over 50 years ago, is a convenient phenomenological ansatz for the turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful, since all of the known scaling laws can be accommodated in this model. To our knowledge, however, a direct extraction of the probability density function (PDF) of the RM has been lacking, because the deconvolution involved is ill-posed. Nevertheless, with progress in the study of inverse problems, the situation can be changed. Using new regularization techniques, we recover for the first time the PDFs of the RMs in some turbulent flows. All the consistent results from various methods point to a striking observation: the PDFs can attain negative values in some intervals; this can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several respects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).
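The forward RMP is easy to simulate: after n cascade steps the small-scale fluctuation is a product of n independent multipliers, so its logarithm is a sum whose moments grow linearly in n. The log-normal multiplier below is one common ansatz, chosen only to show the forward mechanics; the paper's point is that the multiplier PDF actually extracted from data need not be a legitimate (non-negative) density.

```python
import numpy as np

rng = np.random.default_rng(7)

n_steps, n_real = 12, 50000
# Assumed log-normal multipliers (a standard RMP ansatz, not the
# empirically extracted PDF discussed in the abstract).
m = rng.lognormal(mean=-0.05, sigma=0.3, size=(n_real, n_steps))
eps = m.prod(axis=1)                  # small-scale fluctuation after cascade

log_eps = np.log(eps)
print(log_eps.mean(), log_eps.std())  # ~ n*mu and sqrt(n)*sigma
```

The mean and standard deviation of log(eps) match n*mu and sqrt(n)*sigma, the additivity in log-space that makes deconvolving the multiplier PDF from measured fluctuations an ill-posed inverse problem.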
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2017-08-01
Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who have the propensity to experience the event of interest). However, when fitted to the same data, the paper demonstrates that the choice of model for the movers can lead to widely varying conclusions on a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models to real and simulated data sets. The models for the movers are based on Poisson processes with patient level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation level random effects are used to account for time varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.
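The movers/stayers structure can be illustrated with the simplest such model, a zero-inflated Poisson: a fraction pi of patients (stayers) never accrue damaged joints, while movers accrue counts with rate lambda. The sketch below simulates data and recovers both quantities by the method of moments, using mean = (1-pi)*lambda and P(0) = pi + (1-pi)*exp(-lambda); the paper's models are far richer (random effects, dynamic covariates), so this is only a minimal stand-in with assumed parameter values.

```python
import numpy as np

rng = np.random.default_rng(8)

n, pi_true, lam_true = 20000, 0.4, 2.0
stayer = rng.random(n) < pi_true
counts = np.where(stayer, 0, rng.poisson(lam_true, n))

# Method-of-moments for a zero-inflated Poisson movers/stayers model:
# mean = (1-pi)*lambda and P(0) = pi + (1-pi)*exp(-lambda).
mu, z = counts.mean(), np.mean(counts == 0)

lo, hi = mu + 1e-9, 50.0
for _ in range(200):                   # bisection: the moment equation
    mid = 0.5 * (lo + hi)              # g(lambda) is monotone increasing
    g = (1 - mu / mid) + (mu / mid) * np.exp(-mid) - z
    lo, hi = (lo, mid) if g > 0 else (mid, hi)
lam_est = 0.5 * (lo + hi)
pi_est = 1 - mu / lam_est
print(lam_est, pi_est)
```

Note how pi is identified only through the excess of zeros over the Poisson prediction; with a different movers model the same zero fraction implies a different stayer proportion, which is the sensitivity the paper warns about.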
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
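The droplet-loading statistics underlying such systems are commonly modeled as Poisson; a tiny sketch (with illustrative loading values) shows why the single-cell yield peaks when the mean loading is one cell per droplet.

```python
import math

def p_single(lam):
    """Poisson probability that a droplet encapsulates exactly one cell
    when the mean cell loading per droplet is lam."""
    return lam * math.exp(-lam)

# Scanning the loading concentration shows the single-cell yield peaks
# at lam = 1, where at most ~36.8% of droplets hold exactly one cell.
loads = [0.25, 0.5, 1.0, 2.0, 4.0]
yields = {lam: p_single(lam) for lam in loads}
best = max(yields, key=yields.get)
print(best, yields[best])
```

The ceiling of roughly 37% single-cell droplets under pure Poisson loading is why statistical modeling and control of the encapsulation process matter.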
Sebastião, Emerson; McAuley, Edward; Shigematsu, Ryosuke; Motl, Robert W
2017-09-01
We propose a randomized controlled trial (RCT) examining the feasibility of square-stepping exercise (SSE) delivered as a home-based program for older adults with multiple sclerosis (MS). We will assess feasibility in the four domains of process, resources, management and scientific outcomes. The trial will recruit older adults (aged 60 years and older) with mild-to-moderate MS-related disability who will be randomized into intervention or attention control conditions. Participants will complete assessments before and after completion of the conditions delivered over a 12-week period. Participants in the intervention group will have biweekly meetings with an exercise trainer in the Exercise Neuroscience Research Laboratory and receive verbal and visual instruction on step patterns for the SSE program. Participants will receive a mat for home-based practice of the step patterns, an instruction manual, and a logbook and pedometer for monitoring compliance. Compliance will be further monitored through weekly scheduled Skype calls. This feasibility study will inform future phase II and III RCTs that determine the actual efficacy and effectiveness of a home-based exercise program for older adults with MS.
Constitutional chromothripsis involving the critical region of 9q21.13 microdeletion syndrome.
Genesio, Rita; Fontana, Paolo; Mormile, Angela; Casertano, Alberto; Falco, Mariateresa; Conti, Anna; Franzese, Adriana; Mozzillo, Enza; Nitsch, Lucio; Melis, Daniela
2015-01-01
Chromothripsis is a biological phenomenon first observed in tumors and soon afterwards described in congenital disorders. The principle of the chromothripsis process is the local shattering of chromosomes into pieces and their rebuilding in a random order. Congenital chromothripsis rearrangements often involve reciprocal rearrangements on multiple chromosomes and have been described as a cause of contiguous gene syndromes. We hypothesize that chromothripsis could be responsible for the known 9q21.13 microdeletion syndrome, causing a composite phenotype with additional features. The case reported is a 16-year-old female with a complex genomic rearrangement of chromosome 9 including the critical region of the 9q21.13 microdeletion syndrome. The patient presents with a platelet disorder and thyroid dysfunction in addition to the classical neurobehavioral phenotype of the syndrome. The presence of multiple rearrangements on the same chromosome 9 and the rebuilding of the chromosome in a random order suggest that the rearrangement could originate from an event of chromothripsis. To our knowledge this is the first report of congenital chromothripsis involving chromosome 9. Furthermore, this is the only case of 9q21.13 microdeletion syndrome due to chromothripsis.
NASA Astrophysics Data System (ADS)
Eule, S.; Friedrich, R.
2013-03-01
Dynamical processes exhibiting non-Poissonian kinetics with nonexponential waiting times are frequently encountered in nature. Examples are biochemical processes like gene transcription which are known to involve multiple intermediate steps. However, often a second process, obeying Poissonian statistics, affects the first one simultaneously, such as the degradation of mRNA in the above example. The aim of the present article is to provide a concise treatment of such random systems which are affected by regular and non-Poissonian kinetics at the same time. We derive the governing master equation and provide a controlled approximation scheme for this equation. The simplest approximation leads to generalized reaction rate equations. For a simple model of gene transcription we solve the resulting equation and show how the time evolution is influenced significantly by the type of waiting time distribution assumed for the non-Poissonian process.
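The interplay described above can be sketched with a renewal production process (gamma-distributed waits standing in for the multiple intermediate transcription steps) combined with Poissonian degradation; all parameters are illustrative.

```python
import random

rng = random.Random(7)

# Transcription events follow a renewal process with gamma-distributed
# (non-exponential) waiting times, mimicking multiple intermediate steps;
# each mRNA is then degraded after an exponential (Poissonian) lifetime.
shape, scale = 3.0, 0.5      # mean waiting time = shape * scale = 1.5
gamma_deg = 1.0              # degradation rate, mean lifetime = 1.0
T = 5000.0

t, total_lifetime = 0.0, 0.0
while True:
    t += rng.gammavariate(shape, scale)
    if t >= T:
        break
    lifetime = rng.expovariate(gamma_deg)
    total_lifetime += min(lifetime, T - t)

# Time-averaged copy number; by Little's law it converges to
# (production rate) * (mean lifetime) = (1/1.5) * 1.0, regardless of the
# shape of the waiting-time distribution.
mean_copies = total_lifetime / T
print(mean_copies)
```

The mean is insensitive to the waiting-time shape, but higher moments and transients are not, which is where a master-equation treatment such as the one derived in the article becomes necessary.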
Subtyping cognitive profiles in Autism Spectrum Disorder using a Functional Random Forest algorithm.
Feczko, E; Balba, N M; Miranda-Dominguez, O; Cordova, M; Karalunas, S L; Irwin, L; Demeter, D V; Hill, A P; Langhorst, B H; Grieser Painter, J; Van Santen, J; Fombonne, E J; Nigg, J T; Fair, D A
2018-05-15
DSM-5 Autism Spectrum Disorder (ASD) comprises a set of neurodevelopmental disorders characterized by deficits in social communication and interaction and repetitive behaviors or restricted interests, and may both affect and be affected by multiple cognitive mechanisms. This study attempts to identify and characterize cognitive subtypes within the ASD population using our Functional Random Forest (FRF) machine learning classification model. This model trained a traditional random forest model on measures from seven tasks that reflect multiple levels of information processing. 47 ASD diagnosed and 58 typically developing (TD) children between the ages of 9 and 13 participated in this study. Our RF model was 72.7% accurate, with 80.7% specificity and 63.1% sensitivity. Using the random forest model, the FRF then measures the proximity of each subject to every other subject, generating a distance matrix between participants. This matrix is then used in a community detection algorithm to identify subgroups within the ASD and TD groups, and revealed 3 ASD and 4 TD putative subgroups with unique behavioral profiles. We then examined differences in functional brain systems between diagnostic groups and putative subgroups using resting-state functional connectivity magnetic resonance imaging (rsfcMRI). Chi-square tests revealed a significantly greater number of between group differences (p < .05) within the cingulo-opercular, visual, and default systems as well as differences in inter-system connections in the somato-motor, dorsal attention, and subcortical systems. Many of these differences were primarily driven by specific subgroups suggesting that our method could potentially parse the variation in brain mechanisms affected by ASD. Copyright © 2017. Published by Elsevier Inc.
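The proximity-then-cluster idea behind the FRF can be sketched with a toy "forest" of random decision stumps; this is a simplified stand-in for the authors' trained random forest, using made-up one-dimensional data.

```python
import random

rng = random.Random(0)

# Toy 1-D data: two well-separated groups standing in for subjects.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]

# A "forest" of random stumps: each stump is a random threshold, and two
# samples are proximal in that stump if they land on the same side.
n_stumps = 2000
thresholds = [rng.uniform(-0.5, 5.7) for _ in range(n_stumps)]

def proximity(i, j):
    """Fraction of stumps in which samples i and j share a leaf."""
    same = sum((data[i] > t) == (data[j] > t) for t in thresholds)
    return same / n_stumps

within = proximity(0, 2)    # two samples from the same group
between = proximity(2, 3)   # samples from different groups
print(within, between)
```

A proximity matrix built this way can be thresholded into a graph and fed to a community detection algorithm, mirroring the FRF pipeline of distance matrix followed by subgroup discovery.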
Prescription-induced jump distributions in multiplicative Poisson processes.
Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos
2011-06-01
Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.
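The jump-distribution transformation mentioned above can be illustrated with a toy multiplicative Poisson-jump system; the decay rate, jump count and amplitude range below are arbitrary choices, not taken from the paper.

```python
import math
import random

rng = random.Random(3)

# Toy multiplicative Poisson-jump dynamics with deterministic decay
# dx/dt = -x between jumps.  Under an Ito-type rule a jump multiplies the
# state by (1 + a); under a Stratonovich/Marcus-type rule it multiplies
# the state by exp(b).  Choosing b = ln(1 + a) maps one prescription onto
# the other, echoing the jump-distribution transformation in the paper.
T = 10.0
jump_times = sorted(rng.uniform(0, T) for _ in range(20))
amplitudes = [rng.uniform(0.1, 0.5) for _ in jump_times]

def evolve(jump_factor):
    x, t_prev = 1.0, 0.0
    for t, a in zip(jump_times, amplitudes):
        x *= math.exp(-(t - t_prev))   # deterministic decay between jumps
        x *= jump_factor(a)            # prescription-dependent jump action
        t_prev = t
    return x * math.exp(-(T - t_prev))

x_ito = evolve(lambda a: 1.0 + a)
x_marcus = evolve(lambda a: math.exp(math.log(1.0 + a)))
print(x_ito, x_marcus)
```

With the transformed jump sizes the two trajectories coincide path by path, which is the sample-path version of the equivalence stated for linear multiplicative noise.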
Garner, Bryan R.; Smith, Jane Ellen; Meyers, Robert J.; Godley, Mark D.
2010-01-01
Multiple evidence-based treatments for adolescents with substance use disorders are available; however, the diffusion of these treatments in practice remains minimal. A dissemination and implementation model incorporating research-based training components for simultaneous implementation across 33 dispersed sites and over 200 clinical staff is described. Key elements for the diffusion of the Adolescent Community Reinforcement Approach and Assertive Continuing Care were: (a) three years of funding to support local implementation; (b) comprehensive training, including a 3.5 day workshop, bi-weekly coaching calls, and ongoing performance feedback facilitated by a web tool; (c) a clinician certification process; (d) a supervisor certification process to promote long-term sustainability; and (e) random fidelity reviews after certification. Process data are summarized for 167 clinicians and 64 supervisors. PMID:21547241
Effects of absorption on multiple scattering by random particulate media: exact results.
Mishchenko, Michael I; Liu, Li; Hovenier, Joop W
2007-10-01
We employ the numerically exact superposition T-matrix method to perform extensive computations of electromagnetic scattering by a volume of discrete random medium densely filled with increasingly absorbing as well as non-absorbing particles. Our numerical data demonstrate that increasing absorption diminishes and nearly extinguishes certain optical effects such as depolarization and coherent backscattering and increases the angular width of coherent backscattering patterns. This result corroborates the multiple-scattering origin of such effects and further demonstrates the heuristic value of the concept of multiple scattering even in application to densely packed particulate media.
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
Catalogue identifier: AEOI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: UTK license.
No. of lines in distributed program, including test data, etc.: 167900
No. of bytes in distributed program, including test data, etc.: 1422058
Distribution format: tar.gz
Programming language: C and CUDA.
Computer: Any PC or workstation with an NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
RAM: 512 MB to 732 MB main memory on host CPU (depending on the data type of random numbers); 512 MB GPU global memory.
Classification: 4.13, 6.5.
Nature of problem: Many computational science applications consume large numbers of random numbers. For example, Monte Carlo simulations can consume limitless random numbers as long as computing resources support them. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generation of independent streams of random numbers using graphics processing units (GPUs).
Solution method: Multiple copies of random number generators on GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
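The notion of independent, reproducible parallel streams can be sketched with independently seeded generators; this is an analogy using Python's Mersenne Twister, not the SPRNG parameterization itself.

```python
import random

def make_streams(n_streams, base_seed=1234):
    """Create n independent, reproducible pseudorandom streams.
    SPRNG/GASPRNG parameterize distinct generators; here, distinctly
    seeded Mersenne Twister instances serve as a simplified stand-in."""
    return [random.Random(base_seed + 1_000_003 * k) for k in range(n_streams)]

# Each "process" (or GPU thread block) owns one stream and can draw from
# it without coordinating with the others.
streams = make_streams(4)
draws = [[s.random() for _ in range(3)] for s in streams]

# Reproducibility: rebuilding the streams yields identical sequences.
draws_again = [[s.random() for _ in range(3)] for s in make_streams(4)]
print(draws == draws_again)
```

Reproducibility per stream is what lets GASPRNG guarantee output identical to SPRNG while the streams execute concurrently on GPU hardware.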
Lublin, Fred D; Bowen, James D; Huddlestone, John; Kremenchutzky, Marcelo; Carpenter, Adam; Corboy, John R; Freedman, Mark S; Krupp, Lauren; Paulo, Corri; Hariri, Robert J; Fischkoff, Steven A
2014-11-01
Infusion of PDA-001, a preparation of mesenchymal-like cells derived from full-term human placenta, is a new approach in the treatment of patients with multiple sclerosis. This safety study aimed to rule out the possibility of paradoxical exacerbation of disease activity by PDA-001 in patients with multiple sclerosis. This was a phase 1b, multicenter, randomized, double-blind, placebo-controlled, 2-dose ranging study including patients with relapsing-remitting multiple sclerosis or secondary progressive multiple sclerosis. The study was conducted at 6 sites in the United States and 2 sites in Canada. Patients were randomized 3:1 to receive 2 low-dose infusions of PDA-001 (150×10(6) cells) or placebo, given 1 week apart. After completion of this cohort, subsequent patients received high-dose PDA-001 (600×10(6) cells) or placebo. Monthly brain magnetic resonance imaging scans were performed. The primary end point was ruling out the possibility of paradoxical worsening of MS disease activity, monitored monthly for six months using Cutter's rule (≥5 new gadolinium lesions on 2 consecutive scans) on brain magnetic resonance imaging, together with the frequency of multiple sclerosis relapse. Ten patients with relapsing-remitting multiple sclerosis and 6 with secondary progressive multiple sclerosis were randomly assigned to treatment: 6 to low-dose PDA-001, 6 to high-dose PDA-001, and 4 to placebo. No patient met Cutter's rule. One patient receiving high-dose PDA-001 had an increase in T2 and gadolinium lesions and in Expanded Disability Status Scale score during a multiple sclerosis flare 5 months after receiving PDA-001. No other patient had an increase in Expanded Disability Status Scale score of >0.5, and most had stable or decreasing Expanded Disability Status Scale scores. With high-dose PDA-001, 1 patient experienced a grade 1 anaphylactoid reaction and 1 had grade 2 superficial thrombophlebitis.
Other adverse events were mild to moderate and included headache, fatigue, infusion site reactions, and urinary tract infection. PDA-001 infusions were safe and well tolerated in relapsing-remitting multiple sclerosis and secondary progressive multiple sclerosis patients. No paradoxical worsening of lesion counts was noted with either dose. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Cho, Hee Ju; Chung, Jae Hoon; Jo, Jung Ki; Kang, Dong Hyuk; Cho, Jeong Man; Yoo, Tag Keun; Lee, Seung Wook
2013-12-01
Randomized controlled trials are one of the most reliable resources for assessing the effectiveness and safety of medical treatments. Low-quality randomized controlled trials carry a large bias that can ultimately impair the reliability of their conclusions. The present study aimed to evaluate the quality of randomized controlled trials published in International Journal of Urology by using multiple quality assessment tools. Randomized controlled trial articles published in International Journal of Urology were found using the PubMed MEDLINE database, and qualitative analysis was carried out with three distinct assessment tools: the Jadad scale, the van Tulder scale and the Cochrane Collaboration Risk of Bias Tool. The quality of randomized controlled trials was analyzed by publication year, type of subjects, intervention, presence of funding and whether an institutional review board reviewed the study. A total of 68 randomized controlled trial articles were published among a total of 1399 original articles in International Journal of Urology. Among these randomized controlled trials, 10 (2.70%) were from 1994 to 1999, 23 (4.10%) were from 2000 to 2005 and 35 (4.00%) were from 2006 to 2011 (P = 0.494). On assessment with the Jadad and van Tulder scales, the number and percentage of high-quality randomized controlled trials increased over time. Studies that had institutional review board review, funding resources or that were carried out in multiple institutions had an increased percentage of high-quality articles. The number and percentage of high-quality randomized controlled trials published in International Journal of Urology have increased over time. Furthermore, randomized controlled trials with funding resources, institutional review board review or those carried out in multiple institutions have been found to be of higher quality compared with others not presenting these features. © 2013 The Japanese Urological Association.
Schaafsma, Joanna D; van der Graaf, Yolanda; Rinkel, Gabriel J E; Buskens, Erik
2009-12-01
The lack of a standard methodology in diagnostic research impedes adequate evaluation before implementation of constantly developing diagnostic techniques. We discuss the methodology of diagnostic research and underscore the relevance of decision analysis in the process of evaluation of diagnostic tests. Overview and conceptual discussion. Diagnostic research requires a stepwise approach comprising assessment of test characteristics followed by evaluation of added value, clinical outcome, and cost-effectiveness. These multiple goals are generally incompatible with a randomized design. Decision-analytic models provide an important alternative through integration of the best available evidence. Thus, critical assessment of clinical value and efficient use of resources can be achieved. Decision-analytic models should be considered part of the standard methodology in diagnostic research. They can serve as a valid alternative to diagnostic randomized clinical trials (RCTs).
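A decision-analytic model in this spirit can be as small as an expected-utility comparison of "treat all" versus "test then treat"; every number below (prevalence, test characteristics, utilities) is hypothetical.

```python
def expected_utility_treat_all(prev, u_treated_sick, u_treated_well):
    """Everyone is treated: the sick gain, the well bear treatment harms."""
    return prev * u_treated_sick + (1 - prev) * u_treated_well

def expected_utility_test_then_treat(prev, sens, spec,
                                     u_treated_sick, u_untreated_sick,
                                     u_treated_well, u_untreated_well):
    """Only test-positives are treated; weigh all four outcome branches."""
    return (prev * sens * u_treated_sick
            + prev * (1 - sens) * u_untreated_sick
            + (1 - prev) * (1 - spec) * u_treated_well
            + (1 - prev) * spec * u_untreated_well)

# Hypothetical utilities (1 = perfect health) and test characteristics.
prev, sens, spec = 0.10, 0.95, 0.90
eu_all = expected_utility_treat_all(prev, 0.9, 0.95)
eu_test = expected_utility_test_then_treat(prev, sens, spec,
                                           0.9, 0.4, 0.95, 1.0)
print(eu_all, eu_test)
```

Extending such a tree with costs and evidence-based probabilities yields exactly the kind of cost-effectiveness comparison the authors propose as an alternative to diagnostic RCTs.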
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Li, Dewei; Xi, Yugeng
2013-07-01
This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
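The random input delay described above can be sketched as a discrete-time finite-state Markov chain; the three-state transition matrix here is hypothetical.

```python
import random

# Hypothetical 3-state Markov chain for the random input delay
# (states = delay of 0, 1 or 2 steps); rows sum to one.
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

def stationary(P, iters=200):
    """Power iteration: propagate a distribution until it stops moving."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

def simulate(P, steps, rng):
    """Empirical state frequencies from a long sample path."""
    counts = [0] * len(P)
    s = 0
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[s]):
            acc += p
            if r < acc:
                s = j
                break
        counts[s] += 1
    return [c / steps for c in counts]

pi = stationary(P)
freq = simulate(P, 200_000, random.Random(11))
print(pi, freq)
```

In the MPC setting, the expected performance objective is a weighted sum over such delay modes, with the weights governed by exactly this chain.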
Performance analysis of replication ALOHA for fading mobile communications channels
NASA Technical Reports Server (NTRS)
Yan, Tsun-Yee; Clare, Loren P.
1986-01-01
This paper describes an ALOHA random access protocol for fading communications channels. A two-state Markov model is used for the channel error process to account for the channel's fading memory. The ALOHA protocol is modified to send multiple contiguous copies of a message at each transmission attempt. Both pure and slotted ALOHA channels are considered. The analysis is applicable to fading environments where the channel memory is short compared to the propagation delay. It is shown that replication can achieve smaller delay and, in noisy conditions, can also improve throughput.
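The replication idea can be sketched over a two-state (Gilbert-style) Markov channel; contention between users is ignored here, and all parameters are hypothetical.

```python
import random

def simulate(p_gb, p_bg, copies, n_messages, rng):
    """Gilbert two-state channel: a Good slot delivers a copy, a Bad slot
    destroys it.  A message succeeds if any of its `copies` contiguous
    slots is Good.  p_gb = P(Good -> Bad), p_bg = P(Bad -> Good)."""
    good = True
    successes = 0
    for _ in range(n_messages):
        ok = False
        for _ in range(copies):
            if good:
                ok = True
            good = (rng.random() >= p_gb) if good else (rng.random() < p_bg)
        successes += ok
    return successes / n_messages

rng = random.Random(5)
single = simulate(0.3, 0.2, 1, 50_000, rng)
triple = simulate(0.3, 0.2, 3, 50_000, rng)
print(single, triple)
```

Because bad states persist, contiguous copies are not independent trials; replication still helps, but its benefit depends on the channel memory, which is the trade-off the paper analyzes.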
NASA Astrophysics Data System (ADS)
Li, Yun; Jiang, Hai; Lun, Zhiyuan; Wang, Yijiao; Huang, Peng; Hao, Hao; Du, Gang; Zhang, Xing; Liu, Xiaoyan
2016-04-01
Degradation behaviors in the high-k/metal gate stacks of nMOSFETs are investigated by three-dimensional (3D) kinetic Monte-Carlo (KMC) simulation with multiple trap coupling. Novel microscopic mechanisms are simultaneously considered in a compound system: (1) trapping/detrapping from/to substrate/gate; (2) trapping/detrapping to other traps; (3) trap generation and recombination. Interacting traps can contribute to random telegraph noise (RTN), bias temperature instability (BTI), and trap-assisted tunneling (TAT). Simulation results show that trap interaction induces higher probability and greater complexity in trapping/detrapping processes and greatly affects the characteristics of RTN and BTI. Different types of trap distribution cause largely different behaviors of RTN, BTI, and TAT. TAT currents caused by multiple trap coupling are sensitive to the gate voltage. Moreover, trap generation and recombination have great effects on the degradation of HfO2-based nMOSFETs under a large stress.
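A single-trap version of RTN can be sketched as a two-state process with exponential capture and emission times; the time constants below are hypothetical.

```python
import random

def rtn_occupancy(tau_c, tau_e, T, rng):
    """Two-state random telegraph noise: an empty trap captures a carrier
    after an Exp(1/tau_c) wait and emits it after an Exp(1/tau_e) wait.
    Returns the fraction of time the trap is occupied."""
    t, occupied, t_occ = 0.0, False, 0.0
    while t < T:
        dwell = rng.expovariate(1.0 / (tau_e if occupied else tau_c))
        dwell = min(dwell, T - t)
        if occupied:
            t_occ += dwell
        t += dwell
        occupied = not occupied
    return t_occ / T

rng = random.Random(2)
tau_c, tau_e = 1.0, 3.0
frac = rtn_occupancy(tau_c, tau_e, 200_000.0, rng)
# Detailed balance gives occupancy tau_e / (tau_c + tau_e).
print(frac, tau_e / (tau_c + tau_e))
```

A full KMC simulation like the one in the paper couples many such traps, so that capture and emission rates of one trap depend on the occupancy of its neighbors.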
[Comparative study of cone-beam CT and spiral CT in measuring the length of styloid process].
Song, Y S; Liu, L F
2018-06-19
Objective: To compare measurements of the length of the styloid process between high-resolution spiral CT and cone-beam CT (CBCT). Methods: Five specimens (including 5 pairs of styloid processes) were selected randomly from the Anatomy Laboratory of the Otolaryngology Department, and all specimens underwent high-resolution spiral CT and cone-beam CT retrospectively. With the original DICOM data, the styloid processes were displayed in a single plane by the multiplanar reconstruction technique, and the length of the styloid processes of each specimen was then measured separately with the software NNT Viewer (for CBCT) or OsiriX (for spiral CT). Results: The lengths of the styloid processes measured by CBCT and spiral CT were (26.8±5.5) mm and (27.1±5.4) mm respectively, with no statistically significant difference between the two groups. Conclusion: For measuring the length of the styloid process, CBCT has the same clinical value as high-resolution spiral CT.
Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David
2014-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software. PMID:25221420
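The value of correlates of missingness can be sketched with a simulation in which missingness depends on a fully observed covariate; inverse-probability weighting stands in here for the saturated-correlates and multiple-imputation approaches discussed above, and all parameters are illustrative.

```python
import random

rng = random.Random(9)
n = 100_000

# y depends on a fully observed correlate z; y is missing more often when
# z is large, so the observed y's are a biased (downward-shifted) sample.
z = [rng.gauss(0, 1) for _ in range(n)]
y = [zi + rng.gauss(0, 0.5) for zi in z]
observed = [rng.random() > (0.8 if zi > 0 else 0.2) for zi in z]

true_mean = sum(y) / n
naive = sum(yi for yi, o in zip(y, observed) if o) / sum(observed)

# Weighting each observed y by the inverse of its observation probability
# (known here, modeled from z in practice) restores an approximately
# unbiased estimate: this is why correlates of the missingness belong in
# the analysis.
def p_obs(zi):
    return 0.2 if zi > 0 else 0.8

ipw = (sum(yi / p_obs(zi) for yi, zi, o in zip(y, z, observed) if o)
       / sum(1 / p_obs(zi) for zi, o in zip(z, observed) if o))
print(true_mean, naive, ipw)
```

Comparing the naive complete-case estimate with the corrected one for several assumed missingness models is exactly the kind of sensitivity analysis the paper advocates.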
Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.
Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen
2013-02-01
In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.
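Randomly occurring controllers can be sketched as consensus updates over Bernoulli-switched links; the ring topology, gain and switching probability below are hypothetical.

```python
import random

rng = random.Random(4)

# Four agents on a ring; at each step every link is independently active
# with probability p (Bernoulli switching), and active neighbors are
# averaged into each agent's state with gain eps.
n, p, eps, steps = 4, 0.5, 0.3, 300
edges = [(i, (i + 1) % n) for i in range(n)]
x = [3.0, -1.0, 0.5, 2.0]
spread0 = max(x) - min(x)

for _ in range(steps):
    active = [e for e in edges if rng.random() < p]
    nxt = x[:]
    for i, j in active:
        nxt[i] += eps * (x[j] - x[i])   # symmetric, so the mean is conserved
        nxt[j] += eps * (x[i] - x[j])
    x = nxt

spread = max(x) - min(x)
print(spread0, spread)
```

Even though any individual link is absent half the time, the disagreement contracts in mean square, which is the phenomenon the paper's Lyapunov criteria formalize; the switching probability p visibly sets the convergence speed.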
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
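The logit benchmark used in the comparison has a simple closed form; the sketch below computes multinomial logit choice probabilities for hypothetical option utilities.

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit: P(i) = exp(v_i) / sum_j exp(v_j).  This is a
    random utility model: each option's utility is a deterministic value
    plus i.i.d. Gumbel noise, which yields exactly this closed form."""
    m = max(utilities)                      # stabilize the exponentials
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Three consumer products with hypothetical deterministic utilities.
probs = logit_choice_probs([1.0, 2.0, 0.5])
print(probs)
```

The constant ratio between any two options' probabilities (independence of irrelevant alternatives) is precisely the property that the context effects captured by MDFT violate, which is why the Study 2 choice sets could discriminate between the models.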
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2018-02-01
In psoriatic arthritis, it is important to understand the joint activity (represented by swelling and pain) and damage processes because both are related to severe physical disability. The paper aims to provide a comprehensive investigation into both processes occurring over time, in particular their relationship, by specifying a joint multistate model at the individual hand joint level, which also accounts for many of their important features. As there are multiple hand joints, such an analysis will be based on the use of clustered multistate models. Here we consider an observation level random-effects structure with dynamic covariates and allow for the possibility that a subpopulation of patients is at minimal risk of damage. Such an analysis is found to provide further understanding of the activity-damage relationship beyond that provided by previous analyses. Consideration is also given to the modelling of mean sojourn times and jump probabilities. In particular, a novel model parameterization which allows easily interpretable covariate effects to act on these quantities is proposed.
EDITORIAL: Special section on foliage penetration
NASA Astrophysics Data System (ADS)
Fiddy, M. A.; Lang, R.; McGahan, R. V.
2004-04-01
Waves in Random Media was founded in 1991 to provide a forum for papers dealing with electromagnetic and acoustic waves as they propagate and scatter through media or objects having some degree of randomness. This is a broad charter since, in practice, all scattering obstacles and structures have roughness or randomness, often on the scale of the wavelength being used to probe them. Including this random component leads to some quite different methods for describing propagation effects, for example, when propagating through the atmosphere or the ground. This special section on foliage penetration (FOPEN) focuses on the problems arising from microwave propagation through foliage and vegetation. Applications of such studies include the estimation for forest biomass and the moisture of the underlying soil, as well as detecting objects hidden therein. In addition to the so-called `direct problem' of trying to describe energy propagating through such media, the complementary inverse problem is of great interest and much harder to solve. The development of theoretical models and associated numerical algorithms for identifying objects concealed by foliage has applications in surveillance, ranging from monitoring drug trafficking to targeting military vehicles. FOPEN can be employed to map the earth's surface in cases when it is under a forest canopy, permitting the identification of objects or targets on that surface, but the process for doing so is not straightforward. There has been an increasing interest in foliage penetration synthetic aperture radar (FOPEN or FOPENSAR) over the last 10 years and this special section provides a broad overview of many of the issues involved. The detection, identification, and geographical location of targets under foliage or otherwise obscured by poor visibility conditions remains a challenge. 
In particular, a trade-off must often be accepted: reducing the deleterious effects of multiple scattering from leaves is typically associated with a significant loss in target resolution. Foliage is more or less transparent to some radar frequencies, but longer wavelengths found in the VHF (30 to 300 MHz) and UHF (300 MHz to 3 GHz) portions of the microwave spectrum have more chance of penetrating foliage than do wavelengths at the X band (8 to 12 GHz). Reflection and multiple scattering occur for some other frequencies and models of the processes involved are crucial. Two topical reviews can be found in this issue, one on the microwave radiometry of forests (page S275) and another describing ionospheric effects on space-based radar (page S189). Subsequent papers present new results on modelling coherent backscatter from forests (page S299), modelling forests as discrete random media over a random interface (page S359) and interpreting ranging scatterometer data from forests (page S317). Cloude et al present research on identifying targets beneath foliage using polarimetric SAR interferometry (page S393) while Treuhaft and Siqueira use interferometric radar to describe forest structure and biomass (page S345). Vecchia et al model scattering from leaves (page S333) and Semichaevsky et al address the problem of the trade-off between increasing wavelength, reduction in multiple scattering, and target resolution (page S415).
Systematic review of multidisciplinary rehabilitation in patients with multiple trauma.
Khan, F; Amatya, B; Hoffman, K
2012-01-01
Multiple trauma is a cause of significant disability in adults of working age. Despite the implementation of trauma systems for improved coordination and organization of care, rehabilitation services are not yet routinely considered integral to trauma care processes. MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature, Allied and Complementary Medicine, Physiotherapy Evidence Database, Latin American and Caribbean Literature on Health Sciences and Cochrane Library databases were searched up to May 2011 for randomized clinical trials, as well as observational studies, reporting outcomes of injured patients following multidisciplinary rehabilitation that addressed functional restoration and societal reintegration based on the International Classification of Functioning, Disability and Health. No randomized and/or controlled clinical trials were identified. Fifteen observational studies involving 2386 participants with injuries were included. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach assessed methodological quality as 'poor' in all studies, with selection and observer bias. Although patients with low functional scores showed improvement after rehabilitation, they were unable to resume their pretrauma level of activity. Their functional ability was significantly associated with motor independence on admission and early acute rehabilitation, which contributed to a shorter hospital stay. Injury location, age, co-morbidity and education predicted long-term functional consequences. Trauma care systems were associated with reduced mortality. The gaps in evidence include: rehabilitation settings, components, intensity, duration and types of therapy, and long-term outcomes for survivors of multiple trauma. Rehabilitation is an expensive resource and the evidence to support its justification is needed urgently. The issues in study design and research methodology in rehabilitation are challenging. 
Opportunities to prioritize trauma rehabilitation, disability management and social reintegration of multiple injury survivors are discussed.
Autocorrelation peaks in congruential pseudorandom number generators
NASA Technical Reports Server (NTRS)
Neuman, F.; Merrick, R. B.
1976-01-01
The complete correlation structure of several congruential pseudorandom number generators (PRNGs) of the same type and small cycle length was studied to address the problem of congruential PRNGs almost repeating themselves at intervals smaller than their cycle lengths during simulation of bandpass-filtered normal random noise. Maximum-period multiplicative and mixed congruential generators were studied, with inferences drawn from examination of several tractable members of a class of random number generators with moduli from 2^5 to 2^9. High correlation is shown to exist in mixed and multiplicative congruential random number generators, and in prime-modulus Lehmer generators, for shifts that are a fraction of their cycle length. The random noise sequences in question are required when simulating electrical noise, air turbulence, or time variation of wind parameters.
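The congruential recurrence and a circular autocorrelation estimate described above can be sketched in a few lines; the modulus and multiplier below are illustrative choices satisfying the full-period conditions, not the specific parameters studied in the paper.

```python
def lcg_sequence(n=2**9, modulus=2**9, a=5, c=1, seed=1):
    """Mixed congruential generator x_{k+1} = (a*x_k + c) mod m.
    With c odd and a-1 divisible by 4, the period equals m."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % modulus
        out.append(x)
    return out

def autocorrelation(seq, shift):
    """Normalized circular autocorrelation of the sequence at a shift."""
    n = len(seq)
    mean = sum(seq) / n
    var = sum((s - mean) ** 2 for s in seq) / n
    cov = sum((seq[i] - mean) * (seq[(i + shift) % n] - mean)
              for i in range(n)) / n
    return cov / var

cycle = lcg_sequence()   # one full period of 512 values
```

Scanning `autocorrelation(cycle, shift)` over shifts that are a fraction of the period exposes the kind of correlation peaks the paper analyzes.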
Fernandez, Isabel Diana; Becerra, Adan; Chin, Nancy P
2014-06-01
Worksites provide multiple advantages to prevent and treat obesity and to test environmental interventions to tackle its multiple causal factors. We present a literature review of group-randomized and non-randomized trials that tested worksite environmental, multiple-component interventions for obesity prevention and control, paying particular attention to the conduct of formative research prior to intervention development. The evidence on environmental interventions on measures of obesity appears to be strong, since most of the studies have a low (4/8) or unclear (2/8) risk of bias. Among the studies reviewed whose potential risk of bias was low, the magnitude of the effect was modest and sometimes in the unexpected direction. None of the four studies describing an explicit formative research stage with clear integration of findings into the intervention was able to demonstrate an effect on the main outcome of interest. We present alternative explanations for the findings and recommendations for future research.
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Hybrid colored noise process with space-dependent switching rates
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Lawley, Sean D.
2017-07-01
A fundamental issue in the theory of continuous stochastic processes is the interpretation of multiplicative white noise, which is often referred to as the Itô-Stratonovich dilemma. From a physical perspective, this reflects the need to introduce additional constraints in order to specify the nature of the noise, whereas from a mathematical perspective it reflects an ambiguity in the formulation of stochastic differential equations (SDEs). Recently, we have identified a mechanism for obtaining an Itô SDE based on a form of temporal disorder. Motivated by switching processes in molecular biology, we considered a Brownian particle that randomly switches between two distinct conformational states with different diffusivities. In each state, the particle undergoes normal diffusion (additive noise) so there is no ambiguity in the interpretation of the noise. However, if the switching rates depend on position, then in the fast switching limit one obtains Brownian motion with a space-dependent diffusivity of the Itô form. In this paper, we extend our theory to include colored additive noise. We show that the nature of the effective multiplicative noise process obtained by taking both the white-noise limit (κ → 0) and the fast switching limit (ɛ → 0) depends on the order in which the two limits are taken. If the white-noise limit is taken first, then we obtain Itô, and if the fast switching limit is taken first, then we obtain Stratonovich. Moreover, the form of the effective diffusion coefficient differs in the two cases. The latter result holds even in the case of space-independent transition rates, where one obtains additive noise processes with different diffusion coefficients. Finally, we show that yet another form of multiplicative noise is obtained in the simultaneous limit ɛ, κ → 0 with ɛ/κ² fixed.
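The Itô-Stratonovich distinction can be made concrete with two discretizations of the drift-free SDE dX = sqrt(2D(X)) dW. The diffusivity below is a hypothetical choice for illustration, not the switching model of the paper: the Euler-Maruyama step converges to the Itô solution, while the Heun (midpoint) step converges to the Stratonovich one, which equals Itô plus a spurious drift D'(x)/2.

```python
import numpy as np

def D(x):                 # hypothetical space-dependent diffusivity
    return 1.0 + x**2

def dD(x):                # its derivative dD/dx
    return 2.0 * x

def step_ito(x, dW):
    """Euler-Maruyama step: converges to the Ito interpretation."""
    return x + np.sqrt(2.0 * D(x)) * dW

def step_strat(x, dW):
    """Heun (midpoint) step: converges to the Stratonovich
    interpretation, i.e. Ito plus the drift correction D'(x)/2."""
    xp = x + np.sqrt(2.0 * D(x)) * dW
    return x + 0.5 * (np.sqrt(2.0 * D(x)) + np.sqrt(2.0 * D(xp))) * dW
```

Expanding `step_strat` for small dW shows the extra term (1/2) D'(x) dW², the discrete footprint of the Stratonovich drift correction.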
Summer School Effects in a Randomized Field Trial
ERIC Educational Resources Information Center
Zvoch, Keith; Stevens, Joseph J.
2013-01-01
This field-based randomized trial examined the effect of assignment to and participation in summer school for two moderately at-risk samples of struggling readers. Application of multiple regression models to difference scores capturing the change in summer reading fluency revealed that kindergarten students randomly assigned to summer school…
A Preliminary Investigation of a Randomized Dependent Group Contingency for Hallway Transitions
ERIC Educational Resources Information Center
Deshais, Meghan A.; Fisher, Alyssa B.; Kahng, SungWoo
2018-01-01
We conducted a preliminary investigation of a randomized dependent group contingency to decrease disruptive behavior during hallway transitions. Two first-graders, identified by their classroom teacher, participated in this study. A multiple baseline across transitions was used to evaluate the effects of the randomized dependent group contingency…
2014-01-01
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
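A minimal single-frame sketch of phase-based two-microphone filtering, in the spirit of (but not reproducing) the algorithm above; the threshold, frequencies, and delay are invented for the demo. Bins whose inter-microphone phase difference is small are kept as on-axis target energy, the rest are attenuated.

```python
import numpy as np

def phase_diff_filter(x1, x2, max_phase=0.5):
    """Keep FFT bins whose inter-microphone phase difference is below
    max_phase radians; on-axis sources arrive in phase at both mics."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    dphi = np.angle(X1 * np.conj(X2))
    mask = (np.abs(dphi) < max_phase).astype(float)
    return np.fft.irfft(mask * X1, n=len(x1))

# Bin-aligned demo: target in phase at both mics, interferer delayed.
N = 1024
n = np.arange(N)
target = np.sin(2 * np.pi * 64 * n / N)
interf = np.sin(2 * np.pi * 128 * n / N)
mic1 = target + interf
mic2 = target + np.roll(interf, 2)   # 2-sample delay -> pi/2 at bin 128
y = phase_diff_filter(mic1, mic2)
```

In the output spectrum the target bin survives while the delayed interferer bin is suppressed, which is the mechanism the directional response exploits.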
Humphreys, Ioan; Drummond, Avril E R; Phillips, Ceri; Lincoln, Nadina B
2013-11-01
To evaluate the cost effectiveness of a psychological adjustment group shown to be clinically effective in comparison with usual care for people with multiple sclerosis. Randomized controlled trial with comparison of costs and calculation of incremental cost effectiveness ratio. Community. People with multiple sclerosis were screened on the General Health Questionnaire 12 and Hospital Anxiety and Depression Scale, and those with low mood were recruited. Participants randomly allocated to the adjustment group received six group treatment sessions. The control group received usual care, which did not include psychological interventions. Outcomes were assessed four and eight months after randomization, blind to group allocation. The costs were assessed from a service use questionnaire and information provided on medication. Quality of life was assessed using the EQ-5D. Of the 311 patients identified, 221 (71%) met the criteria for having low mood. Of these, 72 were randomly allocated to receive treatment and 79 to usual care. Over eight months of follow-up there was a decrease in the combined average costs of £378 per intervention respondent and an increase in the costs of £297 per patient in the control group, which was a significant difference (p=0.03). The incremental cost-effectiveness ratio indicated that the cost per point reduction on the Beck Depression Inventory-II was £118. In the short term, the adjustment group programme was cost effective when compared with usual care, for people with multiple sclerosis presenting with low mood. The longer-term costs need to be assessed.
Multiple mechanisms of early plant community assembly with stochasticity driving the process.
Marteinsdóttir, Bryndís; Svavarsdóttir, Kristín; Thórhallsdóttir, Thóra Ellen
2018-01-01
Initial plant establishment is one of the most critical phases in ecosystem development, where an early suite of physical (environmental filtering), biological (seed limitation, species interactions) and stochastic factors may affect successional trajectories and rates. While functional traits are commonly used to study processes that influence plant community assembly in late successional communities, few studies have applied them to primary succession. The objective here was to determine the importance of these factors in shaping early plant community assembly on a glacial outwash plain, Skeiðarársandur, in SE Iceland using a trait based approach. We used data on vascular plant assemblages at two different spatial scales (community and neighborhood) sampled in 2005 and 2012, and compiled a dataset on seven functional traits linked to species dispersal abilities, establishment, and persistence for all species within these assemblages. Trait-based null model analyses were used to determine the processes that influenced plant community assembly from the regional species pool into local communities, and to determine if the importance of these processes in community assembly was dependent on local environment or changed with time. On the community scale, for most traits, random processes dominated the assembly from the regional species pool. However, in some communities, there was evidence of non-random assembly in relation to traits linked to species dispersal abilities, persistence, and establishment. On the neighborhood scale, assembly was mostly random. The relative importance of different processes varied spatially and temporally and the variation was linked to local soil conditions. While stochasticity dominated assembly patterns of our early successional communities, there was evidence of both seed limitation and environmental filtering. 
Our results indicated that as soil conditions improved, environmental constraints on assembly became weaker and the assembly became more dependent on species availability.
Multi-modal automatic montaging of adaptive optics retinal images
Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.
2016-01-01
We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714
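The outlier-removal step named above, random sample consensus (RANSAC), can be illustrated with a toy line-fitting sketch in place of SIFT feature matches; the data, tolerance, and iteration count are made up for the demo.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly fit to a random minimal
    pair of points and keep the model with the largest inlier set."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # skip degenerate vertical pairs
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a*x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

In the montaging context the minimal sample is a set of feature correspondences and the model a transform between images, but the consensus logic is the same.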
Radiance and polarization of multiple scattered light from haze and clouds.
Kattawar, G W; Plass, G N
1968-08-01
The radiance and polarization of multiple scattered light are calculated from the Stokes' vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from the Mie theory. The Stokes' vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well-known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
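The step of choosing scattering angles "by a random process that correctly simulates the actual distribution" is often illustrated with the Henyey-Greenstein phase function, whose CDF inverts in closed form; this is a common stand-in for a full Mie scattering matrix, not the one used in the paper.

```python
import numpy as np

def sample_hg_costheta(g, n, seed=0):
    """Draw n scattering-angle cosines from the Henyey-Greenstein
    phase function with asymmetry parameter g, by CDF inversion."""
    rng = np.random.default_rng(seed)
    xi = rng.random(n)
    if g == 0.0:
        return 2.0 * xi - 1.0          # isotropic limit
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - t * t) / (2.0 * g)
```

The mean of the sampled cosines equals g, which is the standard sanity check for this sampler.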
Efficient two-dimensional compressive sensing in MIMO radar
NASA Astrophysics Data System (ADS)
Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad
2017-12-01
Compressive sensing (CS) offers a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing a two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using a gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
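The objective being minimized, the mutual coherence of the sensing matrix, is simple to compute; a sketch of the metric only (the 2D gradient-descent design itself is not reproduced here):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized
    columns of A; lower coherence favors sparse recovery."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return float(G.max())
```

A design loop would descend this quantity (or a smooth surrogate of it) with respect to the entries of the measurement matrix.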
Contextual Interference in Complex Bimanual Skill Learning Leads to Better Skill Persistence
Pauwels, Lisa; Swinnen, Stephan P.; Beets, Iseult A. M.
2014-01-01
The contextual interference (CI) effect is a robust phenomenon in the (motor) skill learning literature. However, CI has yielded mixed results in complex task learning. The current study addressed whether the CI effect is generalizable to bimanual skill learning, with a focus on the temporal evolution of memory processes. In contrast to previous studies, an extensive training schedule, distributed across multiple days of practice, was provided. Participants practiced three frequency ratios across three practice days following either a blocked or random practice schedule. During the acquisition phase, better overall performance for the blocked practice group was observed, but this difference diminished as practice progressed. At immediate and delayed retention, the random practice group outperformed the blocked practice group, except for the most difficult frequency ratio. Our main finding is that the random practice group showed superior performance persistence over a one-week time interval in all three frequency ratios compared to the blocked practice group. This study contributes to our understanding of learning, consolidation and memory of complex motor skills, which helps optimize training protocols in future studies and rehabilitation settings. PMID:24960171
Liu, Zhengqi; Liu, Long; Lu, Haiyang; Zhan, Peng; Du, Wei; Wan, Mingjie; Wang, Zhenlin
2017-01-01
Recently, techniques involving random patterns have made it possible to control the light trapping of microstructures over broad spectral and angular ranges, which provides a powerful approach for photon management in energy efficiency technologies. Here, we demonstrate a simple method to create a wideband near-unity light absorber by introducing a dense and random pattern of metal-capped monodispersed dielectric microspheres onto an opaque metal film; the absorber works due to the excitation of multiple optical and plasmonic resonant modes. To further expand the absorption bandwidth, two different-sized metal-capped dielectric microspheres were integrated into a densely packed monolayer on a metal back-reflector. This proposed ultra-broadband plasmonic-photonic super absorber demonstrates desirable optical trapping in the dielectric region and slight dispersion over a large incident angle range. Without any effort to strictly control the spatial arrangement of the resonant elements, our absorber, which is based on a simple self-assembly process, has the critical merits of high reproducibility and scalability and represents a viable strategy for efficient energy technologies. PMID:28256599
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
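The ZMNL idea, passing Gaussian samples through a memoryless monotone function so the output matches a target marginal, can be sketched for an exponential target. The exponential law here is a hypothetical example; the paper treats general pdfs together with the cross-spectral matching that goes with them.

```python
import math, random

def zmnl_exponential(gaussian_samples, lam=1.0):
    """Zero-memory nonlinearity: map standard-normal samples through
    Phi (the normal CDF), then through the target inverse CDF
    (here the exponential with rate lam)."""
    out = []
    for x in gaussian_samples:
        u = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # Phi(x)
        u = min(u, 1.0 - 1e-15)                         # guard log(0)
        out.append(-math.log(1.0 - u) / lam)
    return out
```

Because the mapping is monotone and memoryless, the correlation structure of the input is largely preserved while the marginal distribution is replaced.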
Super-resolution processing for multi-functional LPI waveforms
NASA Astrophysics Data System (ADS)
Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao
2014-05-01
Super-resolution (SR) is a radar processing technique closely related to pulse compression (the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 code or the Barker code) or frequency modulated (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms which are either random in nature (such as random noise waveforms) or randomly modulated for multiple-function operations (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. SR also has the great advantage that the final output (as an estimation of ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.
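The correlation receiver underlying pulse compression can be sketched for a random binary (noise-like, LPI-style) waveform; the waveform length and target delay below are invented for the demo.

```python
import numpy as np

def pulse_compress(rx, tx):
    """Correlation receiver: cross-correlate the received signal with
    a replica of the transmitted waveform and keep non-negative lags;
    the peak location estimates the target delay."""
    return np.correlate(rx, tx, mode='full')[len(tx) - 1:]

rng = np.random.default_rng(1)
tx = rng.choice([-1.0, 1.0], size=64)   # random +/-1 noise waveform
rx = np.zeros(200)
rx[30:30 + 64] = tx                     # echo delayed by 30 samples
out = pulse_compress(rx, tx)
```

The peak-to-sidelobe behavior of this output is exactly what SR algorithms aim to improve beyond the plain correlation receiver.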
Genealogical Properties of Subsamples in Highly Fecund Populations
NASA Astrophysics Data System (ADS)
Eldon, Bjarki; Freund, Fabian
2018-03-01
We consider some genealogical properties of nested samples. The complete sample is assumed to have been drawn from a natural population characterised by high fecundity and sweepstakes reproduction (abbreviated HFSR). The random gene genealogies of the samples are—due to our assumption of HFSR—modelled by coalescent processes which admit multiple mergers of ancestral lineages looking back in time. Among the genealogical properties we consider are the probability that the most recent common ancestor is shared between the complete sample and the subsample nested within the complete sample; we also compare the lengths of `internal' branches of nested genealogies between different coalescent processes. The results indicate how `informative' a subsample is about the properties of the larger complete sample, how much information is gained by increasing the sample size, and how the `informativeness' of the subsample varies between different coalescent processes.
Threshold-based epidemic dynamics in systems with memory
NASA Astrophysics Data System (ADS)
Bodych, Marcin; Ganguly, Niloy; Krueger, Tyll; Mukherjee, Animesh; Siegmund-Schultze, Rainer; Sikdar, Sandipan
2016-11-01
In this article we analyze an epidemic dynamics model (SI) in which we assume that there are k susceptible states, that is, a node requires multiple (k) contacts before it becomes infected. Specifically, we provide a theoretical framework for studying the diffusion rate in complete graphs and d-regular trees, with extensions to dense random graphs. We observe that irrespective of the topology, the diffusion process can be divided into two distinct phases: i) the initial phase, where the diffusion process is slow, followed by ii) the residual phase, where the diffusion rate increases manifold. In fact, the initial phase acts as an indicator for the total diffusion time in dense graphs. The most remarkable lesson from this investigation is that such a diffusion process could be controlled and even contained if acted upon within its initial phase.
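A deterministic toy version on a complete graph shows the two phases described above (the parameters are invented): while exposure counters build toward the threshold k nothing visible happens, then the residual phase completes almost at once.

```python
def threshold_si(n=50, k=3, i0=1):
    """Synchronous threshold-SI on a complete graph: each susceptible
    accumulates one exposure per infected node per step and becomes
    infected once it has received k exposures. Returns the infected
    count over time."""
    infected = i0
    exposures = [0] * (n - i0)        # counters; -1 marks infected
    history = [infected]
    while infected < n:
        new = 0
        for idx in range(len(exposures)):
            if exposures[idx] >= 0:            # still susceptible
                exposures[idx] += infected
                if exposures[idx] >= k:
                    exposures[idx] = -1
                    new += 1
        infected += new
        history.append(infected)
    return history
```

With a single seed and k = 3, nothing changes for two steps and then the whole graph flips, a caricature of the slow initial phase followed by the fast residual phase.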
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-05-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
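A Poisson process with harmonic intensity c/x on an interval [a, b] can be sampled directly, since the map x → log x makes it homogeneous with rate c (so the expected count is c·log(b/a)). A small sketch, with illustrative parameters:

```python
import math, random

def poisson_draw(mean, rng):
    """Knuth's method for a Poisson variate (fine for modest means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def harmonic_poisson(c, a, b, rng):
    """Points of a Poisson process on [a, b] with intensity c/x:
    the count is Poisson with mean c*log(b/a), and each point is
    a*(b/a)**U for U uniform on [0, 1]."""
    n = poisson_draw(c * math.log(b / a), rng)
    return sorted(a * (b / a) ** rng.random() for _ in range(n))
```

The log-uniform placement of points is the scale-invariance property the paper connects to Benford's law and 1/f noise.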
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
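The sign ambiguity and its resolution can be seen in a few lines; this is a schematic numerical illustration, not an NMR processing pipeline. With one fixed detector phase, +f and -f produce identical samples; with a random phase per sample point they do not.

```python
import numpy as np

def sampled_signal(f, times, phases):
    """One real sample per time point, taken with detector phase
    phi_k at time t_k: s_k = cos(2*pi*f*t_k + phi_k)."""
    return np.cos(2.0 * np.pi * f * times + phases)

rng = np.random.default_rng(0)
t = np.arange(32) * 0.01
fixed = np.zeros_like(t)                      # single detector phase
randp = rng.uniform(0.0, 2.0 * np.pi, t.size)  # random phase per point

# cos(2*pi*f*t) == cos(-2*pi*f*t): fixed-phase data cannot tell the
# sign of f, but random per-point phases break the symmetry.
```

A reconstruction algorithm fitting the randomly phased samples can therefore recover the signed frequency from a single channel, at the cost of sampling artifacts.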
Rahbar, Mohammad H.; Wyatt, Gwen; Sikorskii, Alla; Victorson, David; Ardjomand-Hessabi, Manouchehr
2011-01-01
Background Multisite randomized clinical trials allow for increased research collaboration among investigators and expedite data collection efforts. As a result, government funding agencies typically look favorably upon this approach. As the field of complementary and alternative medicine (CAM) continues to evolve, so do increased calls for the use of more rigorous study design and trial methodologies, which can present challenges for investigators. Purpose To describe the processes involved in the coordination and management of a multisite randomized clinical trial of a CAM intervention. Methods Key aspects related to the coordination and management of a multisite CAM randomized clinical trial are presented, including organizational and site selection considerations, recruitment concerns and issues related to data collection and randomization to treatment groups. Management and monitoring of data, as well as quality assurance procedures are described. Finally, a real world perspective is shared from a recently conducted multisite randomized clinical trial of reflexology for women diagnosed with advanced breast cancer. Results The use of multiple sites in the conduct of CAM-based randomized clinical trials can provide an efficient, collaborative and robust approach to study coordination and data collection that maximizes efficiency and ensures the quality of results. Conclusions Multisite randomized clinical trial designs can offer the field of CAM research a more standardized and efficient approach to examine the effectiveness of novel therapies and treatments. Special attention must be given to intervention fidelity, consistent data collection and ensuring data quality. Assessment and reporting of quantitative indicators of data quality should be required. PMID:21664296
Random attractor of non-autonomous stochastic Boussinesq lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of noise approaches zero.
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
Edwards, Jerri D; Ruva, Christine L; O'Brien, Jennifer L; Haley, Christine B; Lister, Jennifer J
2013-06-01
The purpose of these analyses was to examine mediators of the transfer of cognitive speed of processing training to improved everyday functional performance (J. D. Edwards, V. G. Wadley, D. E. Vance, D. L. Roenker, & K. K. Ball, 2005, The impact of speed of processing training on cognitive and everyday performance. Aging & Mental Health, 9, 262-271). Cognitive speed of processing and visual attention (as measured by the Useful Field of View Test; UFOV) were examined as mediators of training transfer. Secondary data analyses were conducted from the Staying Keen in Later Life (SKILL) study, a randomized cohort study including 126 community-dwelling adults 63 to 87 years of age. In the SKILL study, participants were randomized to an active control group or cognitive speed of processing training (SOPT), a nonverbal, computerized intervention involving perceptual practice of visual tasks. Prior analyses found significant effects of training as measured by the UFOV and Timed Instrumental Activities of Daily Living (TIADL) Tests. Results from the present analyses indicate that speed of processing for a divided attention task significantly mediated the effect of SOPT on everyday performance (e.g., TIADL) in a multiple mediation model accounting for 91% of the variance. These findings suggest that everyday functional improvements found from SOPT are directly attributable to improved UFOV performance, speed of processing for divided attention in particular. Targeting divided attention in cognitive interventions may be important to positively affect everyday functioning among older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Using histograms to introduce randomization in the generation of ensembles of decision trees
Kamath, Chandrika; Cantu-Paz, Erick; Littau, David
2005-02-22
A system for decision tree ensembles that includes a module to read the data, a module to create a histogram, a module to evaluate a potential split according to some criterion using the histogram, a module to select a split point randomly in an interval around the best split, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method includes the steps of reading the data; creating a histogram; evaluating a potential split according to some criterion using the histogram; selecting a split point randomly in an interval around the best split; splitting the data; and combining multiple decision trees in ensembles.
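A minimal sketch of the scheme the claims describe, with the Gini index standing in for the unspecified split criterion; all names, data, and parameter values here are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated classes along one feature (synthetic data).
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])
y = np.array([0] * 200 + [1] * 200)

# Histogram the feature per class on a shared grid.
counts0, edges = np.histogram(x[y == 0], bins=20, range=(x.min(), x.max()))
counts1, _     = np.histogram(x[y == 1], bins=20, range=(x.min(), x.max()))

def gini(n0, n1):
    n = n0 + n1
    return 0.0 if n == 0 else 1.0 - (n0 / n) ** 2 - (n1 / n) ** 2

# Evaluate candidate splits at interior bin edges using only the histogram counts.
best_score, best_i = np.inf, None
for i in range(1, len(edges) - 1):
    l0, l1 = counts0[:i].sum(), counts1[:i].sum()
    r0, r1 = counts0[i:].sum(), counts1[i:].sum()
    n = l0 + l1 + r0 + r1
    score = (l0 + l1) / n * gini(l0, l1) + (r0 + r1) / n * gini(r0, r1)
    if score < best_score:
        best_score, best_i = score, i

# Randomize: draw the actual split uniformly in the interval around the best edge.
split = rng.uniform(edges[best_i - 1], edges[best_i + 1])
print(best_score, split)
```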
Active illuminated space object imaging and tracking simulation
NASA Astrophysics Data System (ADS)
Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu
2016-10-01
Optical ground-based imaging simulation of a space target in orbit, and its extraction under laser illumination conditions, are discussed. Based on the orbit and corresponding attitude of a satellite, a 3D rendering of its image was built. A general simulation platform was developed that adapts to variable 3D satellite models and to the relative geometry between the satellite and the ground detector system. A unified parallel projection technique is proposed in this paper. Furthermore, we note that the random optical intensity distribution under laser illumination is a challenge for object discrimination, the strong randomness of active laser speckle being the primary factor. The combined effects of multi-frame accumulation and several tracking methods, such as Meanshift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results shows that the combination of multi-frame accumulation and contour poid is recommended for laser-illuminated images, offering high tracking precision and stability across multiple object attitudes.
Optimal switching between geocentric and egocentric strategies in navigation
Mahadevan, L.
2016-01-01
Animals use a combination of egocentric navigation driven by the internal integration of environmental cues, interspersed with geocentric course correction and reorientation. These processes are accompanied by uncertainty in sensory acquisition of information, planning and execution. Inspired by observations of dung beetle navigational strategies that show switching between geocentric and egocentric strategies, we consider the question of optimal reorientation rates for the navigation of an agent moving along a preferred direction in the presence of multiple sources of noise. We address this using a model that takes the form of a correlated random walk at short time scales that is punctuated by reorientation events, leading to a biased random walk at long time scales. This allows us to identify optimal alternation schemes and characterize their robustness in the context of noisy sensory acquisition as well as performance errors linked with variations in environmental conditions and agent–environment interactions. PMID:27493769
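The model described above can be sketched as a small simulation (parameter values are mine, not from the paper): the heading diffuses between reorientation events (correlated random walk) and is reset toward the goal direction, with sensory noise, at each event.

```python
import numpy as np

rng = np.random.default_rng(2)

def drift_along_goal(n_steps, reorient_every, heading_noise=0.2, sense_noise=0.1):
    """Mean progress per unit step toward the preferred direction theta = 0."""
    theta, pos_x = 0.0, 0.0
    for k in range(n_steps):
        if k % reorient_every == 0:
            theta = rng.normal(0.0, sense_noise)   # geocentric reset with a noisy compass
        theta += rng.normal(0.0, heading_noise)    # egocentric heading diffusion
        pos_x += np.cos(theta)                     # unit step along current heading
    return pos_x / n_steps

rare = drift_along_goal(20000, reorient_every=200)
often = drift_along_goal(20000, reorient_every=10)
print(rare, often)   # frequent reorientation keeps the walk biased toward the goal
```

The unmodeled cost of reorienting (time spent stopped) is what makes the optimal rate in the paper nontrivial; this sketch only exhibits the benefit side of the trade-off.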
Force Limited Random Vibration Test of TESS Camera Mass Model
NASA Technical Reports Server (NTRS)
Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.
2015-01-01
The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), European Space Research and Technology Center (ESTEC), and Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
Long-run growth rate in a random multiplicative model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirjol, Dan
2014-08-01
We consider the long-run growth rate of the average value of a random multiplicative process x_{i+1} = a_i x_i, where the multipliers a_i = 1 + ρ exp(σW_i − σ²t_i/2) have Markovian dependence given by the exponential of a standard Brownian motion W_i. The average value ⟨x_n⟩ is given by the grand partition function of a one-dimensional lattice gas with two-body linear attractive interactions placed in a uniform field. We study the Lyapunov exponent λ = lim_{n→∞} (1/n) log⟨x_n⟩, at fixed β = σ²t_n n/2, and show that it is given by the equation of state of the lattice gas in thermodynamical equilibrium. The Lyapunov exponent has discontinuous partial derivatives along a curve in the (ρ, β) plane ending at a critical point (ρ_C, β_C), which is related to a phase transition in the equivalent lattice gas. Using the equivalence of the lattice gas with a bosonic system, we obtain the exact solution for the equation of state in the thermodynamic limit n → ∞.
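A Monte Carlo sketch of the growth rate just defined, estimating λ = (1/n) log⟨x_n⟩ at finite n by averaging over many Brownian paths; the grid and parameter values are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, rho, sigma, paths = 200, 0.01, 0.05, 1.0, 5000

# Brownian increments and the path values W_i at the START of step i (W_0 = 0).
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1) - dW
t = np.arange(n) * dt

# Multipliers a_i = 1 + rho * exp(sigma*W_i - sigma^2 t_i / 2); note each a_i > 1.
a = 1.0 + rho * np.exp(sigma * W - 0.5 * sigma**2 * t)
log_xn = np.sum(np.log(a), axis=1)        # log x_n per path, with x_0 = 1

# <x_n> via a numerically stable log-mean-exp, then the finite-n growth rate.
m = log_xn.max()
lam = (m + np.log(np.mean(np.exp(log_xn - m)))) / n
print(lam)                                # positive, since every multiplier exceeds 1
```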
Murphy, Kellie E; Hannah, Mary E; Willan, Andrew R; Ohlsson, Arne; Kelly, Edmond N; Matthews, Stephen G; Saigal, Saroj; Asztalos, Elizabeth; Ross, Sue; Delisle, Marie-France; Tomat, Laura; Amankwah, Kofi; Guselle, Patricia; Gafni, Amiram; Lee, Shoo K; Armson, B Anthony
2011-09-01
A single course of antenatal corticosteroids (ACS) is associated with a reduction in respiratory distress syndrome and neonatal death. Multiple Courses of Antenatal Corticosteroids Study (MACS), a study involving 1858 women, was a multicentre randomized placebo-controlled trial of multiple courses of ACS, given every 14 days until 33+6 weeks or birth, whichever came first. The primary outcome of the study, a composite of neonatal mortality and morbidity, was similar for the multiple ACS and placebo groups (12.9% vs. 12.5%), but infants exposed to multiple courses of ACS weighed less, were shorter, and had smaller head circumferences. Thus for women who remain at increased risk of preterm birth, multiple courses of ACS (every 14 days) are not recommended. Chronic use of corticosteroids is associated with numerous side effects including weight gain and depression. The aim of this postpartum assessment was to ascertain if multiple courses of ACS were associated with maternal side effects. Three months postpartum, women who participated in MACS were asked to complete a structured questionnaire that asked about maternal side effects of corticosteroid use during MACS and included the Edinburgh Postnatal Depression Scale. Women were also asked to evaluate their study participation. Of the 1858 women randomized, 1712 (92.1%) completed the postpartum questionnaire. There were no significant differences in the risk of maternal side effects between the two groups. Large numbers of women met the criteria for postpartum depression (14.1% in the ACS vs. 16.0% in the placebo group). Most women (94.1%) responded that they would participate in the trial again. In pregnancy, corticosteroids are given to women for fetal lung maturation and for the treatment of various maternal diseases. 
In this international multicentre randomized controlled trial, multiple courses of ACS (every 14 days) were not associated with maternal side effects, and the majority of women responded that they would participate in such a study again.
A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications
NASA Astrophysics Data System (ADS)
Robin, J.; Tanter, M.; Pernot, M.
2017-09-01
Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device’s gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity from multiple scattering in the scattering medium. It was validated numerically and experimentally using a 2D-TRC, and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were then confirmed by numerical simulations.
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder exploits the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained with the single-frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
Frevel, D; Mäurer, M
2015-02-01
Balance disorders are common in multiple sclerosis. The aim of the study was to investigate the effectiveness of an Internet-based home training program (e-Training) to improve balance in patients with multiple sclerosis. A randomized, controlled study. Academic teaching hospital in cooperation with the therapeutic riding center Gut Üttingshof, Bad Mergentheim. Eighteen multiple sclerosis patients (mean EDSS 3.5) took part in the trial. The outcome of patients using e-Training (N.=9) was compared to that of patients receiving hippotherapy (N.=9), which can be considered an advanced concept for improving balance and postural control in multiple sclerosis. After simple random allocation, patients received hippotherapy or Internet-based home training (balance, postural control and strength training) twice a week for 12 weeks. Assessments were done before and after the intervention and included static and dynamic balance (primary outcome). Isometric muscle strength of knee and trunk extension/flexion (dynamometer), walking capacity, fatigue and quality of life served as secondary outcome parameters. Both intervention groups showed comparable and highly significant improvements in static and dynamic balance capacity; no difference was seen between the two intervention groups. However, for fatigue and quality of life, only the group receiving hippotherapy improved significantly. Since e-Training showed effects comparable to hippotherapy in improving balance, we believe that the established Internet-based home training program, specialized in balance and postural control training, is feasible for balance and strength training in persons with multiple sclerosis. We demonstrated that Internet-based home training is possible in patients with multiple sclerosis.
Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling
ERIC Educational Resources Information Center
Lee, Taehun; Cai, Li
2012-01-01
Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…
Multiple Group Counseling with Discharged Schizophrenic Adolescents and their Parents.
ERIC Educational Resources Information Center
Lurie, Abraham; Harold, Ron
Discharged adolescent schizophrenics (17) and their families participated in a pilot program of multiple group counseling, planned to help ex-patients reintegrate into the community. Patients were selected prior to discharge and randomly divided into three multiple-family groups. Each participating family had had a severe breakdown in the…
Multiple Intelligences in Virtual and Traditional Skill Instructional Learning Environments
ERIC Educational Resources Information Center
McKethan, Robert; Rabinowitz, Erik; Kernodle, Michael W.
2010-01-01
The purpose of this investigation was to examine (a) how Multiple Intelligence (MI) strengths correlate to learning in virtual and traditional environments and (b) the effectiveness of learning with and without an authority figure in attendance. Participants (N=69) were randomly assigned to four groups, administered the Multiple Intelligences…
Lee, Minjung; Dignam, James J.; Han, Junhee
2014-01-01
We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
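A heavily simplified sketch of the imputation idea described above (no censoring, a marginal rather than time-dependent imputation model, and my own toy data; not the authors' estimator): impute each missing cause from the observed-cause proportions, compute the cause-1 cumulative incidence on each completed data set, and pool across imputations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
times = rng.exponential(1.0, n)                       # failure times, no censoring
cause = rng.choice([1, 2], size=n, p=[0.7, 0.3])      # true cause of failure
observed = cause.astype(float)
observed[rng.random(n) < 0.3] = np.nan                # ~30% of causes missing at random

obs = ~np.isnan(observed)
p1 = np.mean(observed[obs] == 1.0)                    # P(cause = 1) among observed causes

t0, m = 1.0, 10                                       # evaluate F_1(t0) with m imputations
ests = []
for _ in range(m):
    imp = observed.copy()
    k = np.isnan(imp)
    imp[k] = rng.choice([1.0, 2.0], size=k.sum(), p=[p1, 1 - p1])
    ests.append(np.mean((times <= t0) & (imp == 1.0)))  # empirical cause-1 cumulative incidence

ests = np.array(ests)
pooled = ests.mean()                                  # Rubin's pooled point estimate
between = ests.var(ddof=1)                            # between-imputation variance component
print(pooled, between)
```

In Rubin's rules the total variance adds the average within-imputation variance to (1 + 1/m) times the between-imputation component; only the latter is shown here.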
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
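As background on the Gauss-Hermite method the comparison covers, a minimal hand-rolled cluster likelihood for a random-intercept logistic model (my own sketch, not any package's implementation): the Bernoulli likelihood is integrated over a N(0, sigma^2) random intercept by a quadrature sum.

```python
import numpy as np

# Nodes and weights for the weight function exp(-x^2).
nodes, weights = np.polynomial.hermite.hermgauss(15)

def cluster_loglik(y, eta, sigma):
    """log of  integral prod_j p(y_j | eta_j + b) N(b; 0, sigma^2) db  via Gauss-Hermite."""
    b = np.sqrt(2.0) * sigma * nodes               # change of variables to the N(0, sigma^2) scale
    lin = eta[None, :] + b[:, None]                # linear predictor at each node
    p = 1.0 / (1.0 + np.exp(-lin))
    lik = np.prod(np.where(y[None, :] == 1, p, 1 - p), axis=1)
    return np.log(np.sum(weights * lik) / np.sqrt(np.pi))

# Sanity check: as sigma -> 0 the integral collapses to the ordinary logistic likelihood.
y = np.array([1, 0, 1])
eta = np.array([0.2, -0.5, 1.0])
ll0 = cluster_loglik(y, eta, sigma=1e-8)
p = 1 / (1 + np.exp(-eta))
exact = np.sum(np.log(np.where(y == 1, p, 1 - p)))
print(ll0, exact)   # the two agree when the random effect vanishes
```

With multiple correlated random effects the quadrature grid grows exponentially in the number of effects, which is exactly the computational burden the abstract refers to.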
Bayesian networks and information theory for audio-visual perception modeling.
Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis
2010-09-01
Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
Auyeung, S Freda; Long, Qi; Royster, Erica Bruce; Murthy, Smitha; McNutt, Marcia D; Lawson, David; Miller, Andrew; Manatunga, Amita; Musselman, Dominique L
2009-10-01
Interferon-alpha therapy, which is used to treat metastatic malignant melanoma, can cause patients to develop two distinct neurobehavioral symptom complexes: a mood syndrome and a neurovegetative syndrome. Interferon-alpha effects on serotonin metabolism appear to contribute to the mood and anxiety syndrome, while the neurovegetative syndrome appears to be related to interferon-alpha effects on dopamine. Our goal is to propose a sequential, multiple assignment, randomized trial design for patients with malignant melanoma to test the relative efficacy of drugs that target serotonin versus dopamine metabolism during 4 weeks of intravenous, then 8 weeks of subcutaneous, interferon-alpha therapy. Patients will be offered participation in a double-blinded, randomized, controlled, 14-week trial involving two treatment phases. During the first month of intravenous interferon-alpha therapy, we will test the hypotheses that escitalopram will be more effective in reducing depressed mood, anxiety, and irritability, whereas methylphenidate will be more effective in diminishing interferon-alpha-induced neurovegetative symptoms, such as fatigue and psychomotor slowing. During the next 8 weeks of subcutaneous interferon therapy, participants whose symptoms do not improve significantly will be randomized to the alternate agent alone versus escitalopram and methylphenidate together. We present a prototype for a single-center, sequential, multiple assignment, randomized trial, which seeks to determine the efficacy of sequenced and targeted treatment for the two distinct symptom complexes suffered by patients treated with interferon-alpha. Because we cannot completely control for external factors, a relevant question is whether or not 'short-term' neuropsychiatric interventions can increase the number of interferon-alpha doses tolerated and improve long-term survival.
This sequential, multiple assignment, randomized trial proposes a framework for developing optimal treatment strategies; however, additional studies are needed to determine the best strategy for treating or preventing neurobehavioral symptoms induced by the immunotherapy interferon-alpha.
Three-Drug Combination for Relapsed Multiple Myeloma
A summary of interim results from an international, randomized phase III trial suggesting that adding carfilzomib (Kyprolis®) to a standard treatment improves outcomes for patients with multiple myeloma whose cancer has relapsed.
Rogers, Geoffrey
2018-06-01
The Yule-Nielsen effect is an influence on halftone color caused by the diffusion of light within the paper upon which the halftone ink is printed. The diffusion can be characterized by a point spread function. In this paper, a point spread function for paper is derived using the multiple-path model of reflection. This model treats the interaction of light with turbid media as a random walk. Using the multiple-path point spread function, a general expression is derived for the average reflectance of light from a frequency-modulated halftone, in which dot size is constant and the number of dots is varied, with the arrangement of dots random. It is also shown that the line spread function derived from the multiple-path model has the form of a Lorentzian function.
Application of lifting wavelet and random forest in compound fault diagnosis of gearbox
NASA Astrophysics Data System (ADS)
Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi
2018-03-01
The compound-fault characteristic signals of an armored vehicle gearbox are weak and the fault types are difficult to identify; to address this, a fault diagnosis method based on the lifting wavelet and random forest is proposed. First, the method uses the lifting wavelet transform to decompose the original vibration signal over multiple levels and reconstructs the resulting low-frequency and high-frequency components to obtain multiple component signals. Time-domain feature parameters are then computed for each component signal to form multiple feature vectors, which are input into a random forest pattern recognition classifier to determine the compound fault type. Finally, the method is verified on a variety of compound fault data from a gearbox fault simulation test platform; the results show that the recognition accuracy of the fault diagnosis method combining the lifting wavelet and the random forest reaches 99.99%.
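An illustrative fragment of such a pipeline (my own simplification: a single level of Haar lifting rather than a multi-level transform, three common time-domain features, and the random forest training step omitted):

```python
import numpy as np

def haar_lifting(x):
    """One level of the Haar lifting scheme: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict step: detail (high-frequency) component
    s = even + 0.5 * d        # update step: approximation (low-frequency) component
    return s, d

def features(c):
    """RMS, kurtosis, and crest factor of one component signal."""
    rms = np.sqrt(np.mean(c ** 2))
    kurt = np.mean((c - c.mean()) ** 4) / np.var(c) ** 2
    return [rms, kurt, np.abs(c).max() / rms]

# Synthetic "vibration" signal: a tone plus noise (stand-in for gearbox data).
rng = np.random.default_rng(5)
t = np.arange(1024) / 1024
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)

s, d = haar_lifting(signal)
vec = features(s) + features(d)     # 6-dimensional feature vector for one record
print(len(vec))
```

Feature vectors of this form, one per labeled record, would then be fed to a random forest classifier for the final fault-type decision.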
Effects of retinal eccentricity and acuity on global motion processing
Bower, Jeffrey D.; Bian, Zheng; Andersen, George J.
2012-01-01
The present study assessed direction discrimination of moving random dot cinematograms (RDCs) at retinal eccentricities of 0, 8, 22 and 40 deg. In addition, Landolt C acuity was assessed at these eccentricities to determine whether changes in motion discrimination performance covaried with acuity in the retinal periphery. The results of the experiment indicated that discrimination thresholds increased with retinal eccentricity and directional variance (noise) independent of acuity. Psychophysical modeling indicated that the results of eccentricity and noise could be explained by an increase in channel bandwidth and an increase in internal multiplicative noise. PMID:22382583
Time interval between successive trading in foreign currency market: from microscopic to macroscopic
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-12-01
Recently, it has been shown that the inter-transaction interval (ITI) distribution of foreign currency rates has a fat tail. In order to understand the statistical properties of the ITI, a dealer model with N interacting agents is proposed. Numerical simulations confirm that the ITI distribution of the dealer model has a power-law tail. The random multiplicative process (RMP) can be approximately derived from the ITI of the dealer model. Consequently, we conclude that the power-law tail of the ITI distribution of the dealer model is a result of the RMP.
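A generic illustration of the mechanism invoked above, not the dealer model itself: a random multiplicative process with a small additive floor, x_{t+1} = a_t x_t + b, develops a heavy, power-law-like tail when E[log a] < 0 < E[a]. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.ones(5000)                                        # 5000 independent realizations
for _ in range(2000):
    a = rng.lognormal(mean=-0.1, sigma=0.5, size=5000)   # E[log a] = -0.1 < 0, E[a] > 1
    x = a * x + 0.01                                     # additive floor keeps x away from 0

# Tail heaviness: extreme values dwarf the typical (median) scale by orders
# of magnitude, unlike a light-tailed stationary process.
ratio = x.max() / np.median(x)
print(ratio > 10)
```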
Identification of the structure parameters using short-time non-stationary stochastic excitation
NASA Astrophysics Data System (ADS)
Jarczewska, Kamila; Koszela, Piotr; Śniady, PaweŁ; Korzec, Aleksandra
2011-07-01
In this paper, we propose an approach to identifying the flexural stiffness or eigenfrequencies of a linear structure using a non-stationary stochastic excitation process. The idea of the proposed approach lies within the class of time-domain input–output methods. The proposed method is based on transforming the dynamical problem into a static one by integrating the input and output signals. The output signal is the structural response, i.e., the displacements due to a short-time, irregular load of random type. Systems with single and multiple degrees of freedom, as well as continuous systems, are considered.
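The double-integration trick can be demonstrated on a toy single-degree-of-freedom system (my own setup: undamped, mass assumed known): integrating m·x″ + k·x = f twice from rest gives the static relation m·x(t) + k·I²[x](t) = I²[f](t), from which the stiffness k follows by least squares.

```python
import numpy as np

m_true, k_true, dt, n = 2.0, 50.0, 0.001, 5000
t = np.arange(n) * dt
f = np.where(t < 0.2, 100.0, 0.0)           # short pulse load (stand-in for irregular excitation)

# Simulate the "measured" displacement response from rest (semi-implicit Euler).
x = np.zeros(n)
v = 0.0
for i in range(1, n):
    acc = (f[i - 1] - k_true * x[i - 1]) / m_true
    v += acc * dt
    x[i] = x[i - 1] + v * dt

def I2(sig):
    """Double time integral with zero initial conditions (rectangle rule)."""
    return np.cumsum(np.cumsum(sig)) * dt * dt

X2, F2 = I2(x), I2(f)
# Static relation k * I2[x] = I2[f] - m * x, solved for k by least squares.
k_est = np.sum(X2 * (F2 - m_true * x)) / np.sum(X2 * X2)
print(k_est)   # close to the true stiffness k_true = 50
```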
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
Small, Latoya; Jackson, Jerrold; Gopalan, Geetha; McKay, Mary McKernan
2014-01-01
Youth living in poverty face compounding familial and environmental challenges in utilizing effective community mental health services. They have ongoing stressors that increase their dropout rate in mental health service use. Difficulties also exist in staying engaged in services when they are involved with the child welfare system. This study examines the 4Rs 2Ss Family Strengthening Program, developed across four broad conceptual categories related to parenting skills and family processes that form a multiple family group service delivery approach. A total of 321 families were enrolled in this randomized intervention study, assigned to either the 4Rs 2Ss Family Strengthening Program or standard care services. Caregivers and their children randomly assigned to the experimental condition received a 16 week multiple family group intervention through their respective outpatient community mental health clinic. Data was collected at baseline, midtest (8 weeks), posttest (16 weeks), and 6 month follow-up. Major findings include high engagement in the 4Rs 2Ss Family Strengthening Program, compared to standard services. Although child welfare status is not related to attendance, family stress and parental depression are also related to participant engagement in this multiple family group intervention. Involvement in the 4Rs 2Ss Family Strengthening Program resulted in improved effects for child behaviors. Lastly, no evidence of moderation effects on family stress, child welfare involvement, or parental needs were found. The 4Rs 2Ss Family Strengthening Program appeared able to engage families with more complex “real world” needs. PMID:26523115
Jones, Rachael M; Stayner, Leslie T; Demirtas, Hakan
2014-10-01
Drinking water may contain pollutants that harm human health. Depending on the pollutant, its concentration, and the community water system, monitoring may occur quarterly, annually, or less frequently. However, birth and other health outcomes are associated with narrow time-windows of exposure, so infrequent monitoring impedes linkage between water quality and health outcomes for epidemiological analyses. Our objective was to evaluate the performance of multiple imputation for filling in water quality values between measurements in community water systems (CWSs). The multiple imputation method was implemented in a simulated setting using data from the Atrazine Monitoring Program (AMP, 2006-2009 in five Midwestern states). Values were deleted from the AMP data to leave one measurement per month. Four patterns reflecting drinking water monitoring regulations were used to delete months of data in each CWS: three patterns were missing at random and one pattern was missing not at random. Synthetic health outcome data were created using a linear and a Poisson exposure-response relationship with five levels of hypothesized association, respectively. The multiple imputation method was evaluated by comparing the exposure-response relationships estimated from the multiply imputed data with the hypothesized associations. The four patterns deleted 65-92% of the months of atrazine observations in the AMP data. Even with these high rates of missing information, our procedure recovered most of the missing information when the synthetic health outcome was included, for the missing at random patterns and for the missing not at random patterns with low-to-moderate exposure-response relationships. Multiple imputation appears to be an effective method for filling in water quality values between measurements. Copyright © 2014 Elsevier Inc. All rights reserved.
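As a rough illustration of the general technique (not the paper's atrazine models), the sketch below deletes about 60% of a synthetic monthly series at random, imputes each gap from a simple normal model fitted to the observed months, and pools the mean with Rubin's rules. All data and parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "monthly concentration" series with months deleted at random.
n = 120
y = rng.lognormal(mean=0.0, sigma=0.5, size=n)
missing = rng.random(n) < 0.6          # ~60% of months unobserved
y[missing] = np.nan

obs = y[~np.isnan(y)]
m = 20                                  # number of imputed data sets
est, wvar = [], []
for _ in range(m):
    imp = y.copy()
    # Draw each missing month from a normal model of the observed months.
    draws = rng.normal(obs.mean(), obs.std(ddof=1), size=missing.sum())
    imp[np.isnan(imp)] = draws
    est.append(imp.mean())              # per-data-set estimate of the mean
    wvar.append(imp.var(ddof=1) / n)    # its within-imputation variance

qbar = np.mean(est)                     # pooled point estimate
b = np.var(est, ddof=1)                 # between-imputation variance
t = np.mean(wvar) + (1 + 1 / m) * b     # Rubin's total variance
print(round(qbar, 3), round(t, 5))
```

The total variance `t` properly inflates the uncertainty to reflect the imputation step, which is what makes downstream exposure-response estimates honest.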
Magnetic MIMO Signal Processing and Optimization for Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Yang, Gang; Moghadam, Mohammad R. Vedady; Zhang, Rui
2017-06-01
In magnetic resonant coupling (MRC) enabled multiple-input multiple-output (MIMO) wireless power transfer (WPT) systems, multiple transmitters (TXs) each with one single coil are used to enhance the efficiency of simultaneous power transfer to multiple single-coil receivers (RXs) by constructively combining their induced magnetic fields at the RXs, a technique termed "magnetic beamforming". In this paper, we study the optimal magnetic beamforming design in a multi-user MIMO MRC-WPT system. We introduce the multi-user power region that constitutes all the achievable power tuples for all RXs, subject to the given total power constraint over all TXs as well as their individual peak voltage and current constraints. We characterize each boundary point of the power region by maximizing the sum-power deliverable to all RXs subject to their minimum harvested power constraints. For the special case without the TX peak voltage and current constraints, we derive the optimal TX current allocation for the single-RX setup in closed-form as well as that for the multi-RX setup. In general, the problem is a non-convex quadratically constrained quadratic programming (QCQP), which is difficult to solve. For the case of one single RX, we show that the semidefinite relaxation (SDR) of the problem is tight. For the general case with multiple RXs, based on SDR we obtain two approximate solutions by applying time-sharing and randomization, respectively. Moreover, for practical implementation of magnetic beamforming, we propose a novel signal processing method to estimate the magnetic MIMO channel due to the mutual inductances between TXs and RXs. Numerical results show that our proposed magnetic channel estimation and adaptive beamforming schemes are practically effective, and can significantly improve the power transfer efficiency and multi-user performance trade-off in MIMO MRC-WPT systems.
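For intuition on the single-RX closed form, the following sketch checks numerically that under a total current-norm budget, TX currents aligned with the mutual-inductance vector maximize delivered power (a matched-filter solution). The inductance values and budget are illustrative, not from the paper:

```python
import numpy as np

# With mutual-inductance vector m between N TX coils and the single RX,
# delivered power scales with |m^T i|^2, so under ||i||^2 <= budget the
# optimal TX currents align with m.
m = np.array([0.8, 0.3, 0.5])
budget = 2.0

i_opt = np.sqrt(budget) * m / np.linalg.norm(m)
best = (m @ i_opt) ** 2          # equals budget * ||m||^2

# Any other feasible allocation delivers no more power:
rng = np.random.default_rng(2)
for _ in range(100):
    i = rng.normal(size=3)
    i *= np.sqrt(budget) / np.linalg.norm(i)
    assert (m @ i) ** 2 <= best + 1e-9
print(round(best, 4))
```

The multi-RX case with peak voltage and current constraints is where the problem becomes a non-convex QCQP and the SDR machinery of the paper is needed.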
Demirol, Aygul; Gurgan, Timur
2009-08-01
To compare the efficacy of the microdose flare-up and multiple-dose antagonist protocols for poor-responder patients in intracytoplasmic sperm injection-ET cycles. A randomized, prospective study. Center for assisted reproductive technology in Turkey. Ninety patients with poor ovarian response in a minimum of two previous IVF cycles. All women were prospectively randomized into two groups by computer-assisted randomization. The patients in group 1 were stimulated according to the microdose flare-up protocol (n = 45), while the patients in group 2 were stimulated according to antagonist multiple-dose protocol (n = 45). The mean number of mature oocytes retrieved was the primary outcome measure, and fertilization rate, implantation rate per embryo, and clinical pregnancy rates were secondary outcome measures. The mean age of the women, the mean duration of infertility, basal FSH level, and the number of previous IVF cycles were similar in both groups. The total gonadotropin dose used was significantly higher in group 2, while the number of oocytes retrieved was significantly greater in group 1. Although the fertilization and clinical pregnancy rates were nonsignificantly higher in group 1 compared with group 2, the implantation rate was significantly higher in the microdose flare-up group than in the multiple-dose antagonist group (22% vs. 11%). The microdose flare-up protocol seems to have a better outcome in poor-responder patients, with a significantly higher mean number of mature oocytes retrieved and higher implantation rate.
Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.
Paul, Jacob M; Reeve, Robert A; Forte, Jason D
2017-03-01
We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.
Multiple-instance ensemble learning for hyperspectral images
NASA Astrophysics Data System (ADS)
Ergul, Ugur; Bilgin, Gokhan
2017-10-01
An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use in hyperspectral images (HSIs) by inspiring the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed by a small percentage of training samples, and MI bags are formed by a local windowing process with variable window sizes on selected instances. In addition to bootstrap aggregation, random subspace is another method used to diversify base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single-test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are demonstrated with state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equipollent non-MIL algorithms.
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
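A hedged sketch of the likelihood machinery such models build on: a scaled forward pass for a two-state hidden Markov chain with a bivariate emission (one Gaussian, one Bernoulli response) assumed conditionally independent given the state. All parameter values are illustrative, and the paper's correlated random effects and Bayesian estimation are omitted:

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

pi0 = np.array([0.5, 0.5])                 # initial state distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])     # transition probabilities
mu, sd = np.array([0.0, 2.0]), np.array([1.0, 1.0])  # Gaussian emission
p = np.array([0.1, 0.7])                   # Bernoulli parameter per state

def log_likelihood(y_cont, y_bin):
    """Scaled forward algorithm for the joint bivariate response."""
    ll = 0.0
    alpha = pi0.copy()
    for t in range(len(y_cont)):
        emis = normal_pdf(y_cont[t], mu, sd) * np.where(y_bin[t], p, 1 - p)
        alpha = (alpha if t == 0 else alpha @ A) * emis
        c = alpha.sum()                    # rescale to avoid underflow
        ll += np.log(c)
        alpha /= c
    return ll

ll = log_likelihood(np.array([0.1, 1.8, 2.2, -0.3]), np.array([0, 1, 1, 0]))
print(round(ll, 3))
```

In the full model each subject would additionally carry correlated random effects linking the two response processes, sampled via MCMC.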
The effect of multiple encounters on short period comet orbits
NASA Technical Reports Server (NTRS)
Lowrey, B. E.
1972-01-01
The observed orbital elements of short period comets are found to be consistent with the hypothesis of derivation from long period comets as long as two assumptions are made: first, that the distribution of short period comets has been randomized by multiple encounters with Jupiter, and second, that the short period comets have lower velocities of encounter with Jupiter than is generally expected. Some 16% of the observed short period comets have lower encounter velocities than is allowed mathematically using Laplace's method. This may be due to double encounter processes with Jupiter and Saturn, or to prolonged encounters. The distribution of unobservable short period comets can be inferred in part from the observed comets. Many have orbits between Jupiter and Saturn with somewhat higher inclinations than those with perihelia near the earth. Debris from those comets may form the major component of the zodiacal dust.
The fuzzy cube and causal efficacy: representation of concomitant mechanisms in stroke.
Jobe, Thomas H.; Helgason, Cathy M.
1998-04-01
Twentieth century medical science has embraced nineteenth century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth century multi-valued logic, and computational structures that are content addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex, multi-causal clinical phenomenon: stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current methodology based on probability theory. We believe this finding is generalizable to all of clinical science, since multiple concomitant causal factors are involved in nearly all known pathological processes.
Dose-response relationships in multifunctional food design: assembling the evidence.
Aggett, Peter J
2012-03-01
Demonstrating single and multiple functions attributable to foods or specific food components is a challenge. The International Life Sciences Institute Europe co-ordinated EU concerted actions, Functional Food Science in Europe (FUFOSE) and the Process for the Assessment of Scientific Support for Claims on Food (PASSCLAIM), respectively, addressed the soundness of the evidence and its coherence with a mechanistic schema comprising valid markers of exposure, intermediate and final outcomes and the quality and integrity of the evidence overall. Demonstrating causality often relies on randomized controlled trials (RCTs). However, in public health and biomedical science there is concern about the suitability of RCTs as sole standards of evidence-based approaches. Alternative and complementary approaches using updated Hill's viewpoints for appraising the evidence can be used in conjunction with evidence-based mechanistic reasoning and the quality criteria proposed in FUFOSE and PASSCLAIM to design studies and to assemble evidence exploring single or multiple benefits from food components and foods.
Resonant activation in a colored multiplicative thermal noise driven closed system.
Ray, Somrita; Mondal, Debasish; Bag, Bidhan Chandra
2014-05-28
In this paper, we have demonstrated that resonant activation (RA) is possible even in a thermodynamically closed system where the particle experiences a random force and a spatio-temporal frictional coefficient from the thermal bath. For this stochastic process, we have observed a hallmark of RA phenomena in terms of a turnover behavior of the barrier-crossing rate as a function of noise correlation time at a fixed noise variance. The variance can be fixed either by changing temperature or by changing damping strength as a function of noise correlation time. Another observation is that the barrier-crossing rate passes through a maximum with increasing coupling strength of the multiplicative noise; if the damping strength is appreciably large, the maximum may disappear. Finally, we compare simulation results with the analytical calculation, which shows good agreement between analytical and numerical results.
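The fixed-variance protocol can be illustrated with exponentially correlated (Ornstein-Uhlenbeck) noise: the exact-discretization update below holds the variance sigma2 fixed while the correlation time tau varies. All numbers are illustrative, and the barrier-crossing dynamics themselves are not simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

def ou_path(sigma2, tau, dt, n):
    """Ornstein-Uhlenbeck noise with stationary variance sigma2 and
    correlation time tau, via the exact one-step update."""
    a = np.exp(-dt / tau)
    s = np.sqrt(sigma2 * (1 - a ** 2))
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(sigma2))   # start in the stationary state
    for i in range(1, n):
        x[i] = a * x[i - 1] + s * rng.normal()
    return x

variances = {}
for tau in (0.1, 1.0, 10.0):
    path = ou_path(sigma2=2.0, tau=tau, dt=0.05, n=200_000)
    variances[tau] = path.var()
    print(tau, round(variances[tau], 2))   # variance stays near 2.0
```

Sweeping tau at fixed variance in this way is exactly the kind of protocol under which the turnover (resonant activation) of the crossing rate is observed.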
The correlation structure of several popular pseudorandom number generators
NASA Technical Reports Server (NTRS)
Neuman, F.; Merrick, R.; Martin, C. F.
1973-01-01
One of the desirable properties of a pseudorandom number generator is that the sequence of numbers it generates should have very low autocorrelation for all shifts except zero shift and multiples of its cycle length. Due to the simple methods used to construct random number generators, this ideal is often not quite fulfilled. A simple method of examining any random number generator for previously unsuspected regularities is discussed. Once regularities are discovered, it is often easy to derive the mathematical relationships which describe the regular behavior. As examples, it is shown that high correlation exists in mixed and multiplicative congruential random number generators and in prime-modulus Lehmer generators for shifts that are a fraction of their cycle lengths.
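The detection idea can be tried directly: scan a generator's autocorrelation over shifts. A mixed congruential generator with a deliberately small multiplier (a toy choice, not one of the paper's generators) exhibits the kind of unsuspected regularity described, with lag-1 correlation near 1/a:

```python
import numpy as np

def lcg(a, c, m, seed, n):
    """Mixed linear congruential generator, normalized to [0, 1)."""
    out = np.empty(n)
    x = seed
    for i in range(n):
        x = (a * x + c) % m
        out[i] = x / m
    return out

# a = 5, c odd, m = 2**20 gives a full-period generator, but the small
# multiplier leaves strong serial correlation (theoretically ~ 1/a).
u = lcg(a=5, c=1, m=2 ** 20, seed=12345, n=100_000)

def autocorr(u, lag):
    return np.corrcoef(u[:-lag], u[lag:])[0, 1]

for lag in (1, 2, 3):
    print(lag, round(autocorr(u, lag), 3))
```

Repeating the scan over many shifts, including shifts near rational fractions of the cycle length, is the paper's method for exposing hidden structure in a candidate generator.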
Daniel-Filho, Durval Anibal; Pires, Elda Maria Stafuzza Gonçalves; Paes, Angela Tavares; Troster, Eduardo Juan; Silva, Simone Cristina Azevedo B S; Granato, Mariana Fachini; Couto, Thomaz Bittencourt; Barreto, Joyce Kelly Silva; Campos, Alexandre Holthausen; Monte, Julio C Martins; Schvartsman, Claudio
2017-10-01
Evaluation of non-cognitive skills had never been used in admissions in Brazil. This study aims to evaluate Multiple Mini Interviews (MMI) in the admission process of a School of Medicine in São Paulo, Brazil. The population of the study comprised 240 applicants summoned for the interviews, and 96 raters. The MMI contributed 25% of the applicants' final grade. Eight scenarios were created with the aim of evaluating different non-cognitive skills, and each had two raters. At the end of the interviews, the applicants and raters described their impressions of the MMI. The reliability of the MMI was analyzed using Generalizability Theory and the Many-Facet Rasch Model (MFRM). The G-study showed that the general reliability of the process was satisfactory (coefficient G = 0.743). The MMI grades were not affected by the raters' profile, time of interview (p = 0.715), or randomization group (p = 0.353). The Rasch analysis showed no misfitting or inconsistent stations or raters. A significant majority of the applicants (98%) and all the raters believed MMIs were important in selecting students with a more adequate profile to study medicine. The general reliability of the selection process was excellent, and it was fully accepted by the applicants and raters.
Thermally assisted nanosecond laser generation of ferric nanoparticles
NASA Astrophysics Data System (ADS)
Kurselis, K.; Kozheshkurt, V.; Kiyan, R.; Chichkov, B.; Sajti, L.
2018-03-01
A technique to increase nanosecond-laser-based production of ferric nanoparticles by elevating the temperature of the iron target and controlling its surface exposure to oxygen is reported. High power near-infrared laser ablation of the iron target heated up to 600 °C enhances the particle generation efficiency more than tenfold, exceeding 6 μg/J. Temporal and thermal dependencies of the particle generation process indicate correlation of this enhancement with the oxidative processes that take place on the iron surface during the per-spot interpulse delay. Nanoparticles produced using the heat-assisted ablation technique are examined using scanning electron and transmission electron microscopy, confirming the presence of 1-100 nm nanoparticles with an exponential size distribution that contain multiple randomly oriented magnetite nanocrystallites. The described process enables the application of high power lasers and facilitates precise, uniform, and controllable direct deposition of ferric nanoparticle coatings at industry-relevant rates.
Probabilistic measures of persistence and extinction in measles (meta)populations.
Gunning, Christian E; Wearing, Helen J
2013-08-01
Persistence and extinction are fundamental processes in ecological systems that are difficult to accurately measure due to stochasticity and incomplete observation. Moreover, these processes operate on multiple scales, from individual populations to metapopulations. Here, we examine an extensive new data set of measles case reports and associated demographics in pre-vaccine era US cities, alongside a classic England & Wales data set. We first infer the per-population quasi-continuous distribution of log incidence. We then use stochastic, spatially implicit metapopulation models to explore the frequency of rescue events and apparent extinctions. We show that, unlike critical community size, the inferred distributions account for observational processes, allowing direct comparisons between metapopulations. The inferred distributions scale with population size. We use these scalings to estimate extinction boundary probabilities. We compare these predictions with measurements in individual populations and random aggregates of populations, highlighting the importance of medium-sized populations in metapopulation persistence. © 2013 John Wiley & Sons Ltd/CNRS.
Statistical patterns of visual search for hidden objects
Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.
2012-01-01
The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
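The log-normal claim is easy to probe on synthetic data: if saccade lengths are log-normal, their logarithms are Gaussian, so the maximum-likelihood parameters are just the mean and standard deviation of the log-distances. The parameters below are illustrative, not the study's fits:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "saccade distances" drawn from a log-normal distribution.
saccades = rng.lognormal(mean=1.2, sigma=0.4, size=50_000)

logs = np.log(saccades)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
print(round(mu_hat, 2), round(sigma_hat, 2))   # near (1.2, 0.4)

# Log-normality fits the multiplicative-process picture the authors invoke:
# a product of many positive random factors is approximately log-normal.
factors = rng.uniform(0.9, 1.1, size=(50_000, 30)).prod(axis=1)
print(round(np.log(factors).std(), 3))
```

In practice one would compare such a log-normal fit against a power law for the fixation steps, since the two step regimes have different signatures.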
Applying Agrep to r-NSA to solve multiple sequences approximate matching.
Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak
2014-01-01
This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes a constant number of computations. The number of characters processed in applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical number of characters, which is obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences, as well as (real) genome sequences including Hepatitis-B Virus and X-chromosome. Experimental results show that, compared to the straightforward approach that applies Agrep to multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of our approach depends on the sequence patterns, and for highly similar homologous genome sequences, which are the common cases in real-life genomes, it can be up to several orders of magnitude.
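Agrep's core is bit-parallel approximate matching. A minimal Shift-And variant allowing up to k character mismatches (Hamming distance, a simplification of agrep's full edit-distance automaton) can be sketched as:

```python
def agrep_mismatches(pattern, text, k):
    """Bit-parallel search reporting 0-based end positions of windows that
    match `pattern` with at most k character mismatches (Hamming distance)."""
    m = len(pattern)
    B = {}                       # per-character position bitmasks
    for i, ch in enumerate(pattern):
        B[ch] = B.get(ch, 0) | (1 << i)
    R = [0] * (k + 1)            # R[d]: prefixes matched with <= d mismatches
    hits = []
    for pos, ch in enumerate(text):
        mask = B.get(ch, 0)
        prev = 0                 # old R[d-1], for the substitution transition
        for d in range(k + 1):
            old = R[d]
            new = ((old << 1) | 1) & mask      # extend with a matching char
            if d > 0:
                new |= (prev << 1) | 1         # or spend one mismatch
            prev = old
            R[d] = new
        if R[k] & (1 << (m - 1)):
            hits.append(pos)
    return hits

print(agrep_mismatches("abc", "zaxcz", 1))   # [3]: "axc" matches with 1 mismatch
print(agrep_mismatches("abc", "xxabcxx", 0))
```

The paper's contribution is not this matcher itself but running it over the r-NSA structure so that shared substrings across the multiple DNA sequences are processed once rather than per sequence.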
Collective effects in force generation by multiple cytoskeletal filaments pushing an obstacle
NASA Astrophysics Data System (ADS)
Aparna, J. S.; Das, Dipjyoti; Padinhateeri, Ranjith; Das, Dibyendu
2015-09-01
We report here recent findings that multiple cytoskeletal filaments (assumed rigid) pushing an obstacle typically generate more force than the sum of the forces due to the individual filaments. This interesting phenomenon, due to the hydrolysis process being out of equilibrium, escaped attention in the previous experimental and theoretical literature. We first demonstrate this numerically within a constant force ensemble, for a well known model of cytoskeletal filament dynamics with a random mechanism of hydrolysis. Two methods of detecting the departure from additivity of the collective stall force, namely from the force-velocity curve in the growing phase and from the average collapse time versus force curve in the bounded phase, are discussed. Since experiments have already been done for a similar system of multiple microtubules in a harmonic optical trap, we study the problem theoretically under harmonic force. We show that within the varying harmonic force ensemble too, the mean collective stall force of N filaments is greater than N times the mean stall force due to a single filament; the actual extent of departure is a function of the monomer concentration.
Shi, Fanrong; Tuo, Xianguo; Yang, Simon X.; Li, Huailiang; Shi, Rui
2017-01-01
Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed on the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed to build linear-topology WSNs and achieve time synchronization in this paper. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. A timeout restriction and retry mechanism are employed to avoid the estimation errors that are caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm can avoid the estimation errors caused by clock drift and minimize the estimation error due to large random delay jitter. The proposed algorithm is an accurate, low-complexity time synchronization algorithm for bridge health monitoring. PMID:28471418
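The offset estimator at the heart of TTME can be sketched as follows. Under the common assumption of symmetric mean delay with Gaussian jitter, the MLE over repeated exchanges reduces to averaging the per-exchange estimates; all timing values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-way exchange: sender timestamps T1 (send) and T4 (receive reply);
# receiver timestamps T2 (receive) and T3 (send reply).  Each exchange gives
# offset_hat = ((T2 - T1) - (T4 - T3)) / 2 when the mean delay is symmetric.
true_offset = 0.37                  # receiver clock ahead by 0.37 s
mean_delay, jitter = 0.010, 0.002   # link delay and Gaussian jitter, seconds

n = 50                              # multiple TTME observations
d_up = rng.normal(mean_delay, jitter, n)
d_down = rng.normal(mean_delay, jitter, n)
t1 = np.arange(n, dtype=float)      # one exchange per second
t2 = t1 + d_up + true_offset
t3 = t2 + 0.001                     # receiver processing time
t4 = t3 - true_offset + d_down

per_exchange = ((t2 - t1) - (t4 - t3)) / 2
offset_mle = per_exchange.mean()    # MLE under Gaussian delay jitter
print(round(offset_mle, 3))
```

The paper's timeout-and-retry mechanism would discard exchanges whose round trip is abnormally long, which keeps outlier software latencies out of this average.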
Worker and mother roles, spillover effects, and psychological distress.
Barnett, R C; Marshall, N L
1992-01-01
This paper examines the relationship between the occupancy and quality of multiple-roles and psychological distress in a stratified random sample of 403 women employed as licensed practical nurses and social workers. We examined the main effects of the quality of the employee and parent roles and the interaction effects between these variables. Negative- and positive-spillover effects, from job to parenting and from parenting to job, were examined in an attempt to illuminate the processes by which multiple roles affect employed mothers' vulnerability or resilience to psychological distress. We found no negative-spillover effects from job to parenting or from parenting to job, but we did find positive-spillover effects from job to parenting. Women with rewarding jobs were protected from the negative mental-health effects of troubled relationships with their children. This protection accrued to employed mothers regardless of their partnership status or the age of their children. Although based on cross-sectional analyses, these findings suggest mechanisms by which employed mothers reap a mental-health advantage from multiple roles, even when some of those roles are stressful.
Evaluation of Data Retention Characteristics for Ferroelectric Random Access Memories (FRAMs)
NASA Technical Reports Server (NTRS)
Sharma, Ashok K.; Teverovsky, Alexander
2001-01-01
Data retention and fatigue characteristics of 64 Kb lead zirconate titanate (PZT)-based Ferroelectric Random Access Memory (FRAM) microcircuits manufactured by Ramtron were examined over the temperature range from -85 C to +310 C for ceramic packaged parts and from -85 C to +175 C for plastic parts, during retention periods up to several thousand hours. Intrinsic failures, which were caused by thermal degradation of the ferroelectric cells, occurred in ceramic parts after tens or hundreds of hours of aging at temperatures above 200 C. The activation energy of the retention test failures was 1.05 eV, and the extrapolated mean-time-to-failure (MTTF) at room temperature was estimated to be more than 280 years. Multiple write-read cycling (up to 3x10^7 cycles) during the fatigue testing of plastic and ceramic parts did not result in any parametric or functional failures. However, operational currents decreased linearly with the logarithm of the number of cycles, indicating a fatigue process in the PZT films. Plastic parts, which had a more recent date code than the ceramic parts, appeared to use dies with an improved process technology and showed significantly smaller changes in operational currents and data access times.
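The room-temperature extrapolation follows the standard Arrhenius model with the reported activation energy of 1.05 eV. The sketch below computes the acceleration factor from a 200 C stress condition; the 100 h stress life is an illustrative number, not the paper's exact data:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
EA = 1.05        # eV, activation energy reported for the retention failures

def acceleration_factor(t_stress_c, t_use_c=25.0):
    """Arrhenius acceleration factor between stress and use temperatures."""
    t_stress = t_stress_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp(EA / K_B * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(200.0)
print(f"{af:.3g}")

# An illustrative ~100 h stress life at 200 C extrapolates to 100*af hours
# at 25 C, i.e. well beyond the >280-year figure quoted.
years = 100.0 * af / (24 * 365)
print(round(years))
```

The large activation energy is what makes a failure mode visible in hundreds of hours at stress yet negligible over a product lifetime at room temperature.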
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited to implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of GPU hardware resources. We make comparisons between the GPU and serial CPU Monte Carlo implementations to assess the speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
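The idea behind waste recycling can be shown in a few lines of ordinary (CPU) Metropolis sampling: rejected trial states still contribute to averages, weighted by their acceptance probability. This toy 1D harmonic-well example is a sketch of the single-trial estimator only, not the paper's multi-trial CUDA implementation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sample p(x) ~ exp(-x^2/2) with Metropolis and estimate <x^2> two ways.
def energy(x):
    return 0.5 * x * x

x = 0.0
plain, recycled = [], []
for _ in range(200_000):
    trial = x + rng.uniform(-1.5, 1.5)
    p_acc = min(1.0, np.exp(energy(x) - energy(trial)))
    # Waste recycling: both the trial and the current state contribute,
    # weighted by the acceptance probability and its complement.
    recycled.append(p_acc * trial ** 2 + (1 - p_acc) * x ** 2)
    if rng.random() < p_acc:
        x = trial
    plain.append(x ** 2)           # conventional estimator

print(round(np.mean(plain), 2), round(np.mean(recycled), 2))  # both near 1.0
```

Because the recycled estimator is a conditional expectation of the plain one, it is unbiased and typically lower variance; with many simultaneous trial states per move, as on a GPU, the same weighting is applied across the whole trial set.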
NASA Astrophysics Data System (ADS)
Nakajima, Ryo; Azuma, Atsushi; Yoshida, Hayato; Shimizu, Tomohiro; Ito, Takeshi; Shingubara, Shoso
2018-06-01
Resistive random access memory (ReRAM) devices with a HfO2 dielectric layer have been studied extensively owing to the good reproducibility of their SET/RESET switching properties. Furthermore, it was reported that a thin Hf layer next to a HfO2 layer stabilized switching properties because of the oxygen scavenging effect. In this work, we studied the Hf thickness dependence of the resistance switching characteristics of a Ti/Hf/HfO2/Au ReRAM device. It is found that the optimum Hf thickness is approximately 10 nm to obtain good reproducibility of SET/RESET voltages with a small RESET current. However, when the Hf thickness was very small (∼2 nm), the device failed after the first RESET process owing to the very large RESET current. In the case of a very thick Hf layer (∼20 nm), RESET did not occur owing to the formation of a leaky dielectric layer. We observed the occurrence of multiple resistance states in the RESET process of the device with a Hf thickness of 10 nm by increasing the RESET voltage stepwise.
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard stimuli of various shapes, patterns, and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed moving (expanding ring, rotating wedge, flipping hour glass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns, and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher when processing moving visual stimuli as compared with the static stimulus. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus as compared with moving stimuli. Visual stimulation with the various shapes, patterns, and sizes used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in processing the static visual stimulus.
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
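A multiplicative log-normal cascade is straightforward to simulate, which is how moment-based estimators of this kind are typically validated. A minimal dyadic-cascade sketch (depth and intermittency parameter are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels = 10        # cascade depth; 2**10 = 1024 data points
lam2 = 0.2           # intermittency parameter (illustrative)

w = np.ones(2 ** n_levels)
for level in range(1, n_levels + 1):
    # one independent log-normal multiplier per dyadic interval at this level
    mults = np.exp(rng.normal(0.0, np.sqrt(lam2), 2 ** level))
    w *= np.repeat(mults, 2 ** (n_levels - level))

# a simple moment-based estimate: the marginal variance of log w grows
# linearly with cascade depth, so lam2_hat = var(log w) / n_levels
lam2_hat = np.log(w).var() / n_levels
```

The GMM procedure in the paper generalizes this idea by matching a whole vector of such moment conditions with an optimal weighting, rather than inverting a single moment.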
Leavitt, V M; Cirnigliaro, C; Cohen, A; Farag, A; Brooks, M; Wecht, J M; Wylie, G R; Chiaravalloti, N D; DeLuca, J; Sumowski, J F
2014-01-01
Multiple sclerosis leads to prominent hippocampal atrophy, which is linked to memory deficits. Indeed, 50% of multiple sclerosis patients suffer memory impairment, with negative consequences for quality of life. There are currently no effective memory treatments for multiple sclerosis, either pharmacological or behavioral. Aerobic exercise improves memory and promotes hippocampal neurogenesis in nonhuman animals. Here, we investigate the benefits of aerobic exercise in memory-impaired multiple sclerosis patients. Pilot data were collected from two ambulatory, memory-impaired multiple sclerosis participants randomized to non-aerobic (stretching) and aerobic (stationary cycling) conditions. The following baseline/follow-up measurements were taken: high-resolution MRI (neuroanatomical volumes), fMRI (functional connectivity), and memory assessment. The intervention comprised 30-minute sessions 3 times per week for 3 months. Aerobic exercise resulted in a 16.5% increase in hippocampal volume and a 53.7% increase in memory, as well as increased hippocampal resting-state functional connectivity. Improvements were specific, with no comparable changes in overall cerebral gray matter (+2.4%), non-hippocampal deep gray matter structures (thalamus, caudate: -4.0%), or non-memory cognitive functioning (executive functions, processing speed, working memory: changes ranged from -11% to +4%). Non-aerobic exercise resulted in essentially no change in hippocampal volume (2.8%) or memory (0.0%), and no changes in hippocampal functional connectivity. This is the first evidence that aerobic exercise can increase hippocampal volume and connectivity and improve memory in multiple sclerosis. Aerobic exercise represents a cost-effective, widely available, natural, and self-administered treatment with no adverse side effects that may be the first effective memory treatment for multiple sclerosis patients.
Frequency of RNA–RNA interaction in a model of the RNA World
Striggles, John C.; Martin, Matthew B.; Schmidt, Francis J.
2006-01-01
The RNA World model for prebiotic evolution posits the selection of catalytic/template RNAs from random populations. The mechanisms by which these random populations could be generated de novo are unclear. Non-enzymatic and RNA-catalyzed nucleic acid polymerizations are poorly processive, which means that the resulting short-chain RNA population could contain only limited diversity. Nonreciprocal recombination of smaller RNAs provides an alternative mechanism for the assembly of larger species with concomitantly greater structural diversity; however, the frequency of any specific recombination event in a random RNA population is limited by the low probability of an encounter between any two given molecules. This low probability could be overcome if the molecules capable of productive recombination were redundant, with many nonhomologous but functionally equivalent RNAs being present in a random population. Here we report fluctuation experiments to estimate the redundancy of the set of RNAs in a population of random sequences that are capable of non-Watson-Crick interaction with another RNA. Parallel SELEX experiments showed that at least one in 10⁶ random 20-mers binds to the P5.1 stem–loop of Bacillus subtilis RNase P RNA with affinities equal to that of its naturally occurring partner. This high frequency predicts that a single RNA in an RNA World would encounter multiple interacting RNAs within its lifetime, supporting recombination as a plausible mechanism for prebiotic RNA evolution. The large number of equivalent species implies that the selection of any single interacting species in the RNA World would be a contingent event, i.e., one resulting from historical accident. PMID:16495233
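The quantitative claim can be checked with a back-of-the-envelope Poisson calculation (the pool size below is hypothetical, chosen only to illustrate the scale of the effect):

```python
import math

freq = 1e-6        # reported: at least one in 10^6 random 20-mers binds
population = 1e9   # hypothetical number of random RNAs in a prebiotic pool

expected_partners = population * freq               # mean number of binders
p_at_least_one = 1 - math.exp(-expected_partners)   # Poisson approximation

print(expected_partners)   # 1000.0
print(p_at_least_one)      # effectively 1.0
```

Even a pool many orders of magnitude smaller than this would still make an encounter with at least one functional partner overwhelmingly likely, which is the point of the fluctuation analysis.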
Multi-agent coordination in directed moving neighbourhood random networks
NASA Astrophysics Data System (ADS)
Shang, Yi-Lun
2010-07-01
This paper considers the consensus problem for multiple dynamical agents that communicate via a directed moving neighbourhood random network. Each agent performs a random walk on a weighted directed network. Agents interact with each other through random unidirectional information flow when they coincide in the underlying network at a given instant. For such a framework, we present sufficient conditions for almost sure asymptotic consensus. Numerical examples illustrate the effectiveness of the obtained results.
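The mechanism is easy to prototype. The sketch below is a simplified, hypothetical setup (a small undirected graph with uniform walk probabilities, and pairwise averaging on coincidence), not the weighted directed networks analysed in the paper:

```python
import random

random.seed(1)
# small connected graph: a 5-node ring plus two chords
adj = {0: [1, 4, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 4], 4: [3, 0]}

pos = [0, 2, 4]            # current node of each of three agents
val = [0.0, 5.0, 10.0]     # scalar states on which to reach consensus

for _ in range(20_000):
    # each agent takes one random-walk step
    pos = [random.choice(adj[p]) for p in pos]
    # unidirectional information flow on coincidence: j moves toward i
    for i in range(3):
        for j in range(3):
            if i != j and pos[i] == pos[j]:
                val[j] = 0.5 * (val[j] + val[i])

spread = max(val) - min(val)
print(spread)   # shrinks toward 0 as the agents keep meeting
```

Each coincidence halves the gap between the two agents involved, so the spread is non-increasing and, since the walkers keep meeting on a connected graph, it converges to zero almost surely.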
NASA Astrophysics Data System (ADS)
Bulanov, S. V.; Esirkepov, T. Zh.; Koga, J. K.; Bulanov, S. S.; Gong, Z.; Yan, X. Q.; Kando, M.
2017-04-01
The multiple colliding laser pulse concept formulated by Bulanov et al. (Phys. Rev. Lett., vol. 104, 2010b, 220404) is beneficial for achieving an extremely high amplitude of coherent electromagnetic field. Since the topology of electric and magnetic fields of multiple colliding laser pulses oscillating in time is far from trivial and the radiation friction effects are significant in the high field limit, the dynamics of charged particles interacting with the multiple colliding laser pulses demonstrates remarkable features corresponding to random walk trajectories, limit circles, attractors, regular patterns and Lévy flights. Under extremely high intensity conditions the nonlinear dissipation mechanism stabilizes the particle motion resulting in the charged particle trajectory being located within narrow regions and in the occurrence of a new class of regular patterns made by the particle ensembles.
Yavorska, Olena O; Burgess, Stephen
2017-12-01
MendelianRandomization is a software package for the R open-source software environment that performs Mendelian randomization analyses using summarized data. The core functionality is to implement the inverse-variance weighted, MR-Egger and weighted median methods for multiple genetic variants. Several options are available to the user, such as the use of robust regression, fixed- or random-effects models and the penalization of weights for genetic variants with heterogeneous causal estimates. Extensions to these methods, such as allowing for variants to be correlated, can be chosen if appropriate. Graphical commands allow summarized data to be displayed in an interactive graph, and causal estimates from multiple methods to be plotted for comparison. Although the main method of data entry is directly by the user, there is also an option for allowing summarized data to be incorporated from the PhenoScanner database of genotype-phenotype associations. We hope to develop this feature in future versions of the package. The R software environment is available for download from [https://www.r-project.org/]. The MendelianRandomization package can be downloaded from the Comprehensive R Archive Network (CRAN) within R, or directly from [https://cran.r-project.org/web/packages/MendelianRandomization/]. Both R and the MendelianRandomization package are released under GNU General Public Licenses (GPL-2|GPL-3). © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
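The package's core inverse-variance weighted (IVW) estimate has a simple closed form: the causal estimate is Σ βXj βYj / σYj² divided by Σ βXj² / σYj². A toy check in Python with made-up summary statistics (in practice one would call the package's `mr_ivw` function in R):

```python
# hypothetical summary statistics for three genetic variants, constructed
# with a true causal effect of 0.5 and no pleiotropy or noise
bx = [0.10, 0.20, 0.30]   # variant-exposure associations
by = [0.05, 0.10, 0.15]   # variant-outcome associations
se = [0.01, 0.02, 0.01]   # standard errors of the outcome associations

num = sum(b * g / s ** 2 for b, g, s in zip(bx, by, se))
den = sum(b ** 2 / s ** 2 for b, s in zip(bx, se))
beta_ivw = num / den
print(beta_ivw)   # 0.5 by construction (up to floating point)
```

The MR-Egger and weighted median methods mentioned in the abstract modify this estimator to be robust to certain forms of pleiotropy.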
Random matrices with external source and the asymptotic behaviour of multiple orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aptekarev, Alexander I; Lysov, Vladimir G; Tulyakov, Dmitrii N
2011-02-28
Ensembles of random Hermitian matrices with a distribution measure defined by an anharmonic potential perturbed by an external source are considered. The limiting characteristics of the eigenvalue distribution of the matrices in these ensembles are related to the asymptotic behaviour of a certain system of multiple orthogonal polynomials. Strong asymptotic formulae are derived for this system. As a consequence, for matrices in this ensemble the limit mean eigenvalue density is found, and a variational principle is proposed to characterize this density. Bibliography: 35 titles.
NASA Astrophysics Data System (ADS)
Mizutani, Tomoko; Takeuchi, Kiyoshi; Saraya, Takuya; Kobayashi, Masaharu; Hiramoto, Toshiro
2018-04-01
We propose a new version of the post-fabrication static random access memory (SRAM) self-improvement technique, which utilizes multiple stress applications. Using a device matrix array (DMA) test element group (TEG) with intrinsic-channel fully depleted (FD) silicon-on-thin-buried-oxide (SOTB) six-transistor (6T) SRAM cells fabricated in a 65 nm technology, we demonstrate that the data retention voltage (DRV) is lowered more effectively than with the previously proposed single-stress technique.
Critical appraisal of clinical trials in multiple system atrophy: Toward better quality.
Castro Caldas, Ana; Levin, Johannes; Djaldetti, Ruth; Rascol, Olivier; Wenning, Gregor; Ferreira, Joaquim J
2017-10-01
Multiple system atrophy (MSA) is a rare neurodegenerative disease of undetermined cause. Although many clinical trials have been conducted, there is still no treatment that cures the disease or slows its progression. We sought to assess the methodology and quality of reporting of clinical trials conducted in MSA patients. We conducted a systematic review of all trials with at least 1 MSA patient subject to any pharmacological/nonpharmacological intervention. Two independent reviewers evaluated the methodological characteristics and quality of reporting of the trials. A total of 60 clinical trials were identified, including 1375 MSA patients. Of the trials, 51% (n = 31) were single-arm studies. A total of 28% (n = 17) had a parallel design, of which 13 were placebo controlled. Of the studies, 8 (13.3%) were conducted in a multicenter setting, 3 of which were responsible for 49.3% (n = 678) of the total included MSA patients. The description of primary outcomes was unclear in 60% (n = 40) of trials. Only 10 (16.7%) clinical trials clearly described the randomization process. Blinding of participants, personnel, and outcome assessments was at high risk of bias in the majority of studies. The number of dropouts/withdrawals was high (n = 326, 23.4% of included patients). Overall, the design and quality of reporting of the reviewed studies are unsatisfactory. Most clinical trials were small and single centered. Inadequate reporting related to information on the randomization process, sequence generation, allocation concealment, blinding of participants, and sample size calculations. Although improved in recent years, methodological quality and trial design need to be optimized to generate more informative results. © 2017 International Parkinson and Movement Disorder Society.
Fathima, Mariam; Peiris, David; Naik-Panvelkar, Pradnya; Saini, Bandana; Armour, Carol Lyn
2014-12-02
The use of computerized clinical decision support systems may improve the diagnosis and ongoing management of chronic diseases, which require recurrent visits to multiple health professionals, disease and medication monitoring, and modification of patient behavior. The aim of this review was to systematically review randomized controlled trials evaluating the effectiveness of computerized clinical decision support systems (CCDSSs) in the care of people with asthma and COPD. Randomized controlled trials published between 2003 and 2013 were searched using multiple electronic databases: Medline, EMBASE, CINAHL, IPA, Informit, PsychINFO, Compendex, and the Cochrane Clinical Controlled Trials Register. To be included, RCTs had to evaluate the role of CCDSSs for asthma and/or COPD in primary care. Nineteen studies representing 16 RCTs met our inclusion criteria. The majority of the trials were conducted in patients with asthma. Study quality was generally high. Meta-analysis was not conducted because of methodological and clinical heterogeneity. The use of a CCDSS improved asthma and COPD care in 14 of the 19 studies reviewed (74%). Nine of the nineteen studies showed statistically significant (p < 0.05) improvement in the primary outcomes measured. The majority of the studies evaluated health care process measures as their primary outcomes (10/19). Evidence supports the effectiveness of CCDSSs in the care of people with asthma, but there is very little information on their use in COPD care. Although there is considerable improvement in health care process measures and clinical outcomes through the use of CCDSSs, their effects on user workload and efficiency, safety, costs of care, and provider and patient satisfaction remain understudied.
Randomizer for High Data Rates
NASA Technical Reports Server (NTRS)
Garon, Howard; Sank, Victor J.
2018-01-01
NASA as well as a number of other space agencies now recognize that the currently recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 maximal length sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to ensure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. We have shown that the old BTG concept is a limited view of the real value of the randomizer sequence, and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines considered here, there are multiple maximal length sequences over GF(2) which appear attractive in this application. Although there may be mitigating reasons why another MLS could be selected, one sequence in particular possesses a combination of desired properties which sets it apart from the others.
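Such a randomizer is generated by an 8-stage linear-feedback shift register. The toy model below implements the recurrence for the polynomial x⁸ + x⁷ + x⁵ + x³ + 1 documented for the CCSDS TM randomizer and counts its period (a software sketch for illustration, not flight code):

```python
def lfsr_period(taps, n_bits=8):
    """Steps until a Fibonacci LFSR returns to its all-ones seed state."""
    seed = [1] * n_bits          # state holds [a_{n-1}, ..., a_{n-n_bits}]
    state = seed[:]
    steps = 0
    while True:
        new = 0                  # a_n = XOR of the bits at the tap lags
        for t in taps:
            new ^= state[t - 1]
        state = [new] + state[:-1]
        steps += 1
        if state == seed:
            return steps

# tap lags {1, 3, 5, 8} realize the recurrence of x^8 + x^7 + x^5 + x^3 + 1
period = lfsr_period([1, 3, 5, 8])
print(period)   # a primitive polynomial gives the maximal period 2^8 - 1 = 255
```

The 255-bit period is exactly the problem the abstract describes: covering a long CADU requires repeating the sequence many times, which concentrates energy into spectral lines (spurs); a longer MLS spreads that energy out.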
NASA Astrophysics Data System (ADS)
Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun
2018-07-01
Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
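The Zigzag scan used to splice spectra into a composite can be written compactly. A sketch of the scan itself (the DCT, spectrum-cutting, and chaotic DFrRT stages are omitted):

```python
def zigzag(block):
    """Return the elements of a square matrix in JPEG-style zigzag order."""
    n = len(block)
    order = sorted(
        ((i, j) for i in range(n) for j in range(n)),
        # walk the anti-diagonals in turn, alternating traversal direction
        key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]),
    )
    return [block[i][j] for i, j in order]

print(zigzag([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]]))   # [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

Because the scan visits low-frequency coefficients first, truncating the zigzag-ordered list is a natural way to implement the spectrum cutting that the abstract describes.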
Caperchione, Cristina M; Duncan, Mitch J; Rosenkranz, Richard R; Vandelanotte, Corneel; Van Itallie, Anetta K; Savage, Trevor N; Hooker, Cindy; Maeder, Anthony J; Mummery, W Kerry; Kolt, Gregory S
2016-04-15
To describe in detail the recruitment methods and enrollment rates, the screening methods, and the baseline characteristics of a sample of adults participating in the Walk 2.0 Study, an 18-month, 3-arm randomized controlled trial of a Web 2.0 based physical activity intervention. A two-fold recruitment plan was developed and implemented, including a direct mail-out to an extract from the Australian Electoral Commission electoral roll, and other supplementary methods including email and telephone. Physical activity screening involved two steps: a validated single-item self-report instrument and the follow-up Active Australia Questionnaire. Readiness for physical activity participation was also based on a two-step process of administering the Physical Activity Readiness Questionnaire and, where needed, further clearance from a medical practitioner. Across all recruitment methods, a total of 1244 participants expressed interest in participating, of which 656 were deemed eligible. Of these, 504 were later enrolled in the Walk 2.0 trial (77% enrollment rate) and randomized to the Walk 1.0 group (n = 165), the Walk 2.0 group (n = 168), or the Logbook group (n = 171). Mean age of the total sample was 50.8 years, with 65.2% female and 79.1% born in Australia. The results of this recruitment process demonstrate the successful use of multiple strategies to obtain a diverse sample of adults eligible to take part in a web-based physical activity promotion intervention. The use of dual screening processes ensured safe participation in the intervention. This approach to recruitment and physical activity screening can be used as a model for further trials in this area.
Assessing the significance of pedobarographic signals using random field theory.
Pataky, Todd C
2008-08-07
Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
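The conservatism of the Bonferroni approach is easy to quantify. A sketch comparing uncorrected and Bonferroni-corrected one-sided z thresholds for a hypothetical pixel count (RFT thresholds, which depend on field smoothness and search-area geometry, fall between these two extremes and are not computed here):

```python
from statistics import NormalDist

n_pixels = 10_000   # hypothetical number of pixels in the search area
alpha = 0.05

z_uncorrected = NormalDist().inv_cdf(1 - alpha)
z_bonferroni = NormalDist().inv_cdf(1 - alpha / n_pixels)

print(round(z_uncorrected, 2))   # 1.64
print(round(z_bonferroni, 2))    # 4.42: a far stricter per-pixel threshold
```

Because neighbouring pixels in a smooth pressure field are highly correlated, far fewer than 10,000 independent tests are actually being performed, which is exactly the slack that RFT exploits.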
Wach, Achim; Dembowsky, Klaus; Dale, Glenn E
2018-04-01
Murepavadin is the first in class of the outer membrane protein-targeting antibiotics (OMPTA) and a pathogen-specific peptidomimetic antibacterial with a novel, nonlytic mechanism of action targeting Pseudomonas aeruginosa. Murepavadin is being developed for the treatment of hospital-acquired bacterial pneumonia (HABP) and ventilator-associated bacterial pneumonia (VABP). The pharmacokinetics (PK) and safety of single and multiple doses of murepavadin were investigated in healthy male subjects. Part A of the study was a double-blind, randomized, placebo-controlled, single-ascending-dose investigation in 10 sequential cohorts where each cohort comprised 6 healthy male subjects; 4 subjects were randomized to murepavadin, and 2 subjects were randomized to placebo. Part B was a double-blind, randomized, placebo-controlled, multiple-ascending-dose investigation in 3 sequential cohorts. After a single dose of murepavadin, the geometric mean half-life (2.52 to 5.30 h), the total clearance (80.1 to 114 ml/h/kg), and the volume of distribution (415 to 724 ml/kg) were consistent across dose levels. The pharmacokinetics of the dosing regimens evaluated were dose proportional and linear. Murepavadin was well tolerated, adverse events were transient and generally mild, and no dose-limiting toxicity was identified. Copyright © 2018 American Society for Microbiology.
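As a consistency check, the reported PK parameters obey the standard one-compartment relation t½ = ln 2 · V/CL. Pairing the largest clearance with the smallest volume of distribution reproduces the lower end of the reported half-life range (the exact cohort-by-cohort pairings are not given in the abstract):

```python
import math

CL = 114.0   # total clearance, ml/h/kg (upper end of the reported range)
V = 415.0    # volume of distribution, ml/kg (lower end of the reported range)

t_half = math.log(2) * V / CL
print(round(t_half, 2))   # 2.52 h, matching the reported lower bound
```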
Effect of Processing Delay and Storage Conditions on Urine Albumin-to-Creatinine Ratio.
Herrington, William; Illingworth, Nicola; Staplin, Natalie; Kumar, Aishwarya; Storey, Ben; Hrusecka, Renata; Judge, Parminder; Mahmood, Maria; Parish, Sarah; Landray, Martin; Haynes, Richard; Baigent, Colin; Hill, Michael; Clark, Sarah
2016-10-07
Because there is substantial biologic intraindividual variation in albumin excretion, randomized trials of albuminuria-reducing therapies may need multiple urine samples to estimate daily urinary albumin excretion. Mailing spot urine samples could offer a convenient and cost-effective method to collect multiple samples, but urine albumin-to-creatinine ratio stability in samples stored at ambient temperatures for several days is unknown. Patients with kidney disease provided fresh urine samples in two tubes (with and without boric acid preservative). Reference aliquots from each participant were analyzed immediately, whereas remaining aliquots were subject to different handling/storage conditions before analysis, including delayed processing for up to 7 days at three different storage temperatures (4°C, 18°C, and 30°C), multiple freeze-thaw cycles, and long-term frozen storage at -80°C, -40°C, and -20°C. We calculated the mean percentage change in urine albumin-to-creatinine ratio for each condition, and we considered samples stable if the 95% confidence interval was within a ±5% threshold. Ninety-three patients provided samples with detectable albuminuria in the reference aliquot. Median (interquartile range) urine albumin-to-creatinine ratio was 87 (20-499) mg/g. The inclusion of preservative had minimal effect on fresh urine albumin-to-creatinine ratio measurements but reduced the changes in albumin and creatinine in samples subject to processing delay and storage conditions. The urine albumin-to-creatinine ratio was stable for 7 days in samples containing preservative at 4°C and 18°C and 2 days when stored at 30°C. It was also stable in samples with preservative after three freeze-thaw cycles and in frozen storage for 6 months at -80°C or -40°C but not at -20°C. 
Mailed urine samples collected with preservative and received within 7 days if ambient temperature is ≤18°C, or within 2 days if the temperature is higher but does not exceed 30°C, are suitable for the measurement of urine albumin-to-creatinine ratio in randomized trials. Preserved samples frozen to -40°C or -80°C for 6 months before analysis also seem suitable. Copyright © 2016 by the American Society of Nephrology.
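The study's stability criterion, a 95% CI of the mean percentage change lying entirely within ±5%, is simple to encode. A sketch with hypothetical percentage changes and a normal-approximation interval (the study's exact statistical procedure may differ):

```python
import math

def is_stable(pct_changes, threshold=5.0):
    """True if the 95% CI of the mean percentage change sits within ±threshold."""
    n = len(pct_changes)
    mean = sum(pct_changes) / n
    var = sum((c - mean) ** 2 for c in pct_changes) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return -threshold <= mean - half_width and mean + half_width <= threshold

# hypothetical paired-sample changes (percent) for one storage condition
changes = [1.0, -1.0, 0.5, 2.0, -0.5, 1.5, 0.0, -2.0, 1.0, 0.5]
print(is_stable(changes))   # True: interval ≈ (-0.45, 1.05) sits inside ±5%
```

Note that the criterion fails either when the mean change is large or when the variability across samples is large, so both systematic degradation and inconsistent handling are penalized.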
Efficient collective influence maximization in cascading processes with first-order transitions
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-01-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988
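A generic threshold cascade is easy to simulate, which makes the seed-selection problem concrete. A minimal sketch (an Erdős-Rényi graph and a uniform fractional threshold chosen for illustration; the paper's subcritical-path algorithm is not reproduced here):

```python
import random

random.seed(2)
n, p = 200, 0.03
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].append(j)
            adj[j].append(i)

def cascade_size(seeds, theta=0.3):
    """Threshold model: a node activates once a fraction >= theta of its
    neighbours is active; iterate to a fixed point and return the final size."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v not in active and adj[v]:
                if sum(u in active for u in adj[v]) / len(adj[v]) >= theta:
                    active.add(v)
                    changed = True
    return len(active)

# compare seeding the 10 highest-degree nodes against 10 random nodes
by_degree = sorted(range(n), key=lambda v: -len(adj[v]))[:10]
print(cascade_size(by_degree), cascade_size(random.sample(range(n), 10)))
```

Heuristics such as high-degree seeding are exactly the baselines the paper's subcritical-path method is compared against; counting the subcritical paths attached to each candidate seed is what lets their algorithm scale to massive networks.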
Jordan, Jennifer; McIntosh, Virginia V W; Carter, Frances A; Joyce, Peter R; Frampton, Christopher M A; Luty, Suzanne E; McKenzie, Janice M; Carter, Janet D; Bulik, Cynthia M
2017-08-01
Failure to complete treatment for anorexia nervosa (AN) is common and clinically concerning but difficult to predict. This study examines whether therapy-related factors (patient-rated pretreatment credibility and early therapeutic alliance) predict subsequent premature termination of treatment (PTT) alongside self-transcendence (a previously identified clinical predictor) in women with AN. Participants were 56 women aged 17-40 years taking part in a randomized outpatient psychotherapy trial for AN. Treatment completion was defined as attending 15/20 planned sessions. Measures were the Treatment Credibility scale, the Temperament and Character Inventory, the Vanderbilt Therapeutic Alliance Scale, and the Vanderbilt Psychotherapy Process Scale. Statistics were univariate tests, correlations, and logistic regression. Treatment credibility and certain early patient and therapist alliance/process subscales predicted PTT. Lower self-transcendence and lower early process accounted for 33% of the variance in predicting PTT. Routine assessment of treatment credibility and early process (comprehensively assessed from multiple perspectives) may help clinicians reduce PTT, thereby enhancing treatment outcomes. © 2017 Wiley Periodicals, Inc.
Efficient collective influence maximization in cascading processes with first-order transitions
NASA Astrophysics Data System (ADS)
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-03-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches.
Multiple-Choice and Short-Answer Exam Performance in a College Classroom
ERIC Educational Resources Information Center
Funk, Steven C.; Dickson, K. Laurie
2011-01-01
The authors experimentally investigated the effects of multiple-choice and short-answer format exam items on exam performance in a college classroom. They randomly assigned 50 students to take a 10-item short-answer pretest or posttest on two 50-item multiple-choice exams in an introduction to personality course. Students performed significantly…
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki
2010-07-05
We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at a 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exawatt level while maintaining diffraction-limited beam quality.
Random walk hierarchy measure: What is more hierarchical, a chain, a tree or a star?
Czégel, Dániel; Palla, Gergely
2015-01-01
Signs of hierarchy are prevalent in a wide range of systems in nature and society. One of the key problems is quantifying the importance of hierarchical organisation in the structure of the network representing the interactions or connections between the fundamental units of the studied system. Although a number of notable methods are already available, the vast majority treat all directed acyclic graphs as already maximally hierarchical. Here we propose a hierarchy measure based on random walks on the network. The novelty of our approach is that directed trees corresponding to multi-level pyramidal structures obtain higher hierarchy scores than directed chains and directed stars. Furthermore, in the thermodynamic limit the hierarchy measure of regular trees converges to a well-defined limit depending only on the branching number. When applied to real networks, our method is computationally very efficient, as the result can be evaluated with arbitrary precision by subsequent multiplications of the transition matrix describing the random walk process. In addition, tests on real-world networks provided very intuitive results; e.g., the trophic levels obtained from our approach on a food web were highly consistent with former results from ecology. PMID:26657012
Random walk hierarchy measure: What is more hierarchical, a chain, a tree or a star?
NASA Astrophysics Data System (ADS)
Czégel, Dániel; Palla, Gergely
2015-12-01
Signs of hierarchy are prevalent in a wide range of systems in nature and society. One of the key problems is quantifying the importance of hierarchical organisation in the structure of the network representing the interactions or connections between the fundamental units of the studied system. Although a number of notable methods are already available, the vast majority treat all directed acyclic graphs as already maximally hierarchical. Here we propose a hierarchy measure based on random walks on the network. The novelty of our approach is that directed trees corresponding to multi-level pyramidal structures obtain higher hierarchy scores than directed chains and directed stars. Furthermore, in the thermodynamic limit the hierarchy measure of regular trees converges to a well-defined limit depending only on the branching number. When applied to real networks, our method is computationally very efficient, as the result can be evaluated with arbitrary precision by subsequent multiplications of the transition matrix describing the random walk process. In addition, tests on real-world networks provided very intuitive results; e.g., the trophic levels obtained from our approach on a food web were highly consistent with former results from ecology.
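The evaluation-by-matrix-powers idea in the abstract above can be illustrated with a minimal sketch. This is not the authors' hierarchy score itself, only the underlying mechanism: build a row-stochastic transition matrix from an adjacency matrix and propagate a walker's occupation probabilities by repeated multiplication, to arbitrary precision. The 4-node chain and the bipartite-averaging step are illustrative assumptions.

```python
import numpy as np

def transition_matrix(A):
    """Row-normalise an adjacency matrix into a random-walk transition matrix."""
    A = np.asarray(A, dtype=float)
    return A / A.sum(axis=1, keepdims=True)

# Undirected 4-node chain 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
P = transition_matrix(A)

# Repeated multiplication by P propagates a walker's occupation probabilities;
# random-walk quantities can be evaluated to arbitrary precision this way.
p = np.array([1.0, 0, 0, 0])          # walker starts at node 0
for _ in range(1000):
    p = p @ P
# The chain is bipartite, so average two consecutive steps to obtain the
# degree-proportional stationary distribution (1, 2, 2, 1) / 6.
p = (p + p @ P) / 2
```

The same matrix-power machinery is what makes the hierarchy measure cheap to evaluate on large networks.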
Role of 3D animation in periodontal patient education: a randomized controlled trial.
Cleeren, Gertjan; Quirynen, Marc; Ozcelik, Onur; Teughels, Wim
2014-01-01
This randomized controlled parallel trial investigates the effect of 3D animation on the increase and recall of knowledge about periodontitis by patients with periodontitis. The effects of a 3D animation (3D animation group) were compared with narration and drawing (control group) for periodontal patient education. A total of 68 periodontitis patients were stratified according to educational level and then randomly allocated to the control or 3D animation group. All patients received: (1) a pre-test (baseline knowledge), (2) a patient education video (3D animation or control video), (3) a post-test (knowledge immediately after viewing the video), and (4) a follow-up test (knowledge recall after 2 weeks). Each test contained 10 multiple-choice questions. There was no significant difference in baseline knowledge. Patients receiving the 3D animations had significantly higher scores on both the post-test and the follow-up test than patients in the control group. 3D animations are more effective than real-time drawings for periodontal patient education in terms of knowledge recall. 3D animations may be a powerful tool for assisting in the information process. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Reed, Phil; Doughty, Adam H.
2005-01-01
Response rates under random-interval schedules are lower when a brief (500 ms) signal accompanies reinforcement than when there is no signal. The present study examined this signaled-reinforcement effect and its relation to resistance to change. In Experiment 1, rats responded on a multiple random-interval 60-s random-interval 60-s schedule, with…
Zambito, A; Bianchini, D; Gatti, D; Rossini, M; Adami, S; Viapiana, O
2007-11-01
Chronic low back pain due to multiple vertebral fractures is difficult to manage. Electrical nerve stimulation is frequently used, but its efficacy has never been properly evaluated. In a randomized placebo-controlled clinical trial, we have shown that both interferential currents and horizontal therapy are more effective than placebo for functional improvement. Multiple vertebral fractures almost invariably result in chronic low back pain that remains difficult to manage. One hundred and fifteen women with chronic back pain due to previous multiple vertebral osteoporotic fractures (CBPMF) were randomly assigned to either interferential currents (IFT), horizontal therapy (HT), or sham HT, administered for 30 minutes daily, 5 days per week, for two weeks, together with a standard exercise program. Efficacy assessments were obtained at baseline and at weeks 2, 6, and 14 and included a functional questionnaire (Backill), the standard visual analog scale (VAS), and mean analgesic consumption. At week 2, a significant and similar improvement in both the VAS and Backill scores was observed in all three groups. The two scores continued to improve in the two active groups, with changes significantly (p < 0.001) greater than those observed in control patients at weeks 6 and 14. Analgesic consumption decreased only in the HT group. This randomized double-blind controlled study provides the first evidence that IFT and HT are significantly effective in alleviating both pain and disability in patients with CBPMF.
Paul, Lorna; Coulter, Elaine H; Miller, Linda; McFadyen, Angus; Dorfman, Joe; Mattison, Paul George G
2014-09-01
To explore the effectiveness and participant experience of web-based physiotherapy for people moderately affected by Multiple Sclerosis (MS) and to provide data to establish the sample size required for a fully powered, definitive randomized controlled study. A randomized controlled pilot study. Rehabilitation centre and participants' homes. Thirty community-dwelling adults moderately affected by MS (Expanded Disability Status Scale 5-6.5). Twelve weeks of individualised web-based physiotherapy completed twice per week, or usual care (control). Online exercise diaries were monitored; participants were telephoned weekly by the physiotherapist, and exercise programmes were altered remotely by the physiotherapist as required. The following outcomes were completed at baseline and after 12 weeks: 25 Foot Walk, Berg Balance Scale, Timed Up and Go, Multiple Sclerosis Impact Scale, Leeds MS Quality of Life Scale, MS-Related Symptom Checklist and Hospital Anxiety and Depression Scale. The intervention group also completed a website evaluation questionnaire and interviews. Participants reported that the website was easy to use, convenient, and motivating, and that they would be happy to use it in the future. There was no statistically significant difference in the primary outcome measure, the timed 25 Foot Walk, in the intervention group (P=0.170), or in other secondary outcome measures, except the Multiple Sclerosis Impact Scale (P=0.048). Effect sizes were generally small to moderate. People with MS were very positive about web-based physiotherapy. The results suggested that 80 participants, 40 in each group, would be sufficient for a fully powered, definitive randomized controlled trial. © The Author(s) 2014.
Attention to multiple locations is limited by spatial working memory capacity.
Close, Alex; Sapir, Ayelet; Burnett, Katherine; d'Avossa, Giovanni
2014-08-21
What limits the ability to attend several locations simultaneously? There are two possibilities: Either attention cannot be divided without incurring a cost, or spatial memory is limited and observers forget which locations to monitor. We compared motion discrimination when attention was directed to one or multiple locations by briefly presented central cues. The cues were matched for the amount of spatial information they provided. Several random dot kinematograms (RDKs) followed the spatial cues; one of them contained task-relevant, coherent motion. When four RDKs were presented, discrimination accuracy was identical when one and two locations were indicated by equally informative cues. However, when six RDKs were presented, discrimination accuracy was higher following one rather than multiple location cues. We examined whether memory of the cued locations was diminished under these conditions. Recall of the cued locations was tested when participants attended the cued locations and when they did not attend the cued locations. Recall was inaccurate only when the cued locations were attended. Finally, visually marking the cued locations, following one and multiple location cues, equalized discrimination performance, suggesting that participants could attend multiple locations when they did not have to remember which ones to attend. We conclude that endogenously dividing attention between multiple locations is limited by inaccurate recall of the attended locations and that attention poses separate demands on the same central processes used to remember spatial information, even when the locations attended and those held in memory are the same. © 2014 ARVO.
Janssen, Alisha; Boster, Aaron; Lee, HyunKyu; Patterson, Beth; Prakash, Ruchika Shaurya
2015-01-01
Multiple sclerosis (MS) is a neurodegenerative disease of the central nervous system that results in diffuse nerve damage and associated physical and cognitive impairments. Of the few comprehensive rehabilitation options that exist for populations with lower baseline cognitive functioning, those that have been successful at eliciting broad cognitive improvements have focused on a multimodal training approach, emphasizing complex cognitive processing that utilizes multiple domains simultaneously. The current study sought to determine the feasibility of an 8-week, hybrid-variable priority training (HVT) program, with a secondary aim to assess the success of this training paradigm at eliciting broad cognitive transfer effects. Capitalizing on the multimodal training modalities offered by the Space Fortress platform, we compared the HVT strategy-based intervention with a waitlist control group, to primarily assess skill acquisition and secondarily determine presence of cognitive transfer. Twenty-eight participants met inclusionary criteria for the study and were randomized to either training or waitlist control groups. To assess broad transfer effects, a battery of neuropsychological tests was administered pre- and post-intervention. The results indicated an overall improvement in skill acquisition and evidence for the feasibility of the intervention, but a lack of broad transfer to tasks of cognitive functioning. Participants in the training group, however, did show improvements on a measure of spatial short-term memory. The current investigation provided support for the feasibility of a multimodal training approach, using the HVT strategy, within the MS population, but lacked broad transfer to multiple domains of cognitive functioning. 
Future improvements to obtain greater cognitive transfer efficacy would include a larger sample size, a longer course of training to evoke greater game score improvement, the inclusion of only cognitively impaired individuals, and integration of subjective measures of improvement in addition to objective tests of cognitive performance.
Esposito-Smythers, Christianne; Hadley, Wendy; Curby, Timothy W; Brown, Larry K
2017-02-01
Adolescents with mental health conditions represent a high-risk group for substance use, deliberate self-harm (DSH), and risky sexual behavior. Mental health treatment does not uniformly decrease these risks. Effective prevention efforts are needed to offset the developmental trajectory from mental health problems to these behaviors. This study tested an adjunctive cognitive-behavioral family-based alcohol, DSH, and HIV prevention program (ASH-P) for adolescents in mental healthcare. A two group randomized design was used to compare ASH-P to an assessment only control (AO-C). Participants included 81 adolescents and a parent. Assessments were completed at pre-intervention as well as 1, 6, and 12-months post-enrollment, and included measures of family-based mechanisms and high-risk behaviors. ASH-P relative to AO-C was associated with greater improvements in most family process variables (perceptions of communication and parental disapproval of alcohol use and sexual behavior) as well as less DSH and greater refusal of sex to avoid a sexually transmitted infection. It also had a moderate (but non-significant) effect on odds of binge drinking. No differences were found in suicidal ideation, alcohol use, or sexual intercourse. ASH-P showed initial promise in preventing multiple high-risk behaviors. Further testing of prevention protocols that target multiple high-risk behaviors in clinical samples is warranted. Copyright © 2016 Elsevier Ltd. All rights reserved.
Randolph, John J; Randolph, Jennifer S; Wishart, Heather A
2017-02-01
Individuals with multiple sclerosis (MS) often report cognitive dysfunction, although neuropsychological evaluation findings may not correlate with subjective concerns. One factor that may explain this lack of correspondence is the controlled testing environment, which differs from busier settings where cognitive lapses are noted to occur. This study used a novel environmental manipulation to determine whether individuals with MS who report cognitive dysfunction are more vulnerable to the effects of auditory distraction during neuropsychological testing. Twenty-four individuals with clinically definite MS or clinically isolated syndrome were administered a cognitive battery during two counterbalanced auditory conditions: quiet/standard condition, and distraction condition with random office background noise. Participants were divided into high versus low cognitive complaint groups using a median split analysis of Perceived Deficits Questionnaire responses. Participants with more cognitive complaints showed a decrement in performance on the oral Symbol Digit Modalities Test during the distraction condition while those with fewer cognitive complaints demonstrated stable performance across conditions. These findings remained significant after controlling for education, premorbid intellect, fatigue, and depressed mood. These results suggest that individuals with MS with more cognitive complaints are vulnerable to environmental distraction, particularly regarding processing speed. Incorporating random environmental noise or other distraction conditions during selected measures may enhance the ecological validity of neuropsychological evaluation results in MS. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Alsalaheen, Bara; Stockdale, Kayla; Pechumer, Dana; Giessing, Alexander; He, Xuming; Broglio, Steven P
It is unclear whether individuals with a history of single or multiple clinically recovered concussions exhibit worse cognitive performance on baseline testing compared with individuals with no concussion history. To analyze the effects of concussion history on baseline neurocognitive performance using a computerized neurocognitive test. PubMed, CINAHL, and PsycINFO were searched in November 2015. The search was supplemented by a hand search of references. Studies were included if participants completed the Immediate Post-concussion Assessment and Cognitive Test (ImPACT) at baseline (ie, preseason) and if performance was stratified by previous history of single or multiple concussions. Systematic review and meta-analysis. Level 2. Sample size, demographic characteristics of participants, as well as performance of participants on verbal memory, visual memory, visual-motor processing speed, and reaction time were extracted from each study. A random-effects pooled meta-analysis revealed that, with the exception of worsened visual memory for those with 1 previous concussion (Hedges g = 0.10), no differences were observed between participants with 1 or multiple concussions compared with participants without previous concussions. With the exception of decreased visual memory based on history of 1 concussion, history of 1 or multiple concussions was not associated with worse baseline cognitive performance.
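The random-effects pooling used above can be sketched with the standard DerSimonian-Laird estimator. The review's exact software and weighting may differ, and the per-study Hedges g values and variances below are hypothetical; this only illustrates the technique.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes (e.g. Hedges g) under a random-effects model
    using the DerSimonian-Laird estimate of between-study variance tau^2."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight each study by 1 / (within-study variance + tau^2)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study Hedges g values and variances
g = [0.15, 0.05, 0.12, 0.02]
v = [0.04, 0.03, 0.05, 0.02]
pooled, ci = dersimonian_laird(g, v)
```

A pooled interval straddling zero, as with these illustrative numbers, corresponds to the "no difference" conclusions reported above.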
S-wave attenuation structure beneath the northern Izu-Bonin arc
NASA Astrophysics Data System (ADS)
Takahashi, Tsutomu; Obana, Koichiro; Kodaira, Shuichi
2016-04-01
To understand temperature structure or magma distribution in the crust and uppermost mantle, it is essential to know their attenuation structure. This study estimated the 3-D S-wave attenuation structure in the crust and uppermost mantle at the northern Izu-Bonin arc, taking into account the apparent attenuation due to multiple forward scattering. In the uppermost mantle, two areas of high seismic attenuation (high Q⁻¹) imaged beneath the volcanic front were mostly colocated with low-velocity anomalies. This coincidence suggests that these high-Q⁻¹ areas in low-velocity zones are the most likely candidates for high-temperature regions beneath volcanoes. The distribution of random inhomogeneities indicated the presence of three anomalies beneath the volcanic front: two were in high-Q⁻¹ areas but the third was in a moderate-Q⁻¹ area, indicating a low correlation between random inhomogeneities and Q⁻¹. All three anomalies of random inhomogeneities were rich in short-wavelength spectra. The most probable interpretation of such spectra is the presence of volcanic rock, which would be related to accumulated magma intrusion during episodes of volcanic activity. Therefore, the different distributions of Q⁻¹ and random inhomogeneities imply that the positions of hot regions in the uppermost mantle beneath this arc have changed temporally and may provide important constraints on the evolutionary processes of arc crust and volcanoes.
Gandolfi, Marialuisa; Geroin, Christian; Picelli, Alessandro; Munari, Daniele; Waldner, Andreas; Tamburin, Stefano; Marchioretto, Fabio; Smania, Nicola
2014-01-01
Background: Extensive research on both healthy subjects and patients with central nervous system damage has elucidated a crucial role of postural adjustment reactions and central sensory integration processes in generating and “shaping” locomotor function, respectively. Whether robotic-assisted gait devices might improve these functions in multiple sclerosis (MS) patients has not been fully investigated in the literature. Purpose: The aim of this study was to compare the effectiveness of end-effector robot-assisted gait training (RAGT) and sensory integration balance training (SIBT) in improving walking and balance performance in patients with MS. Methods: Twenty-two patients with MS (EDSS: 1.5–6.5) were randomly assigned to two groups. The RAGT group (n = 12) underwent end-effector system training. The SIBT group (n = 10) underwent specific balance exercises. Each patient received twelve 50-min treatment sessions (2 days/week). A blinded rater evaluated patients before and after treatment as well as 1 month post treatment. Primary outcomes were walking speed and the Berg Balance Scale. Secondary outcomes were the Activities-specific Balance Confidence Scale, Sensory Organization Balance Test, Stabilometric Assessment, Fatigue Severity Scale, cadence, step length, single and double support time, and Multiple Sclerosis Quality of Life-54. Results: Between-group comparisons showed no significant differences on primary and secondary outcome measures over time. Within-group comparisons showed significant improvements in both groups on the Berg Balance Scale (P = 0.001). Changes approaching significance were found in gait speed (P = 0.07) only in the RAGT group. Significant changes in balance task-related domains during standing and walking conditions were found in the SIBT group. Conclusion: Balance disorders in patients with MS may be ameliorated by RAGT and by SIBT. PMID:24904361
NASA Technical Reports Server (NTRS)
Clare, L. P.; Yan, T.-Y.
1985-01-01
The analysis of the ALOHA random access protocol for communications channels with fading is presented. The protocol is modified to send multiple contiguous copies of a message at each transmission attempt. Both pure and slotted ALOHA channels are considered. A general two state model is used for the channel error process to account for the channel fading memory. It is shown that greater throughput and smaller delay may be achieved using repetitions. The model is applied to the analysis of the delay-throughput performance in a fading mobile communications environment. Numerical results are given for NASA's Mobile Satellite Experiment.
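The benefit of sending multiple copies can be illustrated with a deliberately simplified model. The sketch below is not the paper's two-state fading analysis: it assumes a slotted ALOHA channel at a fixed total offered load G (repetitions included) and memoryless, independent per-slot erasures with probability eps, so a message succeeds if at least one of its k copies avoids both collision and erasure.

```python
import math

def success_prob(G, k, eps):
    """P(message delivered) when k copies are sent on a slotted ALOHA channel
    with total offered load G and independent per-slot erasure probability eps.

    This memoryless model is a simplification of the two-state (fading-memory)
    channel analyzed in the paper.
    """
    p_copy = (1 - eps) * math.exp(-G)   # one copy: no collision, no erasure
    return 1 - (1 - p_copy) ** k        # at least one copy survives

# At moderate load, repetitions raise the delivery probability substantially
p1 = success_prob(G=0.5, k=1, eps=0.2)
p3 = success_prob(G=0.5, k=3, eps=0.2)
```

Even this crude model reproduces the qualitative finding above: with fading-like erasures, repetitions can buy greater throughput at the cost of extra channel load.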
Microcontroller-based binary integrator for millimeter-wave radar experiments.
Eskelinen, Pekka; Ruoskanen, Jukka; Peltonen, Jouni
2010-05-01
An easily on-site-reconfigurable multiple binary integrator for millimeter-wave radar experiments has been constructed from static random access memories, an eight-bit microcontroller, and high-speed video operational amplifiers. The design uses a raw comparator path and two adjustable m-out-of-n chains in a wired-OR configuration. Standard high-speed memories allow the use of pulse widths below 100 ns. For eight pulse repetition intervals it gives a maximum improvement of 6.6 dB for stationary low-level target echoes. The doubled configuration enhances the capability against fluctuating targets. Because of the raw comparator path, single return pulses of relatively high amplitude are also processed.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
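The process studied above generalizes the maximal-entropy random walk; in the unconstrained case on a graph, the transition probabilities and stationary distribution follow in closed form from the leading eigenpair of the adjacency matrix. The sketch below implements that standard construction; the 4-node path graph is an illustrative assumption, and the paper's state- and path-constrained processes differ from this baseline.

```python
import numpy as np

def merw(A):
    """Maximal-entropy random walk on an undirected graph.

    Transitions P[i, j] = A[i, j] * psi[j] / (lam * psi[i]) are built from the
    leading eigenpair (lam, psi) of the adjacency matrix; the stationary
    distribution is psi**2 (normalised).
    """
    A = np.asarray(A, dtype=float)
    lam, vecs = np.linalg.eigh(A)
    lam, psi = lam[-1], np.abs(vecs[:, -1])   # Perron (leading) eigenpair
    P = A * psi[None, :] / (lam * psi[:, None])
    pi = psi ** 2 / np.sum(psi ** 2)
    return P, pi

# 4-node path graph: the MERW stationary distribution concentrates on the
# central nodes, unlike the ordinary (degree-proportional) random walk
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
P, pi = merw(A)
```

The eigenvector relation makes each row of P sum to one and makes psi**2 exactly stationary, which is the "competition between path multiplicity and imposed constraints" in its simplest form.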
Dynamics of two competing species in the presence of Lévy noise sources.
La Cognata, A; Valenti, D; Dubkov, A A; Spagnolo, B
2010-07-01
We consider a Lotka-Volterra system of two competing species subject to multiplicative α-stable Lévy noise. The interaction parameter between the species is a random process which obeys a stochastic differential equation with a generalized bistable potential in the presence both of a periodic driving term and an additive α-stable Lévy noise. We study the species dynamics, which is characterized by two different regimes, exclusion of one species and coexistence of both. We find quasiperiodic oscillations and stochastic resonance phenomenon in the dynamics of the competing species, analyzing the role of the Lévy noise sources.
Dynamics of two competing species in the presence of Lévy noise sources
NASA Astrophysics Data System (ADS)
La Cognata, A.; Valenti, D.; Dubkov, A. A.; Spagnolo, B.
2010-07-01
We consider a Lotka-Volterra system of two competing species subject to multiplicative α-stable Lévy noise. The interaction parameter between the species is a random process which obeys a stochastic differential equation with a generalized bistable potential in the presence both of a periodic driving term and an additive α-stable Lévy noise. We study the species dynamics, which is characterized by two different regimes, exclusion of one species and coexistence of both. We find quasiperiodic oscillations and stochastic resonance phenomenon in the dynamics of the competing species, analyzing the role of the Lévy noise sources.
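A minimal sketch of this kind of system: symmetric α-stable increments sampled with the Chambers-Mallows-Stuck method drive an Euler scheme for the two competing species. The paper's full model (a bistable stochastic interaction parameter with periodic forcing) is simplified here to a fixed interaction strength, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def alpha_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def simulate(alpha=1.5, eps=0.01, beta=0.9, mu=1.0, dt=1e-3, n=5000, rng=rng):
    """Euler scheme for two competing species with multiplicative alpha-stable
    noise. A fixed interaction parameter beta is used here; in the paper,
    beta itself follows a bistable, periodically driven stochastic process."""
    x = np.empty(n)
    y = np.empty(n)
    x[0] = y[0] = 0.5
    scale = dt ** (1 / alpha)               # stable-noise time scaling
    Lx = alpha_stable(alpha, n - 1, rng) * scale
    Ly = alpha_stable(alpha, n - 1, rng) * scale
    for t in range(n - 1):
        # Competitive Lotka-Volterra drift plus multiplicative Lévy kicks;
        # populations are clipped at zero
        x[t + 1] = max(x[t] + x[t] * (mu - x[t] - beta * y[t]) * dt
                       + eps * x[t] * Lx[t], 0.0)
        y[t + 1] = max(y[t] + y[t] * (mu - y[t] - beta * x[t]) * dt
                       + eps * y[t] * Ly[t], 0.0)
    return x, y

x, y = simulate()
```

With beta < 1 the deterministic system has a coexistence fixed point; varying beta across 1 (or letting it fluctuate, as in the paper) switches the dynamics between the coexistence and exclusion regimes.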
Memory interface simulator: A computer design aid
NASA Technical Reports Server (NTRS)
Taylor, D. S.; Williams, T.; Weatherbee, J. E.
1972-01-01
Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system's central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.
Chakraborty, Bibhas; Davidson, Karina W.
2015-01-01
An implementation study is an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about the effectiveness of the treatments and improving the quality of care for patients enrolled in the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression post acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions, as used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance as measured by the expected outcomes of the enrollees, but we also show that the adaptive randomization scheme is able to compensate for poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality. PMID:25354029
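One simple way to "favor treatment sequences having comparatively superior Q-functions" is Boltzmann (softmax) randomization over estimated Q-values. The paper's scheme differs in detail; the Q-value estimates and temperature below are hypothetical, and this is only an illustration of the general idea.

```python
import math

def adaptive_probs(q_values, temperature=1.0):
    """Boltzmann-style adaptive randomization: options with higher estimated
    Q-values receive higher enrolment probabilities, while every option keeps
    a nonzero probability so the trial continues to learn about all of them."""
    z = [math.exp(q / temperature) for q in q_values]
    s = sum(z)
    return [zi / s for zi in z]

# Hypothetical Q-value estimates for three first-stage treatment options
probs = adaptive_probs([1.2, 0.8, 0.5])
```

Lowering the temperature pushes the scheme toward the current best option; raising it pushes it toward uniform randomization, trading off care improvement against learning.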
ERIC Educational Resources Information Center
Mayfield, Linda Riggs
2010-01-01
This study examined the effects of being taught Mayfield's Four Questions multiple-choice test-taking strategy on the perceived self-efficacy and multiple-choice test scores of nursing students in a two-year associate degree program. Experimental and control groups were chosen by stratified random sampling. Subjects completed the 10-statement…
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For this, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events).
The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
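A minimal sketch of the "constrained cascade" idea is shown below. It is not the fitted model from the study: the uniform weight distribution, the `p_dry` parameter, and the boundary rule are illustrative assumptions. It does, however, exhibit the two defining properties: mass is conserved exactly at every split (microcanonical), and the first and last intervals of the event are never allowed to go dry, so the event duration is preserved.

```python
import random

def constrained_cascade(total, levels, p_dry=0.2, rng=None):
    """Microcanonical multiplicative random cascade with a boundary
    constraint: each split conserves mass exactly, and the first and
    last intervals of the event stay wet. Weight distribution and
    p_dry are illustrative choices, not the study's fitted model."""
    rng = rng or random.Random(42)
    series = [total]
    for _ in range(levels):
        nxt = []
        n = len(series)
        for i, amount in enumerate(series):
            if amount == 0.0:
                nxt.extend([0.0, 0.0])  # dry stays dry
                continue
            at_boundary = i == 0 or i == n - 1
            if not at_boundary and rng.random() < p_dry:
                w = rng.choice([0.0, 1.0])  # intermittency: all mass to one child
            else:
                w = rng.uniform(0.0, 1.0)   # both children stay wet
            nxt.extend([w * amount, (1.0 - w) * amount])
        series = nxt
    return series
```

Disaggregating, say, a 10 mm event over 4 levels yields 16 sub-intervals whose sum is exactly 10 mm, with the first and last sub-intervals wet.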
Rixen, Dieter; Steinhausen, Eva; Sauerland, Stefan; Lefering, Rolf; Maegele, Marc G; Bouillon, Bertil; Grass, Guido; Neugebauer, Edmund A M
2016-01-25
Long bone fractures, particularly of the femur, are common in multiple-trauma patients, but their optimal management has not yet been determined. Although a trend exists toward the concept of "damage control orthopedics" (DCO), the current literature is inconclusive. Thus, a need exists for a more specific controlled clinical study. The primary objective of this study was to clarify whether a risk-adapted procedure for treating femoral fractures, as opposed to an early definitive treatment strategy, leads to an improved outcome (morbidity and mortality). The study was designed as a randomized controlled multicenter study. Multiple-trauma patients with femur shaft fractures and a calculated probability of death of 20 to 60 % were randomized to either temporary fracture fixation with external fixation and defined secondary definitive treatment (DCO) or primary reamed nailing (early total care). The primary objective was to reduce the extent of organ failure as measured by the maximum sepsis-related organ failure assessment (SOFA) score. Thirty-four patients were randomized to two groups of 17 patients each. Both groups were comparable regarding sex, age, injury severity score, Glasgow Coma Scale, prothrombin time, base excess, calculated probability of death, and other physiologic variables. The maximum SOFA score was comparable (nonsignificant) between the groups. Regarding the secondary endpoints, the patients with external fixation required a significantly longer ventilation period (p = 0.049) and stayed in the intensive care unit significantly longer (p = 0.037), whereas the in-hospital length of stay was balanced between the groups. Unfortunately, the study had to be terminated prior to reaching the anticipated sample size because of unexpectedly low patient recruitment. Thus, the results of this randomized study reflect the ambivalence in the literature.
No advantage of the damage control concept could be detected in the treatment of femur fractures in multiple-trauma patients. The necessity for scientific evaluation of this clinically relevant question remains. Current Controlled Trials ISRCTN10321620 Date assigned: 9 February 2007.
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
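The classical estimator the two papers above build on can be sketched as follows. This is the self-normalized importance sampling estimator with iid draws standing in for Markov chain output; the papers' actual contribution (regeneration-based standard errors for estimators built from multiple chains) is not reproduced here, and the densities and sample size below are illustrative choices.

```python
import math
import random

def snis_estimate(f, log_target, log_proposal, sampler, n, rng):
    """Self-normalized importance sampling: estimate E_pi[f] from draws
    of a proposal pi_1. Unnormalized log densities suffice, because the
    normalizing constants cancel in the weight ratio."""
    num = den = 0.0
    for _ in range(n):
        x = sampler(rng)
        w = math.exp(log_target(x) - log_proposal(x))  # importance weight
        num += w * f(x)
        den += w
    return num / den

rng = random.Random(1)
log_target = lambda x: -0.5 * x * x      # N(0,1), up to an additive constant
log_proposal = lambda x: -0.125 * x * x  # N(0,4), up to an additive constant
est = snis_estimate(lambda x: x * x, log_target, log_proposal,
                    lambda r: r.gauss(0.0, 2.0), 50000, rng)
# est approximates E[X^2] under N(0,1), i.e. a value near 1
```

The moment conditions discussed in the abstract concern exactly the weights `w` above: when samples come from a Markov chain rather than iid draws, bounded-moment conditions on `w * f` are no longer sufficient for a CLT, which motivates the regenerative approach.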
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have that N_α(N_β(t)) is equal in distribution to the random sum ∑_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
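The identity between the composition N_α(N_β(t)) and the random sum can be checked numerically. The sketch below is a Monte Carlo illustration, not code from the paper: it simulates both sides and compares their sample means with the common theoretical mean αβt. Agreement of means is of course only a necessary consequence of equality in distribution, not a proof of it.

```python
import math
import random

def poisson(mean, rng):
    """Poisson variate via Knuth's multiplication method (fine for small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def composed(alpha, beta, t, rng):
    """Left-hand side: the outer Poisson process evaluated at N_beta(t)."""
    inner = poisson(beta * t, rng)
    return poisson(alpha * inner, rng)

def random_sum(alpha, beta, t, rng):
    """Right-hand side: sum of N_beta(t) iid Poisson(alpha) variables."""
    n = poisson(beta * t, rng)
    return sum(poisson(alpha, rng) for _ in range(n))

rng = random.Random(7)
trials = 20000
m1 = sum(composed(1.5, 2.0, 1.0, rng) for _ in range(trials)) / trials
m2 = sum(random_sum(1.5, 2.0, 1.0, rng) for _ in range(trials)) / trials
# both sample means should be near alpha * beta * t = 3.0
```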
Creating ensembles of decision trees through sampling
Kamath, Chandrika; Cantu-Paz, Erick
2005-08-30
A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data; splitting the data; and combining multiple decision trees in ensembles.
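A toy version of the sampling-based split evaluation can be written as follows. This sketch is one interpretation of the patent's description, with invented helper names: candidate thresholds for a decision stump are scored by weighted Gini impurity on a random subsample of the data rather than on the full data set, and several stumps are combined in an ensemble by majority vote. It assumes binary 0/1 labels and stumps that predict 1 to the right of their threshold.

```python
import random

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2.0 * p * (1.0 - p)

def best_split(xs, ys, sample_frac, rng):
    """Score candidate thresholds by weighted Gini impurity computed on
    a random sample of the data, and return the best threshold."""
    idx = rng.sample(range(len(xs)), max(2, int(sample_frac * len(xs))))
    sx = [xs[i] for i in idx]
    sy = [ys[i] for i in idx]
    best_t, best_score = None, float("inf")
    for t in sorted(set(sx)):
        left = [y for x, y in zip(sx, sy) if x <= t]
        right = [y for x, y in zip(sx, sy) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(sx)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

def ensemble_predict(stump_thresholds, x):
    """Combine multiple decision stumps in an ensemble by majority vote."""
    votes = sum(1 for t in stump_thresholds if x > t)
    return 1 if votes > len(stump_thresholds) / 2 else 0
```

Because each stump sees a different random sample, the ensemble members differ, which is the usual source of variance reduction in sampled tree ensembles.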
1981-06-01
…normality and several types of nonnormality. The Fisher's LSD multiple comparisons procedure in the one-way and two-way layouts is compared with a randomization procedure and with the rank transformation procedure. The rank transformation procedure appears to maintain power better than Fisher's LSD or the randomization procedures. Overall, the rank transformation procedure seems to be the best.
Multisource passive acoustic tracking: an application of random finite set data fusion
NASA Astrophysics Data System (ADS)
Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung
2010-04-01
Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful estimates. Random finite set theory provides the probabilistic model suitable for analysis and for the synthesis of an optimal estimation algorithm. The proposed algorithm has been verified using a simulation and a controlled test experiment.
Shortreed, Susan M.; Moodie, Erica E. M.
2012-01-01
Treatment of schizophrenia is notoriously difficult and typically requires personalized adaptation of treatment due to lack of efficacy, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication perphenazine to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
Schaeffer, Christine; Teter, Caroline; Finch, Emily A; Hurt, Courtney; Keeter, Mary Kate; Liss, David T; Rogers, Angela; Sheth, Avani; Ackermann, Ronald
2018-02-01
Transitional care programs have been widely used to reduce readmissions and improve the quality and safety of the handoff process between hospital and outpatient providers. Very little is known about effective transitional care interventions among patients who are uninsured or covered by Medicaid. This paper describes the design and baseline characteristics of a pragmatic randomized comparative effectiveness trial of transitional care. The Northwestern Medical Group-Transitional Care (NMG-TC) model was developed to address the needs of patients with multiple medical problems that required lifestyle changes and were amenable to office-based management. We present the design, evaluation methods, and baseline characteristics of NMG-TC trial patients. Baseline demographic characteristics indicate that our patient population is predominantly male, Medicaid-insured, and non-white. This study will evaluate two methods for implementing an effective transitional care model in a medically complex and socioeconomically diverse population. Copyright © 2017 Elsevier Inc. All rights reserved.
Li, Jianghong; Valente, Thomas W; Shin, Hee-Sung; Weeks, Margaret; Zelenev, Alexei; Moothi, Gayatri; Mosher, Heather; Heimer, Robert; Robles, Eduardo; Palmer, Greg; Obidoa, Chinekwu
2017-06-28
Intensive sociometric network data were collected from a typical respondent driven sample (RDS) of 528 people who inject drugs residing in Hartford, Connecticut in 2012-2013. This rich dataset enabled us to analyze a large number of unobserved network nodes and ties for the purpose of assessing common assumptions underlying RDS estimators. Results show that several assumptions central to RDS estimators, such as random selection, enrollment probability proportional to degree, and recruitment occurring over recruiter's network ties, were violated. These problems stem from an overly simplistic conceptualization of peer recruitment processes and dynamics. We found nearly half of participants were recruited via coupon redistribution on the street. Non-uniform patterns occurred in multiple recruitment stages related to both recruiter behavior (choosing and reaching alters, passing coupons, etc.) and recruit behavior (accepting/rejecting coupons, failing to enter study, passing coupons to others). Some factors associated with these patterns were also associated with HIV risk.
Reddy, Gaddum Duemani; Kelleher, Keith; Fink, Rudy; Saggau, Peter
2009-01-01
The dynamic ability of neuronal dendrites to shape and integrate synaptic responses is the hallmark of information processing in the brain. Effectively studying this phenomenon requires concurrent measurements at multiple sites on live neurons. Significant progress has been made by optical imaging systems which combine confocal and multiphoton microscopy with inertia-free laser scanning. However, all systems developed to date restrict fast imaging to two dimensions. This severely limits the extent to which neurons can be studied, since they represent complex three-dimensional (3D) structures. Here we present a novel imaging system that utilizes a unique arrangement of acousto-optic deflectors to steer a focused ultra-fast laser beam to arbitrary locations in 3D space without moving the objective lens. As we demonstrate, this highly versatile random-access multiphoton microscope supports functional imaging of complex 3D cellular structures such as neuronal dendrites or neural populations at acquisition rates on the order of tens of kilohertz. PMID:18432198
Le Gonidec, Yves; Gibert, Dominique
2006-11-01
We perform a multiscale analysis of the backscattering properties of a complex interface between water and a layer of randomly arranged glass beads with diameter D = 1 mm. An acoustical experiment is done to record the wavelet response of the interface over a large frequency range, from λ/D = 0.3 to λ/D = 15. The wavelet response is a physical analog of the mathematical wavelet transform, which possesses nice properties for detecting and characterizing abrupt changes in signals. The experimental wavelet response allows us to identify five frequency domains corresponding to different backscattering properties of the complex interface. This puts quantitative limits on the validity domains of the models used to represent the interface, which are flat elastic, flat viscoelastic, rough random half-space with multiple scattering, and rough elastic, from long to short wavelengths respectively. A physical explanation based on Mie scattering theory is proposed to explain the origin of the five frequency domains identified in the wavelet response.
Miller, Lucy Jane; Schoen, Sarah A; James, Katherine; Schaaf, Roseann C
2007-01-01
The purpose of this pilot study was to prepare for a randomized controlled study of the effectiveness of occupational therapy using a sensory integration approach (OT-SI) with children who have sensory processing disorders (SPD). A one-group pretest, posttest design with 30 children was completed with a subset of children with SPD, those with sensory modulation disorder. Lessons learned relate to (a) identifying a homogeneous sample with quantifiable inclusion criteria, (b) developing an intervention manual for study replication and a fidelity to treatment measure, (c) determining which outcomes are sensitive to change and relate to parents' priorities, and (d) clarifying rigorous methodologies (e.g., blinded examiners, randomization, power). A comprehensive program of research is needed, including multiple pilot studies to develop enough knowledge that high-quality effectiveness research in occupational therapy can be completed. Previous effectiveness studies in OT-SI have been single projects not based on a unified long-term program of research.
Timóteo, Sérgio; Correia, Marta; Rodríguez-Echeverría, Susana; Freitas, Helena; Heleno, Ruben
2018-01-10
Species interaction networks are traditionally explored as discrete entities with well-defined spatial borders, an oversimplification likely impairing their applicability. Using a multilayer network approach, explicitly accounting for inter-habitat connectivity, we investigate the spatial structure of seed-dispersal networks across the Gorongosa National Park, Mozambique. We show that the overall seed-dispersal network is composed of spatially explicit communities of dispersers spanning across habitats, functionally linking the landscape mosaic. Inter-habitat connectivity determines spatial structure, which cannot be accurately described with standard monolayer approaches either splitting or merging habitats. Multilayer modularity cannot be predicted by null models randomizing either interactions within each habitat or those linking habitats; however, as habitat connectivity increases, random processes become more important for overall structure. The importance of dispersers for the overall network structure is captured by multilayer versatility but not by standard metrics. Highly versatile species disperse many plant species across multiple habitats, being critical to landscape functional cohesion.
Ritvo, Paul; Myers, Ronald E; Serenity, Mardie; Gupta, Samir; Inadomi, John M; Green, Beverly B; Jerant, Anthony; Tinmouth, Jill; Paszat, Lawrence; Pirbaglou, Meysam; Rabeneck, Linda
2017-08-01
To derive a taxonomy for colorectal cancer screening (CRCS) that advances randomized controlled trials (RCTs) and screening uptake. Detailed publication review, multiple interviews with principal investigators (PIs), and collaboration with PIs as co-authors produced a CRCS intervention taxonomy. Semi-structured interview questions with PIs (Drs. Inadomi, Myers, Green, Gupta, Jerant and Ritvo) yielded details about trial conduct. Interview comparisons led to an iterative process informing serial interviews until a consensus was obtained on the final taxonomy structure. The taxonomy headings (Engagement Sponsor, Population Targeted, Alternative Screening Tests, Delivery Methods, and Support for Test Performance (EPADS)) were used to compare studies. Exemplary insights emphasized: 1) direct test delivery to patients; 2) linguistic-ethnic matching of staff to minority subjects; and 3) authorization of navigators to schedule or refer for colonoscopies and/or distribute stool blood tests during screening promotion. PIs of key RCTs (2012-2015) derived a CRCS taxonomy useful in the detailed examination of CRCS promotion and the design of future RCTs. Copyright © 2017 Elsevier Inc. All rights reserved.
Fluctuation scaling in the visual cortex at threshold
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-05-01
Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06), which departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
SIG. Signal Processing, Analysis, & Display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, J.; Lager, D.; Azevedo, S.
1992-01-22
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals, including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse, window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User-definable signal processing algorithms are also featured. SIG has many options, including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.
Signal Processing, Analysis, & Display
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, Darrell; Azevado, Stephen
1986-06-01
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals, including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse, window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User-definable signal processing algorithms are also featured. SIG has many options, including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.
Rumor Processes in Random Environment on ℕ and on Galton-Watson Trees
NASA Astrophysics Data System (ADS)
Bertacchi, Daniela; Zucca, Fabio
2013-11-01
The aim of this paper is to study rumor processes in random environment. In a rumor process a signal starts from the stations of a fixed vertex (the root) and travels on a graph from vertex to vertex. We consider two rumor processes. In the firework process each station, when reached by the signal, transmits it up to a random distance. In the reverse firework process, on the other hand, stations do not send any signal but they “listen” for it up to a random distance. The first random environment that we consider is the deterministic 1-dimensional tree ℤ with a random number of stations on each vertex; in this case the root is the origin of ℤ. We give conditions for the survival/extinction on almost every realization of the sequence of stations. Later on, we study the processes on Galton-Watson trees with a random number of stations on each vertex. We show that if the probability of survival is positive, then there is survival on almost every realization of the infinite tree such that there is at least one station at the root. We characterize the survival of the process in some cases and we give sufficient conditions for survival/extinction.
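As an illustration of the firework process described above, the following sketch simulates it on a finite segment of the half-line. The station probability and transmission-radius distribution are invented parameters for the example, not values from the paper:

```python
import random

def firework_reach(n_vertices=200, p_station=0.7, radius_pmf=(0.5, 0.3, 0.2), seed=1):
    """Simulate the firework process on the half-line 0..n_vertices-1.

    Each vertex independently hosts a station with probability p_station.
    A station reached by the signal transmits it up to a random distance
    drawn from radius_pmf (P(radius=1), P(radius=2), P(radius=3)).
    Returns the rightmost vertex the signal reaches.
    """
    rng = random.Random(seed)
    stations = [rng.random() < p_station for _ in range(n_vertices)]
    stations[0] = True                      # the root always starts the rumor
    frontier = 0                            # rightmost informed vertex
    v = 0
    while v <= frontier:
        if stations[v]:
            r = rng.choices((1, 2, 3), weights=radius_pmf)[0]
            frontier = max(frontier, min(v + r, n_vertices - 1))
        v += 1
    return frontier
```

The process dies out once a gap of empty vertices is too wide for any informed station to bridge; survival on the infinite line depends on the tail of the radius distribution, which is what the paper characterizes.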
Santos, José Pedro; Fernández, Maria Jesús; Fontecha, José Luis; Matatagui, Daniel; Sayago, Isabel; Horrillo, Maria Carmen; Gracia, Isabel
2014-12-16
A new method of depositing tin dioxide nanofibers in order to develop chemical sensors is presented. It involves an electrospinning process with in-plane electrostatic focusing over micromachined substrates. It is a fast and reproducible method. After an annealing process, which can be performed by the substrate heaters, it is observed that the fibers are intertwined, forming porous networks that are randomly distributed on the substrate. The fiber diameters range from 100 nm to 200 nm and fiber lengths reach several tens of microns. Each fiber has a polycrystalline structure with multiple nano-grains. The sensors have been tested for the detection of acetone and hydrogen peroxide (precursors of the explosive triacetone triperoxide, TATP) in air in the ppm range. High and fast responses to these gases have been obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S.; Alam, Maksudul
A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for single instruction multiple data (SIMD) style of parallel processing supported by modern processors such as graphical processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and also the fastest implementation available today, to generate scale free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, cuPPA generates a scale free network of a billion edges in less than 2 seconds.
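cuPPA itself targets GPUs; as a reference point, here is a minimal serial sketch of the underlying preferential-attachment model, using the standard repeated-targets trick. This is a generic illustration, not the authors' algorithm:

```python
import random

def preferential_attachment(n, m, seed=42):
    """Serial sketch of the preferential-attachment model that cuPPA
    parallelizes: each new vertex attaches to m existing vertices chosen
    with probability proportional to their current degree, implemented
    via a list in which vertex v appears deg(v) times.
    """
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))          # initial attachment targets
    repeated = []                     # vertex v appears deg(v) times here
    for v in range(m, n):
        for t in set(targets):        # connect the new vertex v
            edges.append((v, t))
            repeated.extend((v, t))
        # sample next targets proportionally to degree (with replacement)
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

g = preferential_attachment(1000, 3)
```

The degree-proportional sampling is what produces the heavy-tailed (scale-free) degree distribution; the GPU algorithm's challenge is doing this sampling for many vertices concurrently.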
Kawai, Kosuke; Kupka, Roland; Mugusi, Ferdinand; Aboud, Said; Okuma, James; Villamor, Eduardo; Spiegelman, Donna; Fawzi, Wafaie W
2010-02-01
We previously reported that supplementation with multivitamins (vitamin B complex, vitamin C, and vitamin E) at multiples of the Recommended Dietary Allowance (RDA) significantly decreased the risk of adverse pregnancy outcomes among HIV-infected women. The minimum dosage of multivitamins necessary for optimal benefits is unknown. We investigated the efficacy of multivitamin supplements at single compared with multiple RDAs on decreasing the risk of adverse pregnancy outcomes among HIV-infected women. We conducted a double-blind, randomized controlled trial among 1129 HIV-infected pregnant women in Tanzania. Eligible women between 12 and 27 gestational weeks were randomly assigned to receive daily oral supplements of either single or multiple RDA multivitamins from enrollment until 6 wk after delivery. Multivitamins at multiple and single doses of the RDA had similar effects on the risk of low birth weight (11.6% and 10.2%, respectively; P = 0.75). We found no difference between the 2 groups in the risk of preterm birth (19.3% and 18.4%, respectively; P = 0.73) or small-for-gestational-age (14.8% and 12.0%, respectively; P = 0.18). The mean birth weights were similar in the multiple RDA (3045 ± 549 g) and single RDA multivitamins group (3052 ± 534 g; P = 0.83). There were no significant differences between the 2 groups in the risk of fetal death (P = 0.99) or early infant death (P = 0.19). Multivitamin supplements at a single dose of the RDA may be as efficacious as multiple doses of the RDA in decreasing the risk of adverse pregnancy outcomes among HIV-infected women. This trial was registered at clinicaltrials.gov as NCT00197678.
A multiple scattering theory for EM wave propagation in a dense random medium
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.; Wong, K. W.
1985-01-01
For a dense medium of randomly distributed scatterers an integral formulation for the total coherent field has been developed. This formulation accounts for the multiple scattering of electromagnetic waves including both the two- and three-particle terms. It is shown that under the Markovian assumption the total coherent field and the effective field have the same effective wave number. As an illustration of this theory, the effective wave number and the extinction coefficient are derived in terms of the polarizability tensor and the pair distribution function for randomly distributed small spherical scatterers. It is found that the contribution of the three-particle term increases with the particle size, the volume fraction, the frequency and the permittivity of the particle. This increase is more significant with frequency and particle size than with other parameters.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and used them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793
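A minimal sketch of the FEM step alone can clarify the idea: each testing marker is regressed on the phenotype with the currently associated markers as fixed covariates. Here plain OLS with t-tests stands in for the full mixed-model machinery, and the REM/kinship step is omitted entirely; this is an illustration, not FarmCPU's implementation:

```python
import numpy as np
from scipy import stats

def fem_pvalues(genotypes, phenotype, covariate_idx):
    """Test each marker, one at a time, with the associated markers
    (columns listed in covariate_idx) included as fixed covariates.
    Returns a p-value per marker; covariate markers are skipped."""
    n, m = genotypes.shape
    pvals = np.ones(m)
    base = np.column_stack([np.ones(n), genotypes[:, covariate_idx]])
    for j in range(m):
        if j in covariate_idx:
            continue
        X = np.column_stack([base, genotypes[:, j]])
        beta, _, _, _ = np.linalg.lstsq(X, phenotype, rcond=None)
        resid = phenotype - X @ beta
        dof = n - X.shape[1]
        s2 = resid @ resid / dof
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[-1, -1])
        t = beta[-1] / se                       # t-statistic for the marker
        pvals[j] = 2.0 * stats.t.sf(abs(t), dof)
    return pvals
```

FarmCPU then feeds the most significant markers back into the REM to redefine kinship, and iterates until the p-values stabilize.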
Heckman, James; Moon, Seong Hyeok; Pinto, Rodrigo; Savelyev, Peter; Yavitz, Adam
2012-01-01
Social experiments are powerful sources of information about the effectiveness of interventions. In practice, initial randomization plans are almost always compromised. Multiple hypotheses are frequently tested. “Significant” effects are often reported with p-values that do not account for preliminary screening from a large candidate pool of possible effects. This paper develops tools for analyzing data from experiments as they are actually implemented. We apply these tools to analyze the influential HighScope Perry Preschool Program. The Perry program was a social experiment that provided preschool education and home visits to disadvantaged children during their preschool years. It was evaluated by the method of random assignment. Both treatments and controls have been followed from age 3 through age 40. Previous analyses of the Perry data assume that the planned randomization protocol was implemented. In fact, as in many social experiments, the intended randomization protocol was compromised. Accounting for compromised randomization, multiple-hypothesis testing, and small sample sizes, we find statistically significant and economically important program effects for both males and females. We also examine the representativeness of the Perry study. PMID:23255883
What Does It Mean to Do Something Randomly?
ERIC Educational Resources Information Center
Liu, Yating; Enderson, Mary C.
2016-01-01
A mysterious conflict of solutions emerged when a group of tenth- and eleventh-grade students were studying a seemingly ordinary problem on combination and probability. By investigating the mysterious "conflicts" caused by multiple randomization procedures, students will gain a deeper understanding of what it means to perform a task…
Accounting for heterogeneity in meta-analysis using a multiplicative model-an empirical study.
Mawdsley, David; Higgins, Julian P T; Sutton, Alex J; Abrams, Keith R
2017-03-01
In meta-analysis, the random-effects model is often used to account for heterogeneity. The model assumes that heterogeneity has an additive effect on the variance of effect sizes. An alternative model, which assumes multiplicative heterogeneity, has been little used in the medical statistics community, but is widely used by particle physicists. In this paper, we compare the two models using a random sample of 448 meta-analyses drawn from the Cochrane Database of Systematic Reviews. In general, differences in goodness of fit are modest. The multiplicative model tends to give results that are closer to the null, with a narrower confidence interval. Both approaches make different assumptions about the outcome of the meta-analysis. In our opinion, the selection of the more appropriate model will often be guided by whether the multiplicative model's assumption of a single effect size is plausible. Copyright © 2016 John Wiley & Sons, Ltd.
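The contrast between the two heterogeneity models can be sketched numerically. This is a generic implementation of the DerSimonian-Laird additive model and the multiplicative (variance-inflation) model, not the authors' code:

```python
import numpy as np

def meta_estimates(y, v):
    """Pool study effects y with within-study variances v under two models.

    Additive: var_i = v_i + tau2 (DerSimonian-Laird tau2).
    Multiplicative: var_i = phi * v_i, phi = Q / (k - 1).
    Returns ((mu, se) additive, (mu, se) multiplicative)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)                 # fixed-effect estimate
    Q = np.sum(w * (y - mu_fe) ** 2)                  # Cochran's Q

    # Additive heterogeneity: DL estimate of tau^2, truncated at zero.
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_add = 1.0 / (v + tau2)
    mu_add = np.sum(w_add * y) / np.sum(w_add)
    se_add = np.sqrt(1.0 / np.sum(w_add))

    # Multiplicative heterogeneity: same point estimate as fixed effect,
    # variance scaled by phi (the particle-physics convention).
    phi = Q / (k - 1)
    se_mul = np.sqrt(phi / np.sum(w))
    return (mu_add, se_add), (mu_fe, se_mul)
```

For a homogeneous toy input the multiplicative interval comes out narrower, matching the tendency the abstract reports.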
Entanglement entropy at infinite-randomness fixed points in higher dimensions.
Lin, Yu-Cheng; Iglói, Ferenc; Rieger, Heiko
2007-10-05
The entanglement entropy of the two-dimensional random transverse Ising model is studied with a numerical implementation of the strong-disorder renormalization group. The asymptotic behavior of the entropy per surface area diverges at, and only at, the quantum phase transition that is governed by an infinite-randomness fixed point. Here we identify a double-logarithmic multiplicative correction to the area law for the entanglement entropy. This contrasts with the pure area law valid at the infinite-randomness fixed point in the diluted transverse Ising model in higher dimensions.
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single-pixel regression estimator: a regression model that estimates gas concentration on an individual pixel basis.
Galati, Alexia; Avraamides, Marios N
2013-01-01
Research on spatial perspective-taking often focuses on the cognitive processes of isolated individuals as they adopt or maintain imagined perspectives. Collaborative studies of spatial perspective-taking typically examine speakers' linguistic choices, while overlooking their underlying processes and representations. We review evidence from two collaborative experiments that examine the contribution of social and representational cues to spatial perspective choices in both language and the organization of spatial memory. Across experiments, speakers organized their memory representations according to the convergence of various cues. When layouts were randomly configured and did not afford intrinsic cues, speakers encoded their partner's viewpoint in memory, if available, but did not use it as an organizing direction. On the other hand, when the layout afforded an intrinsic structure, speakers organized their spatial memories according to the person-centered perspective reinforced by the layout's structure. Similarly, in descriptions, speakers considered multiple cues whether available a priori or at the interaction. They used partner-centered expressions more frequently (e.g., "to your right") when the partner's viewpoint was misaligned by a small offset or coincided with the layout's structure. Conversely, they used egocentric expressions more frequently when their own viewpoint coincided with the intrinsic structure or when the partner was misaligned by a computationally difficult, oblique offset. Based on these findings we advocate for a framework for flexible perspective-taking: people weigh multiple cues (including social ones) to make attributions about the relative difficulty of perspective-taking for each partner, and adapt behavior to minimize their collective effort. This framework is not specialized for spatial reasoning but instead emerges from the same principles and memory-dependent processes that govern perspective-taking in non-spatial tasks.
PMID:24133432
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
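A discrete log-normal cascade of the kind being estimated can be simulated in a few lines. The step count and intermittency parameter below are arbitrary illustration values, not estimates from the paper:

```python
import numpy as np

def lognormal_cascade(n_steps=10, lam2=0.2, seed=0):
    """Simulate one realization of a discrete multiplicative log-normal
    cascade: starting from unit mass, each step splits every cell in two
    and multiplies each half by an independent log-normal weight
    exp(N(-lam2/2, lam2)), parameterized so the mean weight is 1.
    lam2 plays the role of the intermittency parameter."""
    rng = np.random.default_rng(seed)
    mass = np.ones(1)
    for _ in range(n_steps):
        w = np.exp(rng.normal(-lam2 / 2.0, np.sqrt(lam2), size=2 * len(mass)))
        mass = np.repeat(mass, 2) * w
    return mass

series = lognormal_cascade()
```

Because each cell is a product of independent weights along its branch, the resulting series shows the intermittent bursts that make moment-based estimation of lam2 delicate.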
Spatial Factors in the Integration of Speed Information
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)
1995-01-01
We reported that, for a 2IFC task with multiple Gabor patches in each interval, thresholds for speed discrimination decreased with the number of patches, while simply increasing the area of a single patch produced no such effect. This result could be explained by multiple patches reducing spatial uncertainty. However, the fact that thresholds decrease with number even when the patches are in fixed positions argues against this explanation. We therefore performed additional experiments to explore the lack of an area effect. Three observers did a 2IFC speed discrimination task with 6 Gabor patches in each interval, and were asked to pick the interval in which the gratings moved faster. The 50% contrast patches were placed on a circle at 4 deg. eccentricity, either equally spaced and maximally separated (hexagonal array), or closely spaced, in consecutive positions (string of pearls). For the string-of-pearls condition, the grating phases were either random, or consistent with a full-field grating viewed through multiple Gaussian windows. When grating phases were random, the thresholds for the hexagonal and string-of-pearls layouts were indistinguishable. For the string-of-pearls layout, thresholds in the consistent-phase condition were higher by 15 ± 6% than in the random-phase condition. (Thresholds increased by 57 ± 7% in going from 6 patches to a single patch of equivalent area.) For random-phase patches, the lower thresholds for 6 patches do not depend on a specific spacing or spatial layout. Multiple, closely spaced, consistent-phase patches that can be interpreted as a single grating result in thresholds closer to that produced by a single patch. Together, our results suggest that object segmentation may play a role in the integration of speed information.
Super-resolution photoacoustic microscopy using joint sparsity
NASA Astrophysics Data System (ADS)
Burgholzer, P.; Haltmeier, M.; Berer, T.; Leiss-Holzinger, E.; Murray, T. W.
2017-07-01
We present an imaging method that uses the random optical speckle patterns that naturally emerge as light propagates through strongly scattering media as a structured illumination source for photoacoustic imaging. Our approach, termed blind structured illumination photoacoustic microscopy (BSIPAM), was inspired by recent work in fluorescence microscopy where super-resolution imaging was demonstrated using multiple unknown speckle illumination patterns. We extend this concept to the multiple scattering domain using photoacoustics (PA), with the speckle pattern serving to generate ultrasound. The optical speckle pattern that emerges as light propagates through diffuse media provides structured illumination to an object placed behind a scattering wall. The photoacoustic signal produced by such illumination is detected using a focused ultrasound transducer. We demonstrate through both simulation and experiment, that by acquiring multiple photoacoustic images, each produced by a different random and unknown speckle pattern, an image of an absorbing object can be reconstructed with a spatial resolution far exceeding that of the ultrasound transducer. We experimentally and numerically demonstrate a gain in resolution of more than a factor of two by using multiple speckle illuminations. The variations in the photoacoustic signals generated with random speckle patterns are utilized in BSIPAM using a novel reconstruction algorithm. Exploiting joint sparsity, this algorithm is capable of reconstructing the absorbing structure from measured PA signals with a resolution close to the speckle size. Another way to generate random excitation for photoacoustic imaging is with small absorbing particles, including contrast agents, which flow through small vessels. For such a set-up, the joint sparsity is generated by the fact that all the particles move in the same vessels. Structured illumination in that case is not necessary.
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VEs were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple modeling techniques (multiple regression and activity count cut-point analyses) and one complex modeling technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VEs were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
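The reported cut-points translate directly into a classifier; this tiny sketch simply encodes the thresholds quoted above:

```python
def ve_category(counts_per_min):
    """Classify ventilation (VE) from ActiGraph counts per minute using the
    paper's cut-points: <1381 cpm -> low VE (<19.3 l/min),
    1381-3660 cpm -> medium, >3660 cpm -> high (>35.4 l/min)."""
    if counts_per_min < 1381:
        return "low"
    if counts_per_min <= 3660:
        return "medium"
    return "high"
```

This cut-point rule is the "simple" technique of the study; the random forest adds only marginal accuracy over it and the regression.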
van Kessel, Kirsten; Wouldes, Trecia; Moss-Morris, Rona
2016-05-01
To pilot and compare the efficacy of an internet-based cognitive behavioural therapy self-management programme with (MSInvigor8-Plus) and without (MSInvigor8-Only) the use of email support in reducing fatigue severity and impact (primary outcomes), and depressed and anxious mood (secondary outcomes). Randomized controlled trial using an independent randomization system built into the website and intention-to-treat analysis. Participants were recruited through the local Multiple Sclerosis Society and hospital neurological services in New Zealand. A total of 39 people (aged 31-63 years), experiencing multiple sclerosis fatigue, able to walk with and without walking aids, were randomized to MSInvigor8-Only (n = 20) or to MSInvigor8-Plus (n = 19). MSInvigor8 is an eight-session programme based on cognitive behaviour therapy principles including psycho-education, self-monitoring, and changing unhelpful activity and thought patterns. Outcome measures included fatigue severity (Chalder Fatigue Scale) and impact (Modified Fatigue Impact Scale), and anxiety and depression (Hospital Anxiety and Depression Scale). Assessments were performed at baseline and at 10 weeks. The MSInvigor8-Plus condition resulted in significantly greater reductions in fatigue severity (F [1,36] = 9.09, p < 0.01) and impact (F [1,36] = 6.03, p < 0.02) compared with the MSInvigor8-Only condition. Large between-group effect sizes for fatigue severity (d = 0.99) and fatigue impact (d = 0.81) were obtained. No significant differences were found between the groups on changes in anxiety and depression. MSInvigor8 delivered with email-based support is a potentially promising, acceptable, and cost-effective approach to treating fatigue in people with multiple sclerosis in New Zealand. © The Author(s) 2015.
Sahakyan, Aleksandr B; Balasubramanian, Shankar
2016-03-12
The role of random mutations and genetic errors in defining the etiology of cancer and other multigenic diseases has recently received much attention. With the view that complex genes should be particularly vulnerable to such events, here we explore the link between the simple properties of the human genes, such as transcript length, number of splice variants, exon/intron composition, and their involvement in the pathways linked to cancer and other multigenic diseases. We reveal a substantial enrichment of cancer pathways with long genes and genes that have multiple splice variants. Although the latter two factors are interdependent, we show that the overall gene length and splicing complexity increase in cancer pathways in a partially decoupled manner. Our systematic survey for the pathways enriched with top lengthy genes and with genes that have multiple splice variants reveal, along with cancer pathways, the pathways involved in various neuronal processes, cardiomyopathies and type II diabetes. We outline a correlation between the gene length and the number of somatic mutations. Our work is a step forward in the assessment of the role of simple gene characteristics in cancer and a wider range of multigenic diseases. We demonstrate a significant accumulation of long genes and genes with multiple splice variants in pathways of multigenic diseases that have already been associated with de novo mutations. Unlike the cancer pathways, we note that the pathways of neuronal processes, cardiomyopathies and type II diabetes contain genes long enough for topoisomerase-dependent gene expression to also be a potential contributing factor in the emergence of pathologies, should topoisomerases become impaired.
Laurora, Irene; Wang, Yuan
2016-10-01
Extended-release (ER) naproxen sodium provides pain relief for up to 24 hours with a single dose (660 mg/day). Its pharmacokinetic profile after single and multiple dosing was compared to immediate-release (IR) naproxen sodium in two randomized, open-label, crossover studies, under fasting and fed conditions. Eligible healthy subjects were randomized to ER naproxen sodium 660-mg tablet once daily or IR naproxen sodium 220-mg tablet twice daily (440 mg initially, followed by 220 mg 12 hours later). Primary variables: pharmacokinetic parameters after single-day administration (day 1) and at steady state after multiple-day administration (day 6). Total exposure was comparable for both treatments under fasting and fed conditions. After fasting: peak naproxen concentrations were slightly lower with ER naproxen sodium than with IR naproxen sodium but were reached at a similar time. Fed conditions: mean peak concentrations were comparable but reached after a longer time with ER vs. IR naproxen sodium. ER naproxen sodium was well tolerated, with a similar safety profile to IR naproxen sodium. The total exposure of ER naproxen sodium (660 mg) is comparable to that of IR naproxen sodium (220 mg) when administered at the maximum over-the-counter (OTC) daily dose of 660 mg on a single day and over multiple days. The rate of absorption is delayed under fed conditions.
On Edge Exchangeable Random Graphs
NASA Astrophysics Data System (ADS)
Janson, Svante
2017-06-01
We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).
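A toy special case can illustrate the merging of multiple edges: draw edges i.i.d. from a fixed vertex-weight distribution and deduplicate. The i.i.d. sampler and the geometric weights are assumptions for illustration; Crane and Dempsey's model is more general:

```python
import random

def edge_exchangeable_graph(n_edges, weights, seed=7):
    """Toy edge-exchangeable multigraph: each edge independently picks its
    two endpoints with probability proportional to `weights`.  Merging
    repeated edges (and dropping loops) yields the random simple graph
    whose asymptotics are studied."""
    rng = random.Random(seed)
    verts = list(range(len(weights)))
    multi = [tuple(sorted(rng.choices(verts, weights=weights, k=2)))
             for _ in range(n_edges)]
    simple = set(e for e in multi if e[0] != e[1])   # merge + drop loops
    return multi, simple

# Heavy geometric weights concentrate edges on few vertices,
# so many multi-edges collapse under merging.
multi, simple = edge_exchangeable_graph(500, [2 ** -i for i in range(20)])
```

How fast `simple` grows with the number of sampled edges, relative to `multi`, is exactly what distinguishes the dense, sparse, and extremely sparse regimes.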
NASA Astrophysics Data System (ADS)
Rana, Dipankar; Gangopadhyay, Gautam
2003-01-01
We have analyzed the energy transfer process in a dendrimer supermolecule using a classical random walk model and an Eyring model of membrane permeation. Here the energy transfer is considered as a multiple barrier crossing process by thermal hopping on the backbone of a Cayley tree. It is shown that the mean residence time and mean first passage time, which involve explicit local escape rates, depend upon the temperature, size of the molecule, core branching, and the nature of the potential energy landscape along the Cayley tree architecture. The effect of branching tries to create a uniform distribution of mean residence time over the generations, and the distribution depends upon the interplay of funneling and local rates of transitions. The calculation of flux at the steady state from the Eyring model also gives a useful idea about the rate when the dendrimeric system is considered as an open system where the core is absorbing the transported energy like a photosynthetic reaction center and a continuous supply of external energy is maintained at the peripheral nodes. The effects of the above parameters of the system are shown to depend on the steady-state flux, which has a qualitative resemblance with the result of the mean first passage time approach.
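A rough Monte Carlo sketch of the mean first-passage time to the core, with the walk projected onto the generation index and no energy funnel (both simplifications relative to the paper's model):

```python
import random

def mfpt_to_core(branching=3, generations=5, n_walks=2000, seed=3):
    """Mean first-passage time from the periphery to the core of a Cayley
    tree, projected onto the generation index.  A walker at an interior
    generation g hops toward the core with probability 1/(branching+1)
    (one parent vs. `branching` children); walkers at the outermost
    generation can only hop inward.  Unbiased hopping, no funnel."""
    rng = random.Random(seed)
    p_in = 1.0 / (branching + 1)
    total = 0
    for _ in range(n_walks):
        g, steps = generations, 0
        while g > 0:
            steps += 1
            if g == generations or rng.random() < p_in:
                g -= 1
            else:
                g += 1
        total += steps
    return total / n_walks
```

The entropic outward drift from branching makes the periphery-to-core passage much slower than on a line, which is why a funneled potential landscape matters for efficient transport toward the core.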
A mathematical study of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
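The construction can be sketched directly. The particular distributions chosen for the three component processes below are illustrative assumptions; the paper's model allows more general components:

```python
import numpy as np

def turbulence_sample(n, sigma_g=1.0, sigma_a=0.5, mean_level=0.0, seed=0):
    """Sketch of the studied process: the product of a local Gaussian
    process and an independent random-amplitude process, plus an
    independent mean-value process (here a constant)."""
    rng = np.random.default_rng(seed)
    gauss = rng.normal(0.0, sigma_g, n)              # local Gaussian process
    amplitude = np.abs(rng.normal(1.0, sigma_a, n))  # random amplitude process
    mean_proc = np.full(n, mean_level)               # mean-value process
    return gauss * amplitude + mean_proc

x = turbulence_sample(10000)
```

Modulating the Gaussian by a random amplitude produces heavier-than-Gaussian tails (excess kurtosis), the non-Gaussian feature such models aim to capture in atmospheric turbulence.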
Comparison of several maneuvering target tracking models
NASA Astrophysics Data System (ADS)
McIntyre, Gregory A.; Hintz, Kenneth J.
1998-07-01
The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories, both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or an autocorrelated noise. The multiple-model approach is generally used with the white noise model while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate or a variable dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue with the tracking of maneuvering targets is whether to perform the Kalman filter in polar or Cartesian coordinates. This paper will examine and compare several exponentially correlated acceleration approaches in both polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both polar and Cartesian coordinates, the Singer model in polar coordinates converted to Cartesian coordinates, Helferty's third-order rational approximation of the Singer model, and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates and compares the computational complexity of the models.
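For reference, the Singer model's zero-mean, exponentially correlated acceleration gives a standard one-axis state transition matrix. This sketch states that textbook result, not any of the paper's specific comparisons:

```python
import numpy as np

def singer_F(T, alpha):
    """One-axis state transition matrix for the Singer model, in which
    target acceleration is a zero-mean, exponentially correlated random
    process with correlation parameter alpha (1 / maneuver time constant).

    State vector: [position, velocity, acceleration]."""
    e = np.exp(-alpha * T)
    return np.array([
        [1.0, T, (alpha * T - 1.0 + e) / alpha**2],
        [0.0, 1.0, (1.0 - e) / alpha],
        [0.0, 0.0, e],
    ])

F = singer_F(T=0.1, alpha=0.05)
```

As alpha grows the acceleration decorrelates toward white noise, and as alpha approaches zero the matrix tends to the constant-acceleration model, which is how the Singer model interpolates between the two statistical assumptions discussed above.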
Identification of Novel Growth Regulators in Plant Populations Expressing Random Peptides
Bao, Zhilong; Clancy, Maureen A.
2017-01-01
The use of chemical genomics approaches allows the identification of small molecules that integrate into biological systems, thereby changing discrete processes that influence growth, development, or metabolism. Libraries of chemicals are applied to living systems, and changes in phenotype are observed, potentially leading to the identification of new growth regulators. This work describes an approach that is the nexus of chemical genomics and synthetic biology. Here, each plant in an extensive population synthesizes a unique small peptide arising from a transgene composed of a randomized nucleic acid sequence core flanked by translational start, stop, and cysteine-encoding (for disulfide cyclization) sequences. Ten and 16 amino acid sequences, bearing a core of six and 12 random amino acids, have been synthesized in Arabidopsis (Arabidopsis thaliana) plants. Populations were screened for phenotypes from the seedling stage through senescence. Dozens of phenotypes were observed in over 2,000 plants analyzed. Ten conspicuous phenotypes were verified through separate transformation and analysis of multiple independent lines. The results indicate that these populations contain sequences that often influence discrete aspects of plant biology. Novel peptides that affect photosynthesis, flowering, and red light response are described. The challenge now is to identify the mechanistic integrations of these peptides into biochemical processes. These populations serve as a new tool to identify small molecules that modulate discrete plant functions that could be produced later in transgenic plants or potentially applied exogenously to impart their effects. These findings could usher in a new generation of agricultural growth regulators, herbicides, or defense compounds. PMID:28807931
Reporting of analyses from randomized controlled trials with multiple arms: a systematic review.
Baron, Gabriel; Perrodeau, Elodie; Boutron, Isabelle; Ravaud, Philippe
2013-03-27
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms. The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form. In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned. Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
Efficient quantum computing using coherent photon conversion.
Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A
2011-10-12
Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. 
Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting systems with extremely strong intrinsic nonlinearities. Furthermore, exploiting higher-order nonlinearities with multiple pump fields yields a mechanism for multiparty mediation of the complex, coherent dynamics.
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers represent a fundamental ingredient for secure communications and numerical simulation as well as to games and in general to Information Science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. The optical propagation in strong atmospheric turbulence is here taken to this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
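The authors' extractor operates directly on the beam images at the receiver without post-processing, and its details are not reproduced here. As a generic illustration of unbiasing a physical bit stream, not their algorithm, the classic von Neumann extractor looks like this:

```python
def von_neumann_extract(bits):
    """Classic von Neumann unbiasing: scan non-overlapping bit pairs,
    emit 0 for the pair (0, 1), emit 1 for (1, 0), and discard (0, 0)
    and (1, 1).  The output is unbiased whenever the input bits are
    i.i.d., even if each bit is individually biased."""
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)  # the first bit of a discordant pair
    return out

print(von_neumann_extract([0, 1, 1, 0, 0, 0, 1, 1]))  # -> [0, 1]
```

The price of unbiasing is throughput: at least half the raw bits are discarded, which is one motivation for extractors tailored to the physical source, as in the paper.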
Nanotip Carpets as Antireflection Surfaces
NASA Technical Reports Server (NTRS)
Bae, Youngsam; Mobasser, Sohrab; Manohara, Harish; Lee, Choonsup
2008-01-01
Carpet-like random arrays of metal-coated silicon nanotips have been shown to be effective as antireflection surfaces. Now undergoing development for incorporation into Sun sensors that would provide guidance for robotic exploratory vehicles on Mars, nanotip carpets of this type could also have many uses on Earth as antireflection surfaces in instruments that handle or detect ultraviolet, visible, or infrared light. In the original Sun-sensor application, what is required is an array of 50-micron-diameter apertures on what is otherwise an opaque, minimally reflective surface, as needed to implement a miniature multiple-pinhole camera. The process for fabrication of an antireflection nanotip carpet for this application (see Figure 1) includes, and goes somewhat beyond, the process described in A New Process for Fabricating Random Silicon Nanotips (NPO-40123), NASA Tech Briefs, Vol. 28, No. 1 (November 2004), page 62. In the first step, which is not part of the previously reported process, photolithography is performed to deposit etch masks to define the 50-micron apertures on a silicon substrate. In the second step, which is part of the previously reported process, the non-masked silicon area between the apertures is subjected to reactive ion etching (RIE) under a special combination of conditions that results in the growth of fluorine-based compounds in randomly distributed formations, known in the art as "polymer RIE grass," that have dimensions of the order of microns. The polymer RIE grass formations serve as microscopic etch masks during the next step, in which deep reactive ion etching (DRIE) is performed. What remains after DRIE is the carpet of nanotips, which are high-aspect-ratio peaks, the tips of which have radii of the order of nanometers. Next, the nanotip array is evaporatively coated with Cr/Au to enhance the absorption of light (more specifically, infrared light in the Sun-sensor application).
The photoresist etch masks protecting the apertures are then removed by dipping the substrate into acetone. Finally, for the Sun-sensor application, the back surface of the substrate is coated with a 57-nm-thick layer of Cr for attenuation of sunlight.
Stochastic description of geometric phase for polarized waves in random media
NASA Astrophysics Data System (ADS)
Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent
2013-01-01
We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.
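The key modeling ingredient above, a compound Poisson process on the rotation group, can be sketched as follows: a Poisson number of scattering events, each composing a random small-angle rotation. The axis and angle distributions here are illustrative assumptions, not the paper's scattering kernel.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_axis(rng):
    """Uniformly distributed unit vector in 3-D."""
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

def rotation_matrix(axis, angle):
    """Rodrigues' rotation formula for a unit axis."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def compound_poisson_so3(rate, t, angle_scale, rng):
    """Compound Poisson process on SO(3): draw a Poisson number of
    scattering events over time t, each contributing a random
    small-angle rotation, and compose them."""
    n_events = rng.poisson(rate * t)
    R = np.eye(3)
    for _ in range(n_events):
        R = rotation_matrix(random_axis(rng), rng.exponential(angle_scale)) @ R
    return R

R = compound_poisson_so3(rate=5.0, t=1.0, angle_scale=0.1, rng=rng)
print(np.round(R @ R.T, 6))  # orthogonality check
```

In the forward-scattering regime the per-event angles are small, so many events are needed before the accumulated rotation (and hence the polarization) decorrelates from the initial state.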
Loss of MACF1 Abolishes Ciliogenesis and Disrupts Apicobasal Polarity Establishment in the Retina.
May-Simera, Helen L; Gumerson, Jessica D; Gao, Chun; Campos, Maria; Cologna, Stephanie M; Beyer, Tina; Boldt, Karsten; Kaya, Koray D; Patel, Nisha; Kretschmer, Friedrich; Kelley, Matthew W; Petralia, Ronald S; Davey, Megan G; Li, Tiansen
2016-10-25
Microtubule actin crosslinking factor 1 (MACF1) plays a role in the coordination of microtubules and actin in multiple cellular processes. Here, we show that MACF1 is also critical for ciliogenesis in multiple cell types. Ablation of Macf1 in the developing retina abolishes ciliogenesis, and basal bodies fail to dock to ciliary vesicles or migrate apically. Photoreceptor polarity is randomized, while inner retinal cells laminate correctly, suggesting that photoreceptor maturation is guided by polarity cues provided by cilia. Deletion of MACF1 in adult photoreceptors causes reversal of basal body docking and loss of outer segments, reflecting a continuous requirement for MACF1 function. MACF1 also interacts with the ciliary proteins MKKS and TALPID3. We propose that a disruption of trafficking across microtubles to actin filaments underlies the ciliogenesis defect in cells lacking MACF1 and that MKKS and TALPID3 are involved in the coordination of microtubule and actin interactions. Published by Elsevier Inc.
Bernstein, R; Jenkins, T; Dawson, B; Wagner, J; Dewald, G; Koo, G C; Wachtel, S S
1980-01-01
A mentally retarded female child with multiple congenital abnormalities had an abnormal X chromosome and a Y chromosome; the karyotype was interpreted as 46,dup(X)(p21 leads to pter)Y. Prenatal chromosome studies in a later pregnancy indicated the same chromosomal abnormality in the fetus. The fetus and proband had normal female genitalia and ovarian tissue. H-Y antigen was virtually absent in both sibs, a finding consistent with the view that testis-determining genes of the Y chromosome may be suppressed by regulatory elements of the X. The abnormal X chromosome was present in the mother, the maternal grandmother, and a female sib: all were phenotypically normal and showed the karyotype 46,Xdup(X)(p21 leads to pter) with non-random inactivation of the abnormal X. Anomalous segregation of the Xga allele suggests that the Xg locus was involved in the inactivation process or that crossing-over at meiosis occurred. PMID:7193738
Quantitative analysis of multiple sclerosis: a feasibility study
NASA Astrophysics Data System (ADS)
Li, Lihong; Li, Xiang; Wei, Xinzhou; Sturm, Deborah; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Multiple Sclerosis (MS) is an inflammatory and demyelinating disorder of the central nervous system with a presumed immune-mediated etiology. For treatment of MS, the measurements of white matter (WM), gray matter (GM), and cerebral spinal fluid (CSF) are often used in conjunction with clinical evaluation to provide a more objective measure of MS burden. In this paper, we apply a new unifying automatic mixture-based algorithm for segmentation of brain tissues to quantitatively analyze MS. The method takes into account the following effects that commonly appear in MR imaging: 1) The MR data is modeled as a stochastic process with an inherent inhomogeneity effect of smoothly varying intensity; 2) A new partial volume (PV) model is built in establishing the maximum a posterior (MAP) segmentation scheme; 3) Noise artifacts are minimized by a priori Markov random field (MRF) penalty indicating neighborhood correlation from tissue mixture. The volumes of brain tissues (WM, GM) and CSF are extracted from the mixture-based segmentation. Experimental results of feasibility studies on quantitative analysis of MS are presented.
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
To address the degeneration of chaotic dynamics under limited computer precision and the small key space of the Cat map, this paper presents a chaotic map based on topological conjugacy whose chaotic characteristics are proved by Devaney's definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by different chaotic maps. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES, and Logistic encryption methods, we conclude that the image encryption method overcomes the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
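The permutation-substitution structure described above can be sketched with a toy cipher. This uses the plain logistic map as a stand-in for the paper's topologically conjugate map and block Cat map (an assumption for illustration only; it is not secure and not the paper's scheme).

```python
import numpy as np

def keystream(x0, n, r=3.99):
    """Byte keystream from logistic-map iterates x -> r*x*(1-x);
    a stand-in for the paper's chaotic maps (illustrative only)."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return (out * 256).astype(np.uint8)

def encrypt(img, key_perm, key_sub):
    flat = img.ravel()
    perm = np.argsort(keystream(key_perm, flat.size))   # permutation stage
    return (flat[perm] ^ keystream(key_sub, flat.size)).reshape(img.shape)

def decrypt(cipher, key_perm, key_sub):
    flat = cipher.ravel()
    perm = np.argsort(keystream(key_perm, flat.size))   # same key -> same permutation
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream(key_sub, flat.size)  # undo substitution, then permutation
    return plain.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = encrypt(img, key_perm=0.123, key_sub=0.456)
```

Each stage is keyed by its own chaotic trajectory, mirroring the paper's point that different keys control different maps.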
Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond
Nam, Sung Sik; Alouini, Mohamed-Slim; Choi, Seyeong
2016-01-01
In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes the fast jump-in relaying and the sequential decoding with an application of random codeset to encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to the higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need of directly exchanging control messages between multiple UE relays and the end user, which is an important prerequisite for the practical UE relay deployment. PMID:27898712
Range data description based on multiple characteristics
NASA Technical Reports Server (NTRS)
Al-Hujazi, Ezzet; Sood, Arun
1988-01-01
An algorithm for describing range images based on mean curvature (H) and Gaussian curvature (K) is presented. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The curvature parameters are derived from the fundamental theorems of differential geometry and provide visible invariant pixel labels that can be used to characterize the scene. The signs of H and K can be used to classify each pixel into one of eight possible surface types. Due to the sensitivity of these parameters to noise, the resulting HK-sign map does not directly identify surfaces in the range images and must be further processed. A region-growing algorithm based on modeling the scene points with a Markov random field (MRF) of variable neighborhood size and edge models is suggested. This approach allows the integration of information from multiple characteristics in an efficient way. The performance of the proposed algorithm on a number of synthetic and real range images is discussed.
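The eight-way HK-sign classification mentioned above can be written down directly. The class names follow the conventional surface-type taxonomy (e.g. Besl and Jain); the tolerance `eps` for treating a curvature as zero is our assumption.

```python
def surface_type(H, K, eps=1e-9):
    """Classify a pixel into one of the eight surface types from the
    signs of mean curvature H and Gaussian curvature K.  (K > 0 with
    H = 0 cannot occur, since K <= H**2 for a real surface.)"""
    sh = 0 if abs(H) < eps else (1 if H > 0 else -1)
    sk = 0 if abs(K) < eps else (1 if K > 0 else -1)
    table = {
        (-1,  1): "peak",
        (-1,  0): "ridge",
        (-1, -1): "saddle ridge",
        ( 0,  0): "flat",
        ( 0, -1): "minimal surface",
        ( 1,  1): "pit",
        ( 1,  0): "valley",
        ( 1, -1): "saddle valley",
    }
    return table.get((sh, sk), "impossible (K > 0 with H = 0)")

print(surface_type(-0.5, 0.2))  # -> peak
```

In practice the noise sensitivity noted in the abstract means H and K are estimated from smoothed derivatives, and the per-pixel labels are then cleaned up by the MRF region-growing stage.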
Case Study: An Ethics Case Study of HIV Prevention Research on Facebook: The Just/Us Study
Breslin, Lindsey T.; Wright, Erin E.; Black, Sandra R.; Levine, Deborah; Santelli, John S.
2011-01-01
Objective To consider issues related to research with youth on social networking sites online. Methods Description of the data collection process from 1,588 participants in a randomized controlled trial testing the efficacy of HIV prevention education delivered on Facebook. Using respondent-driven sampling, staff-recruited participants are encouraged to recruit up to three friends to enroll in the study. Results Researchers should (a) consider whether an online social networking site is an appropriate place to implement a research study; (b) offer opportunities to review informed consent documents at multiple times and in multiple locations throughout the study; and (c) collect data outside the social networking site and store it behind secure firewalls to ensure it will not be accessible to any person on the social networking site. Conclusions Online social networks are growing in popularity. Conducting research on social media sites requires deliberate attention to consent, confidentiality, and security. PMID:21292724
Structural disconnection is responsible for increased functional connectivity in multiple sclerosis.
Patel, Kevin R; Tobyne, Sean; Porter, Daria; Bireley, John Daniel; Smith, Victoria; Klawiter, Eric
2018-06-01
Increased synchrony within neuroanatomical networks is often observed in neurophysiologic studies of human brain disease. Most often, this phenomenon is ascribed to a compensatory process in the face of injury, though evidence supporting such accounts is limited. Given the known dependence of resting-state functional connectivity (rsFC) on underlying structural connectivity (SC), we examine an alternative hypothesis: that topographical changes in SC, specifically particular patterns of disconnection, contribute to increased network rsFC. We obtain measures of rsFC using fMRI and SC using probabilistic tractography in 50 healthy and 28 multiple sclerosis subjects. Using a computational model of neuronal dynamics, we simulate BOLD using healthy subject SC to couple regions. We find that altering the model by introducing structural disconnection patterns observed in those multiple sclerosis subjects with high network rsFC generates simulations with high rsFC as well, suggesting that disconnection itself plays a role in producing high network functional connectivity. We then examine SC data in individuals. In multiple sclerosis subjects with high network rsFC, we find a preferential disconnection between the relevant network and wider system. We examine the significance of such network isolation by introducing random disconnection into the model. As observed empirically, simulated network rsFC increases with removal of connections bridging a community with the remainder of the brain. We thus show that structural disconnection known to occur in multiple sclerosis contributes to network rsFC changes in multiple sclerosis and further that community isolation is responsible for elevated network functional connectivity.
Trefzer, Axel; Jungmann, Volker; Molnár, István; Botejue, Ajit; Buckel, Dagmar; Frey, Gerhard; Hill, D. Steven; Jörg, Mario; Ligon, James M.; Mason, Dylan; Moore, David; Pachlatko, J. Paul; Richardson, Toby H.; Spangenberg, Petra; Wall, Mark A.; Zirkle, Ross; Stege, Justin T.
2007-01-01
Discovery of the CYP107Z subfamily of cytochrome P450 oxidases (CYPs) led to an alternative biocatalytic synthesis of 4″-oxo-avermectin, a key intermediate for the commercial production of the semisynthetic insecticide emamectin. However, under industrial process conditions, these wild-type CYPs showed lower yields due to side product formation. Molecular evolution employing GeneReassembly was used to improve the regiospecificity of these enzymes by a combination of random mutagenesis, protein structure-guided site-directed mutagenesis, and recombination of multiple natural and synthetic CYP107Z gene fragments. To assess the specificity of CYP mutants, a miniaturized, whole-cell biocatalytic reaction system that allowed high-throughput screening of large numbers of variants was developed. In an iterative process consisting of four successive rounds of GeneReassembly evolution, enzyme variants with significantly improved specificity for the production of 4″-oxo-avermectin were identified; these variants could be employed for a more economical industrial biocatalytic process to manufacture emamectin. PMID:17483257
Interarrival times of message propagation on directed networks.
Mihaljev, Tamara; de Arcangelis, Lucilla; Herrmann, Hans J
2011-08-01
One of the challenges in fighting cybercrime is to understand the dynamics of message propagation on botnets, networks of infected computers used to send viruses, unsolicited commercial emails (SPAM) or denial of service attacks. We map this problem to the propagation of multiple random walkers on directed networks and we evaluate the interarrival time distribution between successive walkers arriving at a target. We show that the temporal organization of this process, which models information propagation on unstructured peer to peer networks, has the same features as SPAM reaching a single user. We study the behavior of the message interarrival time distribution on three different network topologies using two different rules for sending messages. In all networks the propagation is not a pure Poisson process. It shows universal features on Poissonian networks and a more complex behavior on scale free networks. Results open the possibility to indirectly learn about the process of sending messages on networks with unknown topologies, by studying interarrival times at any node of the network.
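The mapping described above, multiple independent random walkers on a directed network with interarrival times recorded at a target, can be simulated directly. The toy topology and synchronous-start convention below are our assumptions, not the networks studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def interarrival_times(adj, n_walkers, target, steps, rng):
    """Launch independent random walkers on a directed graph given as an
    adjacency list; all walkers start at time 0 from random nodes.
    Record each walker's first-arrival time at `target`, then return the
    gaps between successive arrivals across walkers."""
    arrivals = []
    for _ in range(n_walkers):
        node = rng.integers(len(adj))
        for t in range(steps):
            node = rng.choice(adj[node])  # follow a uniformly random out-edge
            if node == target:
                arrivals.append(t)
                break
    arrivals.sort()
    return np.diff(arrivals)

# Small directed ring with one chord (a toy topology for illustration).
adj = [[1], [2], [3, 0], [0]]
gaps = interarrival_times(adj, n_walkers=200, target=0, steps=50, rng=rng)
print(len(gaps))
```

Fitting the empirical gap distribution against an exponential is the natural test of the paper's claim that the propagation is not a pure Poisson process.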
Multicomponent Supramolecular Systems: Self-Organization in Coordination-Driven Self-Assembly
Zheng, Yao-Rong; Yang, Hai-Bo; Ghosh, Koushik; Zhao, Liang; Stang, Peter J.
2009-01-01
The self-organization of multicomponent supramolecular systems involving a variety of two-dimensional (2-D) polygons and three-dimensional (3-D) cages is presented. Nine self-organizing systems, SS1–SS9, have been studied, each involving the simultaneous mixing of organoplatinum acceptors and pyridyl donors of varying geometry and their selective self-assembly into three or four specific 2-D (rectangular, triangular, and rhomboid) and/or 3-D (triangular prism and distorted and nondistorted trigonal bipyramidal) supramolecules. The formation of these discrete structures is characterized using NMR spectroscopy and electrospray ionization mass spectrometry (ESI-MS). In all cases, the self-organization process is directed by (1) the geometric information encoded within the molecular subunits and (2) a thermodynamically driven dynamic self-correction process. The result is the selective self-assembly of multiple discrete products from a randomly formed complex. The influence of key experimental variables, temperature and solvent, on the self-correction process and the fidelity of the resulting self-organization systems is also described. PMID:19544512
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
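A Poisson process with harmonic intensity is easy to sample on a finite window, which makes the scale invariance tangible. The restriction to an interval [a, b] and the inversion sampler below are standard constructions, not specifics from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def harmonic_poisson_sample(c, a, b, rng):
    """Sample a Poisson process on [a, b] with harmonic intensity
    lambda(x) = c / x.  The mean count is c * log(b / a); conditional on
    the count, points are i.i.d. with density proportional to 1/x,
    sampled by inversion: x = a * (b/a)**U with U ~ Uniform(0, 1)."""
    n = rng.poisson(c * np.log(b / a))
    u = rng.uniform(size=n)
    return np.sort(a * (b / a) ** u)

pts = harmonic_poisson_sample(c=100.0, a=1.0, b=np.e, rng=rng)
print(len(pts))  # fluctuates around 100 = c * log(e / 1)
```

Rescaling every point by a factor s maps this law onto the same harmonic process on [s*a, s*b], which is the scale invariance the abstract refers to.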
Restricted random search method based on taboo search in the multiple minima problem
NASA Astrophysics Data System (ADS)
Hong, Seung Do; Jhon, Mu Shik
1997-03-01
The restricted random search method is proposed as a simple Monte Carlo sampling method for quickly locating minima in the multiple-minima problem. The method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region, rather than a taboo list, is used, so sampling near a previously visited configuration is restricted. Applied to two-dimensional test functions and to argon clusters, the method proves a practical and efficient way to locate near-global configurations of both.
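The taboo-region idea can be sketched as a rejection rule on top of plain random search: candidates falling inside a ball around any previously visited configuration are discarded, pushing sampling toward unexplored regions. This is a minimal sketch of the concept, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def restricted_random_search(f, bounds, n_iter, taboo_radius, rng):
    """Random search in a box, rejecting candidates that fall inside a
    taboo ball of radius `taboo_radius` around any accepted point."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    visited = []
    best_x, best_f = None, np.inf
    while len(visited) < n_iter:
        x = rng.uniform(lo, hi)
        if any(np.linalg.norm(x - v) < taboo_radius for v in visited):
            continue  # inside a taboo region: restricted, resample
        visited.append(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# 2-D quadratic test function with its minimum at (1, -2).
best_x, best_f = restricted_random_search(
    lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
    bounds=[(-5, 5), (-5, 5)], n_iter=500, taboo_radius=0.05, rng=rng)
print(np.round(best_x, 2))
```

The taboo radius trades exploration against resolution: too large and the optimum's neighborhood may be blocked, too small and the method degenerates to unrestricted random search.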
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity offers better resistance to channel fading caused by spatial correlation.
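Wilkinson's (Fenton-Wilkinson) method matches the first two moments of the sum of correlated lognormals to a single lognormal. A minimal sketch, with illustrative parameters rather than the paper's channel values:

```python
import numpy as np

def fenton_wilkinson(mu, cov):
    """Wilkinson's approximation: for S = sum_i exp(X_i) with
    X ~ N(mu, cov), match E[S] and E[S^2] to a single lognormal and
    return its parameters (mu_z, sigma_z**2).  Uses
    E[exp(X_i + X_j)] = exp(mu_i + mu_j + (var_i + var_j)/2 + cov_ij)."""
    mu = np.asarray(mu, float)
    cov = np.asarray(cov, float)
    var = np.diag(cov)
    m1 = np.sum(np.exp(mu + var / 2))
    m2 = np.sum(np.exp(mu[:, None] + mu[None, :]
                       + (var[:, None] + var[None, :]) / 2 + cov))
    sigma2_z = np.log(m2 / m1**2)
    mu_z = np.log(m1) - sigma2_z / 2
    return mu_z, sigma2_z

# Two correlated log-normal sub-channels (illustrative parameters).
mu_z, s2_z = fenton_wilkinson([0.0, 0.0], [[0.25, 0.1], [0.1, 0.25]])
print(round(mu_z, 4), round(s2_z, 4))
```

By construction the approximating lognormal reproduces the exact mean of the sum, so error-rate integrals over the fitted density inherit the correct first moment.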
Generalized and synthetic regression estimators for randomized branch sampling
David L. R. Affleck; Timothy G. Gregoire
2015-01-01
In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...
Random Assignment and Informed Consent: A Case Study of Multiple Perspectives
ERIC Educational Resources Information Center
Walker, Robert; Hoggart, Lesley; Hamilton, Gayle
2008-01-01
Although random assignment is generally the preferred methodology in impact evaluations, it raises numerous ethical concerns, some of which are addressed by securing participants' informed consent. However, there has been little investigation of how consent is obtained in social experiments and the amount of information that can be conveyed--and…
Supplemental Reading Strategy Instruction for Adolescents: A Randomized Trial and Follow-up Study
ERIC Educational Resources Information Center
Cantrell, Susan Chambers; Almasi, Janice F.; Rintamaa, Margaret; Carter, Janis C.
2016-01-01
In this study, the authors examine the impact of a yearlong supplemental reading course involving daily instruction in the learning strategies curriculum on lower achieving adolescent students' reading achievement and motivation. Using a multiple-cohort randomized treatment-control group design over 4 years, they compared achievement and…
Common Language Effect Size for Multiple Treatment Comparisons
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2015-01-01
Researchers who need to explain treatment effects to laypeople can translate Cohen's effect size (standardized mean difference) to a common language effect size--a probability of a random observation from one population being larger than a random observation from the other population. This common language effect size can be extended to represent…
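The translation described can be sketched with the Python standard library. For two normal populations with equal variances and standardized mean difference d, the common language effect size is P(X > Y) = Φ(d/√2):

```python
from math import sqrt
from statistics import NormalDist

def common_language_es(d):
    """Common language effect size: the probability that a random draw from
    group 1 exceeds a random draw from group 2, given Cohen's d and equal
    variances. CL = Phi(d / sqrt(2))."""
    return NormalDist().cdf(d / sqrt(2))
```

For a "large" effect of d = 0.8 this gives about 0.71: roughly 71% of random treatment-control pairings favor the treated observation.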
How to select among available options for the treatment of multiple myeloma.
Harousseau, J L
2012-09-01
The introduction of novel agents (thalidomide, bortezomib and lenalidomide) into the frontline therapy of multiple myeloma has markedly improved outcomes, both in younger patients who are candidates for high-dose therapy plus autologous stem-cell transplantation (HDT/ASCT) and in elderly patients. In the HDT/ASCT paradigm, novel agents may be used as induction therapy or after HDT/ASCT as consolidation and/or maintenance therapy. It is now possible to achieve up to 70% complete plus very good partial remission after HDT/ASCT and 70% 3-year progression-free survival (PFS). However, long-term non-intensive therapy may also yield high response rates and prolonged PFS. Randomized trials comparing these two strategies are underway. In elderly patients, six randomized studies show the benefit of adding thalidomide to melphalan-prednisone (MP). A large randomized trial has also shown that the combination bortezomib-MP is superior to MP on all parameters measuring response and outcome. Finally, the role of maintenance therapy is currently being evaluated; one randomized trial has shown that low-dose lenalidomide maintenance prolongs PFS.
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
SUMMARY In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
Response Strength in Extreme Multiple Schedules
ERIC Educational Resources Information Center
McLean, Anthony P.; Grace, Randolph C.; Nevin, John A.
2012-01-01
Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess…
A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons
ERIC Educational Resources Information Center
Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.
2016-01-01
Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…
"L"-Bivariate and "L"-Multivariate Association Coefficients. Research Report. ETS RR-08-40
ERIC Educational Resources Information Center
Kong, Nan; Lewis, Charles
2008-01-01
Given a system of multiple random variables, a new measure called the "L"-multivariate association coefficient is defined using (conditional) entropy. Unlike traditional correlation measures, the L-multivariate association coefficient measures the multiassociations or multirelations among the multiple variables in the given system; that…
Khalili, Mohammad; Eghtesadi, Shahryar; Mirshafiey, Abbas; Eskandari, Ghazaleh; Sanoobar, Meisam; Sahraian, Mohamad Ali; Motevalian, Abbas; Norouzi, Abbas; Moftakhar, Shirin; Azimi, Amirreza
2014-01-01
Multiple sclerosis is a neurodegenerative and demyelinating disease of the central nervous system. High levels of oxidative stress are associated with inflammation and play an important role in the pathogenesis of multiple sclerosis. This double-blind, randomized controlled clinical study was carried out to determine the effect of daily consumption of lipoic acid on oxidative stress among multiple sclerosis patients. A total of 52 relapsing-remitting multiple sclerosis patients, aged 18-50 years with Expanded Disability Status Scale ≤5.5, were assigned to consume either lipoic acid (1200 mg/day) or placebo capsules for 12 weeks. Fasting blood samples were collected before the first dose was taken and 12 hours after the last. Dietary intakes were obtained by using 3-day dietary records. Consumption of lipoic acid resulted in a significant improvement of total antioxidant capacity (TAC) in comparison to the placebo group (P = 0.004). Although a significant change of TAC (-1511 mmol/L, P = 0.001) was found within the lipoic acid group, other markers of oxidative stress including superoxide dismutase activity, glutathione peroxidase activity, and malondialdehyde levels were not affected by lipoic acid consumption. These results suggest that 1200 mg of lipoic acid improves serum TAC among multiple sclerosis patients but does not affect other markers of oxidative stress.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and, in turn, on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
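The AR-fit-then-transform pipeline the abstract describes (originally a FORTRAN algorithm) can be sketched in Python. The Yule-Walker fit and the psi-weight recursion below are the standard textbook versions, not Scargle's specific implementation:

```python
import numpy as np

def fit_ar_yule_walker(x, p):
    """Estimate AR(p) coefficients from sampled data via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])  # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def ar_to_ma(phi, n_terms=20):
    """Transform AR coefficients to the equivalent MA (pulse) representation:
    psi_0 = 1, psi_k = sum_j phi_j * psi_{k-j}."""
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for k in range(1, n_terms):
        for j, coef in enumerate(phi, start=1):
            if k - j >= 0:
                psi[k] += coef * psi[k - j]
    return psi
```

The MA weights are the "sequence of pulses" interpretation: for an AR(1) process with coefficient 0.5 the psi-weights decay geometrically as 0.5^k.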
Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra
2016-08-05
In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether the manual documentation process can be made more efficient by using natural language processing methods for multiclass classification of free-text diagnostic reports, in order to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, with manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of the patients in the multiple myeloma research database of Heidelberg University Hospital, selected at random (737 patients in total). An EDC system was set up, and two data entry specialists independently performed manual documentation of at least nine specific data elements characterizing multiple myeloma. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, repeated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used.
The created annotated corpus consists of 737 distinct diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even though the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no single best practice for the automatic classification of data elements from free-text diagnostic reports.
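A toy illustration of the maximum-entropy idea behind the MEC (not the authors' pipeline): bag-of-words features plus binary logistic regression, which is exactly a two-class maximum entropy classifier. The example texts and vocabulary below are invented:

```python
import numpy as np

def bag_of_words(texts, vocab=None):
    """Count features over a whitespace-tokenized vocabulary."""
    if vocab is None:
        vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for row, t in enumerate(texts):
        for w in t.lower().split():
            if w in index:
                X[row, index[w]] += 1
    return X, vocab

def train_maxent(X, y, lr=0.5, steps=500):
    """Binary logistic regression, i.e. a two-class maximum entropy classifier,
    trained by full-batch gradient descent on the log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # class-1 probabilities
        g = p - y                                 # log-loss gradient signal
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)
```

An SVM swaps the log-loss for a hinge loss over the same features; in both cases the hard part in practice is the diagnosis splitting and preprocessing, as the results above show.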
Zhou, L; Qu, Z G; Ding, T; Miao, J Y
2016-04-01
The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.
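The Langmuir adsorption kinetics used above to gauge interfacial mass transfer reduce to a simple rate equation, dq/dt = k_a·c·(q_max − q) − k_d·q, which can be sketched with forward Euler integration (all parameter values below are illustrative, not from the paper):

```python
def langmuir_kinetics(c, q_max, k_a, k_d, dt=1e-3, steps=20000):
    """Forward-Euler integration of dq/dt = k_a*c*(q_max - q) - k_d*q:
    surface loading q(t) under Langmuir kinetics at constant bulk
    concentration c, starting from a clean surface."""
    q = 0.0
    for _ in range(steps):
        q += dt * (k_a * c * (q_max - q) - k_d * q)
    return q
```

At long times q approaches the Langmuir isotherm value q_max·k_a·c/(k_a·c + k_d); how quickly the pore-scale system reaches it is what the external and internal resistances in the study control.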
The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees
NASA Astrophysics Data System (ADS)
Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard
2017-09-01
Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a finite number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with a fitness whose support is unbounded and whose tails are at most exponential. In the absence of integrable aging, such processes are explosive.
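A minimal discrete-time simulation of a preferential attachment tree with multiplicative fitness (the fitness function here is a placeholder, and the paper's aging kernel is omitted):

```python
import random

def pa_tree(n, fitness=lambda i: 1.0, seed=0):
    """Grow a preferential attachment tree: each new node t attaches to an
    existing node i with probability proportional to fitness(i) * degree(i).
    Returns the degree sequence."""
    rng = random.Random(seed)
    deg = [1, 1]                                  # nodes 0 and 1 share the first edge
    for t in range(2, n):
        weights = [fitness(i) * deg[i] for i in range(t)]
        target = rng.choices(range(t), weights=weights)[0]
        deg[target] += 1
        deg.append(1)
    return deg
```

With constant fitness this is the classical preferential attachment tree with a power-law degree tail; a decaying attachment kernel, as with the integrable aging in the paper, would thin out the hubs toward exponential tails.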
Paltamaa, Jaana; Sjögren, Tuulikki; Peurala, Sinikka H; Heinonen, Ari
2012-10-01
To determine the effects of physiotherapy interventions on balance in people with multiple sclerosis. A systematic literature search was conducted in Medline, Cinahl, Embase, PEDro, both electronically and by manual search up to March 2011. Randomized controlled trials of physiotherapy interventions in people with multiple sclerosis, with an outcome measure linked to the International Classification of Functioning, Disability and Health (ICF) category of "Changing and maintaining body position", were included. The quality of studies was determined by the van Tulder criteria. Meta-analyses were performed in subgroups according to the intervention. After screening 233 full-text papers, 11 studies were included in a qualitative analysis and 7 in a meta-analysis. The methodological quality of the studies ranged from poor to moderate. Low evidence was found for the efficacy of specific balance exercises, physical therapy based on an individualized problem-solving approach, and resistance and aerobic exercises on improving balance among ambulatory people with multiple sclerosis. These findings indicate small, but significant, effects of physiotherapy on balance in people with multiple sclerosis who have a mild to moderate level of disability. However, evidence for severely disabled people is lacking, and further research is needed.
Gravitational lensing by an ensemble of isothermal galaxies
NASA Technical Reports Server (NTRS)
Katz, Neal; Paczynski, Bohdan
1987-01-01
Calculation of 28,000 models of gravitational lensing of a distant quasar by an ensemble of randomly placed galaxies, each having a singular isothermal mass distribution, is reported. The average surface mass density was 0.2 of the critical value in all models. It is found that the surface mass density averaged over the area of the smallest circle that encompasses the multiple images is 0.82, only slightly smaller than expected from a simple analytical model of Turner et al. (1984). The probability of obtaining multiple images is also as large as expected analytically. Gravitational lensing is dominated by the matter in the beam, i.e., by the beam convergence. The cases where the multiple imaging is due to asymmetry in the mass distribution (i.e., due to shear) are very rare. Therefore, the observed gravitational-lens candidates for which no lensing object has been detected between the images cannot be a result of asymmetric mass distribution outside the images, at least in a model with randomly distributed galaxies. A surprisingly large number of large separations between the multiple images is found: up to 25 percent of multiple images have angular separations 2 to 4 times larger than expected from a simple analytical model.
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNGs) rely on a process that, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576
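One of the standard statistical tests such generators must pass, the NIST frequency (monobit) test, is easy to sketch; a p-value above the chosen significance level (commonly 0.01) is consistent with uniform random bits:

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: under the null hypothesis of
    uniform random bits, p = erfc(|S_n| / sqrt(2 n)), where S_n is the
    difference between the counts of ones and zeros."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))
```

This is only the first of the suite's tests; a biased or constant stream fails it immediately, while a perfectly balanced stream scores p = 1.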
Signal processor for processing ultrasonic receiver signals
Fasching, George E.
1980-01-01
A signal processor is provided which uses an analog integrating circuit in conjunction with a set of digital counters controlled by a precision clock for sampling timing to provide an improved presentation of an ultrasonic transmitter/receiver signal. The signal is sampled relative to the transmitter trigger signal timing at precise times, the selected number of samples are integrated and the integrated samples are transferred and held for recording on a strip chart recorder or converted to digital form for storage. By integrating multiple samples taken at precisely the same time with respect to the trigger for the ultrasonic transmitter, random noise, which is contained in the ultrasonic receiver signal, is reduced relative to the desired useful signal.
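The noise-reduction principle at work, averaging N trigger-locked samples shrinks uncorrelated noise by about √N while preserving the signal, can be sketched with numpy; the pulse shape and noise level below are illustrative:

```python
import numpy as np

def coherent_average(signal, n_sweeps, noise_std, rng):
    """Average n_sweeps noisy copies of a trigger-locked signal; uncorrelated
    noise shrinks by about 1/sqrt(n_sweeps), the signal is preserved."""
    traces = signal + noise_std * rng.standard_normal((n_sweeps, signal.size))
    return traces.mean(axis=0)

t = np.linspace(0.0, 1.0, 500)
pulse = np.exp(-((t - 0.3) ** 2) / 0.002)        # an echo-like received pulse
avg = coherent_average(pulse, 64, 1.0, np.random.default_rng(42))
```

With 64 sweeps the residual noise is roughly one eighth of the single-trace level, which is why sampling "at precisely the same time with respect to the trigger" matters: any jitter decorrelates the signal and it too would average away.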
Self-organisation of random oscillators with Lévy stable distributions
NASA Astrophysics Data System (ADS)
Moradi, Sara; Anderson, Johan
2017-08-01
A novel possibility of self-organized behaviour of stochastically driven oscillators is presented. It is shown that synchronization by Lévy stable processes is significantly more efficient than that by oscillators with Gaussian statistics. The impact of outlier events from the tail of the distribution function was examined by artificially introducing a few additional oscillators with very strong coupling strengths and it is found that remarkably even one such rare and extreme event may govern the long term behaviour of the coupled system. In addition to the multiplicative noise component, we have investigated the impact of an external additive Lévy distributed noise component on the synchronisation properties of the oscillators.
Seizure Forecasting and the Preictal State in Canine Epilepsy.
Varatharajah, Yogatheesan; Iyer, Ravishankar K; Berry, Brent M; Worrell, Gregory A; Brinkmann, Benjamin H
2017-02-01
The ability to predict seizures may enable patients with epilepsy to better manage their medications and activities, potentially reducing side effects and improving quality of life. Forecasting epileptic seizures remains a challenging problem, but machine learning methods using intracranial electroencephalographic (iEEG) measures have shown promise. A machine-learning-based pipeline was developed to process iEEG recordings and generate seizure warnings. Results support the ability to forecast seizures at rates greater than a Poisson random predictor for all feature sets and machine learning algorithms tested. In addition, subject-specific neurophysiological changes in multiple features are reported preceding lead seizures, providing evidence supporting the existence of a distinct and identifiable preictal state.
Stochastic resonance and noise delayed extinction in a model of two competing species
NASA Astrophysics Data System (ADS)
Valenti, D.; Fiasconaro, A.; Spagnolo, B.
2004-01-01
We study the role of noise in the dynamics of two competing species. We consider generalized Lotka-Volterra equations in the presence of a multiplicative noise, which models the interaction between the species and the environment. The interaction parameter between the species is a random process which obeys a stochastic differential equation with a generalized bistable potential in the presence of a periodic driving term, which accounts for the environmental temperature variation. We find noise-induced periodic oscillations of the species concentrations and a stochastic resonance phenomenon. We also find a nonmonotonic behavior of the mean extinction time of one of the two competing species as a function of the additive noise intensity.
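An Euler-Maruyama sketch of competing species with multiplicative noise; this simplification holds the interaction parameter β fixed rather than driving it through the bistable potential described above:

```python
import numpy as np

def competing_species(beta, noise, steps=50000, dt=1e-3, seed=0):
    """Euler-Maruyama integration of the competition model
    dx = x(1 - x - beta*y) dt + sqrt(noise)*x dW_x
    dy = y(1 - y - beta*x) dt + sqrt(noise)*y dW_y,
    i.e. logistic growth with symmetric competition and multiplicative noise."""
    rng = np.random.default_rng(seed)
    x, y = 0.5, 0.5
    for _ in range(steps):
        dwx, dwy = rng.standard_normal(2) * np.sqrt(dt)
        x += x * (1 - x - beta * y) * dt + np.sqrt(noise) * x * dwx
        y += y * (1 - y - beta * x) * dt + np.sqrt(noise) * y * dwy
    return x, y
```

In the noise-free limit with β < 1 the species coexist at x = y = 1/(1+β); turning the noise on (and letting β wander in its bistable potential) is what produces the oscillations and resonance the paper reports.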
Complete Nagy-Soper subtraction for next-to-leading order calculations in QCD
NASA Astrophysics Data System (ADS)
Bevilacqua, G.; Czakon, M.; Kubocz, M.; Worek, M.
2013-10-01
We extend the Helac-Dipoles package with the implementation of a new subtraction formalism, first introduced by Nagy and Soper in the formulation of an improved parton shower. We discuss a systematic, semi-numerical approach for the evaluation of the integrated subtraction terms for both massless and massive partons, which provides the missing ingredient for a complete implementation. In consequence, the new scheme can now be used as part of a complete NLO QCD calculation for processes with arbitrary parton masses and multiplicities. We assess its overall performance through a detailed comparison with results based on Catani-Seymour subtraction. The importance of random polarization and color sampling of the external partons is also examined.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
Random Walks in a One-Dimensional Lévy Random Environment
NASA Astrophysics Data System (ADS)
Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena
2016-04-01
We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
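A minimal simulation of the quenched walk: marked points with heavy-tailed (Pareto) nearest-neighbor gaps, and a symmetric random walk on those points. The constant-speed interpolation enters only through the travel time, which accumulates the gap lengths:

```python
import random

def levy_lorentz_walk(n_points=10001, n_steps=1000, alpha=1.5, seed=0):
    """Symmetric random walk on marked points whose nearest-neighbor gaps are
    i.i.d. Pareto(alpha): finite mean but infinite variance for 1 < alpha < 2.
    Returns the point array, the final index, and the unit-speed travel time."""
    rng = random.Random(seed)
    points = [0.0]
    for _ in range(n_points - 1):
        points.append(points[-1] + rng.paretovariate(alpha))
    i = n_points // 2              # start from the middle marked point
    elapsed = 0.0                  # travel time at unit speed
    for _ in range(n_steps):       # n_steps << n_points // 2 keeps us in range
        j = i + rng.choice((-1, 1))
        elapsed += points[max(i, j)] - points[min(i, j)]
        i = j
    return points, i, elapsed
```

Averaging the rescaled displacement over many environments (annealed) or within one fixed environment (quenched) is what distinguishes the two CLTs established in the paper.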
Diversity of multilayer networks and its impact on collaborating epidemics
NASA Astrophysics Data System (ADS)
Min, Yong; Hu, Jiaren; Wang, Weihong; Ge, Ying; Chang, Jie; Jin, Xiaogang
2014-12-01
Interacting epidemics on diverse multilayer networks are increasingly important in modeling and analyzing the diffusion processes of real complex systems. A viral agent spreading on one layer of a multilayer network can interact with its counterparts by promoting (cooperative interaction), suppressing (competitive interaction), or inducing (collaborating interaction) its diffusion on other layers. Collaborating interaction displays different patterns: (i) random collaboration, where intralayer or interlayer induction has the same probability; (ii) concentrating collaboration, where consecutive intralayer induction is guaranteed with a probability of 1; and (iii) cascading collaboration, where consecutive intralayer induction is banned with a probability of 0. In this paper, we develop a top-bottom framework that uses only two distributions, the overlaid degree distribution and edge-type distribution, to model collaborating epidemics on multilayer networks. We then state the response of three collaborating patterns to structural diversity (evenness and difference of network layers). For viral agents with small transmissibility, we find that random collaboration is more effective in networks with higher diversity (high evenness and difference), while the concentrating pattern is more suitable in uneven networks. Interestingly, the cascading pattern requires a network with moderate difference and high evenness, and the moderately uneven coupling of multiple network layers can effectively increase robustness to resist cascading failure. With large transmissibility, however, we find that all collaborating patterns are more effective in high-diversity networks. Our work provides a systemic analysis of collaborating epidemics on multilayer networks. The results enhance our understanding of biotic and informative diffusion through multiple vectors.
Multiwavelength ytterbium-Brillouin random Rayleigh feedback fiber laser
NASA Astrophysics Data System (ADS)
Wu, Han; Wang, Zinan; Fan, Mengqiu; Li, Jiaqi; Meng, Qingyang; Xu, Dangpeng; Rao, Yunjiang
2018-03-01
In this letter, we experimentally demonstrate the multiwavelength ytterbium-Brillouin random fiber laser for the first time, in the half-open cavity formed by a fiber loop mirror and randomly distributed Rayleigh mirrors. With a cladding-pumped ytterbium-doped fiber and a long TrueWave fiber, the narrow linewidth Brillouin pump can generate multiple Brillouin Stokes lines with hybrid ytterbium-Brillouin gain. Up to six stable channels with a spacing of about 0.06 nm are obtained. This work extends the operation wavelength of the multiwavelength Brillouin random fiber laser to the 1 µm band, and has potential in various applications.
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
Creativity in art and science: are there two cultures?
Andreasen, Nancy C.
2012-01-01
The study of creativity is characterized by a variety of key questions, such as the nature of the creative process, whether there are multiple types of creativity, the relationship between high levels of creativity (“Big C”) and everyday creativity (“little c”), and the neural basis of creativity. Herein we examine the question of the relationship between creativity in the arts and the sciences, and use functional magnetic resonance imaging to explore the neural basis of creativity in a group of “Big C” individuals from both domains using a word association protocol. The findings give no support for the notion that the artists and scientists represent “two cultures.” Rather, they suggest that very gifted artists and scientists have association cortices that respond in similar ways. Both groups display a preponderance of activation in brain circuits involved in higher-order socioaffective processing and Random Episodic Silent Thought/the default mode. PMID:22577304
Emerging and recurrent issues in drug development.
Anello, C
This paper reviews several emerging and recurrent issues relating to the drug development process. These emerging issues include changes to the FDA regulatory environment, internationalization of drug development, advances in computer technology and visualization tools, and efforts to incorporate meta-analysis methodology. Recurrent issues include: renewed interest in statistical methods for handling subgroups in the design and analysis of clinical trials; renewed interest in alternatives to the 'intention-to-treat' analysis in the presence of non-compliance in randomized clinical trials; renewed interest in methodology to address the multiplicities resulting from a variety of sources inherent in the drug development process; and renewed interest in methods to assure data integrity. These emerging and recurrent issues provide a continuing challenge to the international community of statisticians involved in drug development. Moreover, the involvement of statisticians with different perspectives continues to enrich the field and contributes to improvement in the public health.
Complete hazard ranking to analyze right-censored data: An ALS survival study.
Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang
2017-12-01
Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners, including Gaussian process regression, Lasso, and random forest, on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked in first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset than the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.
Rockfall travel distances theoretical distributions
NASA Astrophysics Data System (ADS)
Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea
2017-04-01
The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolating the probability of rockfall propagation either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits describing it on average by an energy line corresponding to the loss of energy along the path. But the loss of energy can also be modelled as a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal, or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are obtained either from theoretical considerations or by fitting data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigations.
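The multiplicative-loss assumption and its log-normal consequence can be illustrated with a short simulation (a sketch only: the per-impact loss range, the step count, and the use of the negative log of the remaining energy as a distance proxy are illustrative assumptions, not values from the study):

```python
import math
import random

def simulate_stop_distances(n_blocks=10000, n_steps=50, seed=1):
    """Simulate rockfall runout as a multiplicative energy-loss process.

    Each block starts with unit energy and loses a random fraction of it at
    every impact; the log of the remaining energy is then a sum of i.i.d.
    terms, so stop distances tend toward a log-normal shape.
    """
    rng = random.Random(seed)
    distances = []
    for _ in range(n_blocks):
        energy = 1.0
        for _ in range(n_steps):
            energy *= rng.uniform(0.85, 0.99)  # random fractional loss per impact
        distances.append(-math.log(energy))    # proxy for travel distance
    return distances

distances = simulate_stop_distances()
mean = sum(distances) / len(distances)
```

Under a multiplicative model of this kind, histograms of the simulated stop points can then be compared against the candidate distributions listed above.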
Compression based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
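The coarse-graining plus compression idea can be sketched as follows (zlib's DEFLATE stands in for the study's Lempel-Ziv implementation, and the synthetic "RR-like" series and byte quantization are illustrative assumptions):

```python
import random
import zlib

def coarse_grain(series, scale):
    """Average non-overlapping windows of length `scale` (coarse-graining)."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

def compression_entropy(series, scale=1):
    """Estimate entropy as compressed size / original size of the
    coarse-grained series, quantized to bytes and compressed with zlib
    (an LZ77-based stand-in for the Lempel-Ziv algorithm of the study)."""
    grained = coarse_grain(series, scale)
    lo, hi = min(grained), max(grained)
    span = (hi - lo) or 1.0
    data = bytes(int(255 * (x - lo) / span) for x in grained)
    return len(zlib.compress(data)) / len(data)

# A structured series should compress better (lower entropy) than its
# randomly shuffled surrogate, mirroring the paper's surrogate comparison.
rng = random.Random(0)
regular = [800 + 50 * ((i // 10) % 2) for i in range(1000)]  # synthetic RR-like data (ms)
surrogate = regular[:]
rng.shuffle(surrogate)
e_regular = compression_entropy(regular)
e_surrogate = compression_entropy(surrogate)
```

Repeating the estimate for several `scale` values yields the multiscale entropy profile described in the abstract.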
An extended car-following model considering random safety distance with different probabilities
NASA Astrophysics Data System (ADS)
Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi
2018-02-01
Because of differences in vehicle type or driving skill, driving strategies are not exactly the same, and different vehicles may travel at different speeds for the same headway. Since the optimal velocity function is determined by the safety distance, along with the maximum velocity and the headway, an extended car-following model accounting for random safety distances with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. The cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances selected with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
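The role of the safety distance in the optimal velocity function, and the random selection among several safety distances, can be sketched with a minimal simulation (the tanh-type optimal velocity function is standard in this model family, but the specific parameter values, selection probabilities, and ring-road setup below are illustrative assumptions, not the paper's):

```python
import math
import random

def optimal_velocity(headway, safety_distance, v_max=2.0):
    """Optimal velocity function of the OV family: determined by the safety
    distance, the maximum velocity, and the headway."""
    return (v_max / 2.0) * (math.tanh(headway - safety_distance)
                            + math.tanh(safety_distance))

def step(headways, velocities, rng, a=1.5, dt=0.1,
         safety_distances=(2.0, 2.5), probs=(0.7, 0.3)):
    """One Euler step of a circular-road car-following model in which each
    vehicle draws its safety distance at random with given probabilities."""
    n = len(headways)
    acc = []
    for i in range(n):
        sd = rng.choices(safety_distances, weights=probs)[0]
        acc.append(a * (optimal_velocity(headways[i], sd) - velocities[i]))
    new_v = [v + dt * ai for v, ai in zip(velocities, acc)]
    # headway of car i changes with the velocity difference to its leader
    new_h = [h + dt * (new_v[(i + 1) % n] - new_v[i])
             for i, h in enumerate(headways)]
    return new_h, new_v

rng = random.Random(42)
h = [2.2] * 20                              # uniform initial headways on a ring
v = [optimal_velocity(2.2, 2.0)] * 20       # start at one equilibrium velocity
for _ in range(100):
    h, v = step(h, v, rng)
```

Because the road is a ring, the total headway is conserved exactly, which makes the simulation easy to sanity-check while the random safety distances perturb individual velocities.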
Controlling the motion of multiple objects on a Chladni plate
NASA Astrophysics Data System (ADS)
Zhou, Quan; Sariola, Veikko; Latifi, Kourosh; Liimatainen, Ville
2016-09-01
The origin of the idea of moving objects by acoustic vibration can be traced back to 1787, when Ernst Chladni reported the first detailed studies on the aggregation of sand onto nodal lines of a vibrating plate. Since then and to this date, the prevailing view has been that the particle motion out of nodal lines is random, implying uncontrollability. But how random really is the out-of-nodal-lines motion on a Chladni plate? Here we show that the motion is sufficiently regular to be statistically modelled, predicted and controlled. By playing carefully selected musical notes, we can control the position of multiple objects simultaneously and independently using a single acoustic actuator. Our method allows independent trajectory following, pattern transformation and sorting of multiple miniature objects in a wide range of materials, including electronic components, water droplets loaded on solid carriers, plant seeds, candy balls and metal parts.
Gottfredson, Nisha C; Sterba, Sonya K; Jackson, Kristina M
2017-01-01
Random coefficient-dependent (RCD) missingness is a non-ignorable mechanism through which missing data can arise in longitudinal designs. RCD, which we cannot test for, is a problematic form of missingness that occurs if subject-specific random effects correlate with propensity for missingness or dropout. Particularly when covariate missingness is a problem, investigators typically handle missing longitudinal data by using single-level multiple imputation procedures implemented with long-format data, which ignores within-person dependency entirely, or implemented with wide-format (i.e., multivariate) data, which ignores some aspects of within-person dependency. When either of these standard approaches to handling missing longitudinal data is used, RCD missingness leads to parameter bias and incorrect inference. We explain why multilevel multiple imputation (MMI) should alleviate bias induced by an RCD missing data mechanism under conditions that contribute to stronger determinacy of random coefficients. We evaluate our hypothesis with a simulation study. Three design factors are considered: intraclass correlation (ICC; ranging from .25 to .75), number of waves (ranging from 4 to 8), and percent of missing data (ranging from 20 to 50%). We find that MMI greatly outperforms the single-level wide-format (multivariate) method for imputation under an RCD mechanism. For the MMI analyses, bias was most alleviated when the ICC was high, when there were more waves of data, and when there was less missing data. Practical recommendations for handling longitudinal missing data are suggested.
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that, in a book of prices, changes happen at the jump points of a Poisson process with a random intensity, i.e. the moments of change follow a random process of the Cox type. We calculate the cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the process of random intensity is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
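The key property of such a doubly stochastic (Cox) model, overdispersion relative to an ordinary Poisson process, can be checked with a small simulation (the exponential distribution of the random intensity and all parameter values are illustrative assumptions):

```python
import random

def cox_process_count(t_max, rng, mean_intensity=5.0):
    """One realization of a Cox (doubly stochastic Poisson) count: first draw
    a random intensity, then count Poisson events on [0, t_max] at that
    intensity, via exponential inter-arrival times."""
    lam = rng.expovariate(1.0 / mean_intensity)  # random intensity, mean 5
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > t_max:
            return n
        n += 1

rng = random.Random(7)
counts = [cox_process_count(10.0, rng) for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For a Cox process the count variance exceeds the mean (overdispersion),
# unlike an ordinary Poisson process where the two are equal.
```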
Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm
Bossard, Jeremy A.; Lin, Lan; Werner, Douglas H.
2016-01-01
Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as ‘chaotic’, but we propose that apparent ‘chaotic’ natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too ‘perfect’ to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the ‘chaotic’ (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and ‘chaotic’ superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime. PMID:26763335
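The idea of a random multi-generator Cantor bar can be sketched as follows (the two generator pools and the recursion depth are illustrative assumptions; the paper evolves optimal generators with a genetic algorithm rather than fixing them):

```python
import random

def random_cantor_bars(depth, rng):
    """Build a random Cantor superlattice: at each level, every segment is
    subdivided using a generator drawn at random from a small pool of
    generators (a 'multi-generator' random fractal)."""
    pools = [
        ((0.0, 1 / 3), (2 / 3, 1.0)),  # classic middle-third generator
        ((0.0, 0.4), (0.6, 1.0)),      # a second, slightly wider generator
    ]
    segments = [(0.0, 1.0)]
    for _ in range(depth):
        gen = pools[rng.randrange(len(pools))]
        next_segments = []
        for lo, hi in segments:
            width = hi - lo
            for g_lo, g_hi in gen:
                next_segments.append((lo + g_lo * width, lo + g_hi * width))
            # each kept sub-interval becomes a 'bar' of the superlattice
        segments = next_segments
    return segments

rng = random.Random(3)
bars = random_cantor_bars(5, rng)
total_width = sum(hi - lo for lo, hi in bars)
```

Mapping the resulting bars to alternating material layers produces superlattices whose disorder lies between the deterministic ("too perfect") and the purely random cases.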
A generic minimization random allocation and blinding system on web.
Cai, Hongwei; Xia, Jielai; Xu, Dezhong; Gao, Donghuai; Yan, Yongping
2006-12-01
Minimization is a dynamic randomization method for clinical trials. Although recommended by many researchers, the utilization of minimization has been seldom reported in randomized trials, mainly because of the controversy surrounding the validity of conventional analyses and its complexity in implementation. However, both the statistical and clinical validity of minimization were demonstrated in recent studies. A minimization random allocation system integrated with a blinding function, which could facilitate the implementation of this method in general clinical trials, has not been reported. SYSTEM OVERVIEW: The system is a web-based random allocation system using the Pocock and Simon minimization method. It also supports multiple treatment arms within a trial, multiple simultaneous trials, and blinding without further programming. The system was constructed with a generic database schema design, the Pocock and Simon minimization method, and a blinding method. It was coded in the Microsoft Visual Basic and Active Server Pages (ASP) programming languages, and all datasets were managed with a Microsoft SQL Server database. Some critical programming code is also provided. SIMULATIONS AND RESULTS: Two clinical trials were simulated simultaneously to test the system's applicability. Not only balanced groups but also blinded allocation results were achieved in both trials. Practical considerations for the minimization method, along with the benefits, general applicability, and drawbacks of the technique implemented in this system, are discussed. Promising features of the proposed system are also summarized.
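The Pocock and Simon minimization rule at the heart of such a system can be sketched as follows (a simplified version: the range is used as the imbalance measure, the assignment probability p = 0.8 and the factor names are illustrative assumptions, and the blinding layer is omitted):

```python
import random

def minimization_assign(new_patient, allocations, factors, arms, rng, p=0.8):
    """Pocock-Simon minimization sketch: for each candidate arm, sum over the
    prognostic factors the imbalance that would result if the new patient
    joined that arm; assign the least-imbalanced arm with probability `p`
    to retain some randomness."""
    scores = {}
    for arm in arms:
        total = 0
        for f in factors:
            level = new_patient[f]
            # counts of this factor level per arm after a hypothetical assignment
            counts = []
            for a in arms:
                c = sum(1 for (pa, pf) in allocations if pa == a and pf[f] == level)
                counts.append(c + (1 if a == arm else 0))
            total += max(counts) - min(counts)  # range as the imbalance measure
        scores[arm] = total
    best = min(arms, key=lambda a: scores[a])
    others = [a for a in arms if a != best]
    if rng.random() < p or not others:
        return best
    return rng.choice(others)

rng = random.Random(0)
arms = ["treatment", "control"]
factors = ["sex", "age_group"]           # illustrative prognostic factors
allocations = []                         # list of (arm, patient_factors)
for i in range(40):
    patient = {"sex": i % 2, "age_group": (i // 2) % 3}
    arm = minimization_assign(patient, allocations, factors, arms, rng)
    allocations.append((arm, patient))

counts = {a: sum(1 for (pa, _) in allocations if pa == a) for a in arms}
```

Because each assignment minimizes prospective imbalance across all factor levels, the arms stay closely balanced even for small samples, which is the property the simulated trials in the abstract verify.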
NASA Astrophysics Data System (ADS)
Guex, Guillaume
2016-05-01
In recent articles about graphs, different models have proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. With, again, a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
NASA Astrophysics Data System (ADS)
Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.
2007-09-01
In infrared patient setup adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients, to be compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration, when TRE is minimized for OARs, were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
Charging of multiple interacting particles by contact electrification.
Soh, Siowling; Liu, Helena; Cademartiri, Rebecca; Yoon, Hyo Jae; Whitesides, George M
2014-09-24
Many processes involve the movement of a disordered collection of small particles (e.g., powders, grain, dust, and granular foods). These particles move chaotically, interact randomly among themselves, and gain electrical charge by contact electrification. Understanding the mechanisms of contact electrification of multiple interacting particles has been challenging, in part due to the complex movement and interactions of the particles. To examine the processes contributing to contact electrification at the level of single particles, a system was constructed in which an array of millimeter-sized polymeric beads of different materials were agitated on a dish. The dish was filled almost completely with beads, such that beads did not exchange positions. At the same time, during agitation, there was sufficient space for collisions with neighboring beads. The charge of the beads was measured individually after agitation. Results of systematic variations in the organization and composition of the interacting beads showed that three mechanisms determined the steady-state charge of the beads: (i) contact electrification (charging of beads of different materials), (ii) contact de-electrification (discharging of beads of the same charge polarity to the atmosphere), and (iii) a long-range influence across beads not in contact with one another (occurring, plausibly, by diffusion of charge from a bead with a higher charge to a bead with a lower charge of the same polarity).
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2018-04-01
This paper examines a bulk arrival and batch service queueing system with functioning-server failure and multiple vacations. Customers arrive into the system in bulk according to a Poisson process with rate λ. Arriving customers are served in batches with a minimum of 'a' and a maximum of 'b' customers, according to the general bulk service rule. At a service completion epoch, if the queue length is less than 'a', then the server leaves for a vacation (secondary job) of random length. After a vacation completion, if the queue length is still less than 'a', then the server leaves for another vacation. The server keeps going on vacation until the queue length reaches the value 'a'. The server is not reliable at all times: it may fail while serving customers. Even if the server fails, the service process is not interrupted; it continues for the current batch of customers at a service rate lower than the regular one. The server is repaired after completing the service at the lower rate. The probability generating function of the queue size at an arbitrary time epoch is obtained for the modelled queueing system by using the supplementary variable technique. Moreover, various performance characteristics are also derived, with suitable numerical illustrations.
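The general bulk service rule with multiple vacations can be illustrated by simulation (a sketch under simplifying assumptions: single rather than bulk Poisson arrivals, exponential service and vacation times, and no server failures; the paper's analysis is instead analytical, via the supplementary variable technique):

```python
import random

def poisson_count(lam_t, rng):
    """Poisson(lam_t) count via accumulated unit-rate exponential gaps."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > lam_t:
            return n
        n += 1

def simulate_bulk_queue(n_batches, rng, lam=2.0, a=3, b=8,
                        service_mean=1.0, vacation_mean=0.5):
    """General bulk service rule with multiple vacations: while fewer than
    `a` customers wait, the server takes vacation after vacation; once the
    queue reaches `a`, it serves a batch of between `a` and `b` customers."""
    queue = 0
    batch_sizes = []
    while len(batch_sizes) < n_batches:
        while queue < a:  # multiple vacations until at least `a` are waiting
            v = rng.expovariate(1.0 / vacation_mean)
            queue += poisson_count(lam * v, rng)   # arrivals during vacation
        batch = min(queue, b)   # bulk service rule: a <= batch <= b
        queue -= batch
        batch_sizes.append(batch)
        s = rng.expovariate(1.0 / service_mean)
        queue += poisson_count(lam * s, rng)       # arrivals during service
    return batch_sizes

rng = random.Random(9)
batches = simulate_bulk_queue(500, rng)
```

Every served batch respects the a-to-b bounds by construction, which is the defining property of the general bulk service rule described above.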
Kazubke, Edda; Schüttpelz-Brauns, Katrin
2010-01-01
Background: Multiple choice questions (MCQs) are often used in exams of medical education and need careful quality management for example by the application of review committees. This study investigates whether groups communicating virtually by email are similar to face-to-face groups concerning their review process performance and whether a facilitator has positive effects. Methods: 16 small groups of students were examined, which had to evaluate and correct MCQs under four different conditions. In the second part of the investigation the changed questions were given to a new random sample for the judgement of the item quality. Results: There was no significant influence of the variables “form of review committee” and “facilitation”. However, face-to-face and virtual groups clearly differed in the required treatment times. The test condition “face to face without facilitation” was generally valued most positively concerning taking over responsibility, approach to work, sense of well-being, motivation and concentration on the task. Discussion: Face-to-face and virtual groups are equally effective in the review of MCQs but differ concerning their efficiency. The application of electronic review seems to be possible but is hardly recommendable because of the long process time and technical problems. PMID:21818213
Supervision of Facilitators in a Multisite Study: Goals, Process, and Outcomes
2010-01-01
Objective To describe the aims, implementation, and desired outcomes of facilitator supervision for both interventions (treatment and control) in Project Eban and to present the Eban Theoretical Framework for Supervision that guided the facilitators’ supervision. The qualifications and training of supervisors and facilitators are also described. Design This article provides a detailed description of supervision in a multisite behavioral intervention trial. The Eban Theoretical Framework for Supervision is guided by 3 theories: cognitive behavior therapy, the Life-long Model of Supervision, and “Empowering supervisees to empower others: a culturally responsive supervision model.” Methods Supervision is based on the Eban Theoretical Framework for Supervision, which provides guidelines for implementing both interventions using goals, process, and outcomes. Results Because of effective supervision, the interventions were implemented with fidelity to the protocol and were standard across the multiple sites. Conclusions Supervision of facilitators is a crucial aspect of multisite intervention research quality assurance. It provides them with expert advice, optimizes the effectiveness of facilitators, and increases adherence to the protocol across multiple sites. Based on the experience in this trial, some of the challenges that arise when conducting a multisite randomized control trial and how they can be handled by implementing the Eban Theoretical Framework for Supervision are described. PMID:18724192
Deep Learning Role in Early Diagnosis of Prostate Cancer
Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman
2018-01-01
The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and features extracted from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs three major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, the diffusion parameters, namely the apparent diffusion coefficients of the delineated prostate volumes at different b values, are estimated and normalized, and then refined using a generalized Gaussian Markov random field model. Third, the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values are constructed. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen-based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity-constraint sparse autoencoders that employ the apparent diffusion coefficient cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnostic accuracy (sensitivity = 88.9% and specificity = 100%), which indicates the promise of the presented computer-aided diagnostic system. PMID:29804518
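The step that converts a prostate-specific antigen reading into a diagnostic probability can be sketched with a K-nearest neighbor vote (the (PSA, label) training pairs below are fabricated for illustration; the study's classifier is trained on its own clinical data):

```python
def knn_probability(psa_value, training, k=5):
    """Return the fraction of malignant labels among the k training cases
    with the nearest PSA values -- a KNN estimate of the diagnostic
    probability for the given reading."""
    nearest = sorted(training, key=lambda t: abs(t[0] - psa_value))[:k]
    return sum(label for _, label in nearest) / k

# Illustrative, synthetic (PSA in ng/mL, malignant?) pairs.
training = [(1.2, 0), (2.5, 0), (3.8, 0), (4.5, 1), (6.1, 1),
            (7.4, 1), (9.0, 1), (0.8, 0), (5.2, 1), (2.9, 0)]
p_low = knn_probability(2.0, training)
p_high = knn_probability(8.0, training)
```

In the full system, these probabilities are then fused with the imaging-based probabilities from the stacked autoencoders.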
Coherent optical pulse sequencer for quantum applications.
Hosseini, Mahdi; Sparkes, Ben M; Hétet, Gabriel; Longdell, Jevon J; Lam, Ping Koy; Buchler, Ben C
2009-09-10
The bandwidth and versatility of optical devices have revolutionized information technology systems and communication networks. Precise and arbitrary control of an optical field that preserves optical coherence is an important requisite for many proposed photonic technologies. For quantum information applications, a device that allows storage and on-demand retrieval of arbitrary quantum states of light would form an ideal quantum optical memory. Recently, significant progress has been made in implementing atomic quantum memories using electromagnetically induced transparency, photon echo spectroscopy, off-resonance Raman spectroscopy and other atom-light interaction processes. Single-photon and bright-optical-field storage with quantum states have both been successfully demonstrated. Here we present a coherent optical memory based on photon echoes induced through controlled reversible inhomogeneous broadening. Our scheme allows storage of multiple pulses of light within a chosen frequency bandwidth, and stored pulses can be recalled in arbitrary order with any chosen delay between each recalled pulse. Furthermore, pulses can be time-compressed, time-stretched or split into multiple smaller pulses and recalled in several pieces at chosen times. Although our experimental results are so far limited to classical light pulses, our technique should enable the construction of an optical random-access memory for time-bin quantum information, and have potential applications in quantum information processing.
Ernst, Alexandra; Sourty, Marion; Roquet, Daniel; Noblet, Vincent; Gounot, Daniel; Blanc, Frédéric; de Seze, Jérôme; Manning, Liliann
2016-10-09
While the efficacy of mental visual imagery (MVI) to alleviate autobiographical memory (AM) impairment in multiple sclerosis (MS) patients has been documented, nothing is known about the brain changes sustaining that improvement. To explore this issue, 20 relapsing-remitting MS patients showing AM impairment were randomly assigned to two groups: experimental (n = 10), who underwent the MVI programme, and control (n = 10), who followed a sham verbal programme. Besides the stringent AM assessment, the patients underwent structural and functional MRI sessions, consisting in retrieving personal memories, within a pre-/post-facilitation study design. Only the experimental group showed a significant AM improvement post-facilitation, accompanied by changes in brain activation (medial and lateral frontal regions), functional connectivity (posterior brain regions), and grey matter volume (parahippocampal gyrus). Minor activation and functional connectivity changes were observed in the control group. The MVI programme improved AM in MS patients, leading to functional and structural changes reflecting (1) an increased reliance on brain regions sustaining a self-referential process; (2) a decreased reliance on those reflecting an effortful research process; and (3) better use of neural resources in brain regions sustaining MVI. Functional changes reported in the control group likely reflected ineffective attempts to use the sham strategy in AM.
A qualitative assessment of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
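The amplitude-modulation structure of the process, and the resulting non-Gaussian statistics, can be checked numerically (a sketch that reduces the three Gaussian processes to independent draws, ignoring their temporal correlation; the variance values are illustrative assumptions):

```python
import random

def turbulence_sample(n, rng, sigma=1.0, mean_scale=0.5):
    """Samples of the proposed model: the product of two Gaussian processes
    plus a third, slowly varying Gaussian mean value. Reducing each process
    to i.i.d. draws is enough to expose the heavy-tailed statistics of the
    amplitude-modulated term."""
    out = []
    for _ in range(n):
        modulated = rng.gauss(0, sigma) * rng.gauss(0, sigma)  # product term
        slow_mean = rng.gauss(0, mean_scale)                   # mean-value term
        out.append(modulated + slow_mean)
    return out

rng = random.Random(11)
x = turbulence_sample(20000, rng)
m = sum(x) / len(x)
var = sum((v - m) ** 2 for v in x) / len(x)
kurt = (sum((v - m) ** 4 for v in x) / len(x)) / var ** 2
# The product of Gaussians makes the kurtosis exceed the Gaussian value of 3,
# matching the intermittent character attributed to atmospheric velocity data.
```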
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. 
Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
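The Monte Carlo chain described above (input ensemble, deterministic hydrologic model, stochastic HUP, plus the EBFSR-style auxiliary randomization) can be sketched with stand-in models. Everything below, from the gamma-distributed inputs to the linear-reservoir response and Gaussian HUP, is an illustrative assumption, not the published IEF/HUP:

```python
import numpy as np

rng = np.random.default_rng(42)

n_members, n_steps = 200, 24

# 1) Input ensemble forecaster (IEF): an ensemble of precipitation series.
precip = rng.gamma(shape=2.0, scale=1.5, size=(n_members, n_steps))

# 2) Deterministic hydrologic model: here a toy linear-reservoir response.
def hydrologic_model(p):
    stage = np.zeros_like(p)
    for t in range(1, p.shape[-1]):
        stage[..., t] = 0.8 * stage[..., t - 1] + 0.5 * p[..., t]
    return stage

stage = hydrologic_model(precip)

# 3) HUP: add hydrologic uncertainty stochastically around the model output.
predictand = stage + rng.normal(0.0, 0.3, size=stage.shape)

# Auxiliary randomization (EBFSR idea): draw several HUP realizations per
# hydrologic-model run, so fewer expensive model runs yield a large ensemble.
extra = stage[:, None, :] + rng.normal(0.0, 0.3, size=(n_members, 5, n_steps))
big_ensemble = extra.reshape(-1, n_steps)
print(big_ensemble.shape)  # 5x more members than hydrologic-model runs
```

The point of the last step is that the cheap stochastic HUP, not the expensive deterministic model, supplies most of the ensemble size.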
The effect of dissipative inhomogeneous medium on the statistics of the wave intensity
NASA Technical Reports Server (NTRS)
Saatchi, Sasan S.
1993-01-01
A central point in the theory of wave propagation in random media is the derivation of closed-form equations describing the statistics of the propagating waves. In particular, in one-dimensional problems, the closed-form representation of multiple scattering effects is important since it contributes to the understanding of problems such as wave localization, backscattering enhancement, and intensity fluctuations. In this work, the propagation of plane waves in a layer of one-dimensional dissipative random medium is considered. The medium is modeled by a complex permittivity whose imaginary part is a constant representing the absorption. The one-dimensional problem is mathematically equivalent to the analysis of a transmission line with randomly perturbed distributed parameters and of a single-mode lossy waveguide, and the results can be used to study the propagation of radio waves through the atmosphere and the remote sensing of geophysical media. It is assumed that the scattering medium consists of an ensemble of one-dimensional point scatterers randomly positioned in a layer of thickness L with diffuse boundaries. A Poisson impulse process with density lambda is used to model the positions of the scatterers in the medium. By employing the Markov properties of this process, an exact closed-form equation of Kolmogorov-Feller type is obtained for the probability density of the reflection coefficient. This equation is solved by combining two limiting cases: (1) when the density of scatterers is small; and (2) when the medium is weakly dissipative. A two-variable perturbation method for small lambda is used to obtain solutions valid for thick layers. These solutions are then asymptotically evaluated for small dissipation. To show the effect of dissipation, the mean and fluctuations of the reflected power are obtained.
The results are compared with a lossy homogeneous medium and with a lossless inhomogeneous medium, and the regions where the effect of absorption is not essential are discussed.
Li, Ginny X H; Vogel, Christine; Choi, Hyungwon
2018-06-07
While tandem mass spectrometry can detect post-translational modifications (PTM) at the proteome scale, reported PTM sites are often incomplete and include false positives. Computational approaches can complement these datasets with additional predictions, but most available tools use prediction models pre-trained for a single PTM type by the developers, and it remains a difficult task to perform large-scale batch prediction for multiple PTMs with flexible user control, including the choice of training data. We developed an R package called PTMscape which predicts PTM sites across the proteome based on a unified and comprehensive set of descriptors of the physico-chemical microenvironment of modified sites, with additional downstream analysis modules to test enrichment of individual or pairs of PTMs in protein domains. PTMscape is flexible in its ability to process any major modification, such as phosphorylation and ubiquitination, while achieving sensitivity and specificity comparable to single-PTM methods and outperforming other multi-PTM tools. Applying this framework, we expanded proteome-wide coverage of five major PTMs affecting different residues by prediction, especially for lysine and arginine modifications. Using a combination of experimentally acquired sites (PSP) and newly predicted sites, we discovered that crosstalk among multiple PTMs occurs more frequently than by random chance in key protein domains such as histone, protein kinase, and RNA recognition motifs, spanning various biological processes such as RNA processing, DNA damage response, signal transduction, and regulation of the cell cycle. These results provide a proteome-scale analysis of crosstalk among major PTMs and can be easily extended to other types of PTM.
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler, causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing so-called Binomial Design algorithm, which alters the transmission order of Golay complementary waveforms and weights the returns, is proposed in an attempt to achieve enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexities of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay-Doppler map occupied by significant range sidelobes for given targets is also discussed. Numerical simulations comparing the performance of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure achieves better detection performance, in terms of lower sidelobes and higher Doppler resolution, in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
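The zero-sidelobe claim in the first sentence rests on the defining property of Golay complementary pairs: the autocorrelations of the two sequences sum to a delta of height 2N. This is easy to verify numerically; the recursion below is the standard pair construction, not the paper's Binomial Design algorithm:

```python
import numpy as np

def golay_pair(n_iter):
    """Build a Golay pair of length 2**n_iter by the standard recursion
    (a, b) -> (a|b, a|-b), starting from the trivial pair ([1], [1])."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)  # a length-8 complementary pair
acf = np.correlate(a, a, "full") + np.correlate(b, b, "full")
print(acf)  # zero at every lag except 2N = 16 at zero lag
```

Doppler shifts break the perfect cancellation between the two pulses, which is exactly the sidelobe problem the transmission-ordering algorithms above are designed to mitigate.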
Nan, Zhufen; Chi, Xuefen
2016-12-20
The IEEE 802.15.7 protocol suggests that channel access can be coordinated by the contention-based method of carrier sensing. However, the directionality of light and the randomness of diffuse reflection give rise to a serious imperfect carrier sense (ICS) problem [e.g., the hidden node (HN) and exposed node (EN) problems], which poses great challenges to realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffusely reflected light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes optical CSMA for the indoor VLC uplink communications system. Owing to the limited coverage of the transmitted optical signal, in our OHCPP, ENs within a transmitter's carrier sense region can be retained provided that they do not corrupt the ongoing communications. Moreover, because of the directionality of both light-emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive closed-form expressions approximating the outage probability and transmission capacity of VLC networks in the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of the LED.
2012-01-01
Background: To demonstrate the use of risk-benefit analysis for comparing multiple competing interventions in the absence of randomized trials, we applied this approach to the evaluation of five anticoagulants to prevent thrombosis in patients undergoing orthopedic surgery. Methods: Using a cost-effectiveness approach from a clinical perspective (i.e., risk-benefit analysis), we compared thromboprophylaxis with warfarin, low molecular weight heparin, unfractionated heparin, fondaparinux, or ximelagatran in patients undergoing major orthopedic surgery, with sub-analyses according to surgery type. Proportions and variances of the events defining risk (major bleeding) and benefit (thrombosis averted) were obtained through a meta-analysis and used to define beta distributions. Monte Carlo simulations were conducted and used to calculate incremental risks, benefits, and risk-benefit ratios. Finally, net clinical benefit was calculated for all replications across a range of risk-benefit acceptability thresholds, with a reference range obtained by estimating the case fatality rate (the ratio of thrombosis to bleeding). Results: The analysis showed that, compared with placebo, ximelagatran was superior to the other options, but the final results were influenced by the type of surgery, since ximelagatran was superior in total knee replacement but not in total hip replacement. Conclusions: Using simulation and economic techniques, we demonstrate a method that allows comparison of multiple competing interventions in the absence of randomized trials with multiple arms by determining the option with the best risk-benefit profile. It can be helpful in clinical decision making since it incorporates risk, benefit, and personal risk acceptance. PMID:22233221
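The simulation step described in the Methods can be sketched as follows. The event counts, the two hypothetical treatments, and the acceptability thresholds are invented for illustration; they are not taken from the study's meta-analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # Monte Carlo replications

# Hypothetical event counts (NOT the study's data):
# (events, patients) for major bleeding and thrombosis, per treatment.
treatments = {
    "A": {"bleed": (12, 1000), "thromb": (40, 1000)},
    "B": {"bleed": (20, 1000), "thromb": (25, 1000)},
}

# Beta draws for each event proportion (uniform Beta(1, 1) prior).
draws = {
    name: {ev: rng.beta(k + 1, m - k + 1, size=n) for ev, (k, m) in d.items()}
    for name, d in treatments.items()
}

# Incremental risk (extra bleeds) and benefit (thromboses averted) of B vs A.
d_risk = draws["B"]["bleed"] - draws["A"]["bleed"]
d_benefit = draws["A"]["thromb"] - draws["B"]["thromb"]

# Net clinical benefit over a range of acceptability thresholds mu
# (how many extra bleeds one averted thrombosis is worth).
for mu in (0.5, 1.0, 2.0):
    ncb = d_benefit - mu * d_risk
    print(f"mu={mu}: P(net benefit of B > 0) = {np.mean(ncb > 0):.3f}")
```

Sweeping the threshold mu reproduces the idea of an acceptability curve: the preferred option can change with how much risk a patient will accept per unit of benefit.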
Design and protocol of a randomized multiple behavior change trial: Make Better Choices 2 (MBC2).
Pellegrini, Christine A; Steglitz, Jeremy; Johnston, Winter; Warnick, Jennifer; Adams, Tiara; McFadden, H G; Siddique, Juned; Hedeker, Donald; Spring, Bonnie
2015-03-01
Suboptimal diet and an inactive lifestyle are among the most prevalent preventable causes of premature death. Interventions that target multiple behaviors are potentially efficient; however, the optimal way to initiate and maintain multiple health behavior changes is unknown. The Make Better Choices 2 (MBC2) trial aims to examine whether sustained healthful diet and activity change are best achieved by targeting diet and activity behaviors simultaneously or sequentially. Study design: Approximately 250 inactive adults with poor quality diet will be randomized to 3 conditions examining the best way to prescribe healthy diet and activity change. The 3 intervention conditions prescribe: 1) an increase in fruit and vegetable consumption (F/V+), decrease in sedentary leisure screen time (Sed-), and increase in physical activity (PA+) simultaneously (Simultaneous); 2) F/V+ and Sed- first, then sequentially adding PA+ (Sequential); or 3) a Stress Management Control that addresses stress, relaxation, and sleep. All participants will receive a smartphone application to self-monitor behaviors and regular coaching calls to help facilitate behavior change during the 9-month intervention. Healthy lifestyle change in fruit/vegetable and saturated fat intakes, sedentary leisure screen time, and physical activity will be assessed at 3, 6, and 9 months. MBC2 is a randomized mHealth intervention examining methods to maximize initiation and maintenance of multiple healthful behavior changes. Results from this trial will provide insight about an optimal technology-supported approach to promote improvement in diet and physical activity. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Friedman, Miriam; And Others
1987-01-01
Test performances of sophomore medical students on a pretest and final exam (under guessing and no-guessing instructions) were compared. Discouraging random guessing produced test information with improved test reliability and less distortion of item difficulty. More able examinees were less compliant than less able examinees. (Author/RH)
ERIC Educational Resources Information Center
Spring, Bonnie; Pagoto, Sherry; Pingitore, Regina; Doran, Neal; Schneider, Kristin; Hedeker, Don
2004-01-01
The authors compared simultaneous versus sequential approaches to multiple health behavior change in diet, exercise, and cigarette smoking. Female regular smokers (N = 315) randomized to 3 conditions received 16 weeks of behavioral smoking treatment, quit smoking at Week 5, and were followed for 9 months after quit date. Weight management was…
An assessment of re-randomization methods in bark beetle (Scolytidae) trapping bioassays
Christopher J. Fettig; Christopher P. Dabney; Stephen R. McKelvey; Robert R. Borys
2006-01-01
Numerous studies have explored the role of semiochemicals in the behavior of bark beetles (Scolytidae). Multiple funnel traps are often used to elucidate these behavioral responses. Sufficient sample sizes are obtained by using large numbers of traps to which treatments are randomly assigned once, or by frequent collection of trap catches and subsequent re-...
Schachtel, Bernard; Aspley, Sue; Shephard, Adrian; Shea, Timothy; Smith, Gary; Schachtel, Emily
2014-07-03
The sore throat pain model has been used by different clinical investigators to demonstrate the efficacy of acute analgesic drugs in single-dose randomized clinical trials. The model used here was designed to study the multiple-dose safety and efficacy of lozenges containing flurbiprofen at 8.75 mg. Adults (n=198) with moderate or severe acute sore throat and findings of pharyngitis on a Tonsillo-Pharyngitis Assessment (TPA) were randomly assigned to use either flurbiprofen 8.75 mg lozenges (n=101) or matching placebo lozenges (n=97) under double-blind conditions. Patients sucked one lozenge every three to six hours as needed, up to five lozenges per day, and rated symptoms on 100-mm scales: the Sore Throat Pain Intensity Scale (STPIS), the Difficulty Swallowing Scale (DSS), and the Swollen Throat Scale (SwoTS). Reductions in pain (lasting for three hours) and in difficulty swallowing and throat swelling (for four hours) were observed after a single dose of the flurbiprofen 8.75 mg lozenge (P<0.05 compared with placebo). After using multiple doses over 24 hours, flurbiprofen-treated patients experienced a 59% greater reduction in throat pain, 45% less difficulty swallowing, and 44% less throat swelling than placebo-treated patients (all P<0.01). There were no serious adverse events. Utilizing the sore throat pain model with multiple doses over 24 hours, flurbiprofen 8.75 mg lozenges were shown to be an effective, well-tolerated treatment for sore throat pain. Other pharmacologic actions (reduced difficulty swallowing and reduced throat swelling) and overall patient satisfaction from the flurbiprofen lozenges were also demonstrated in this multiple-dose implementation of the sore throat pain model. This trial was registered with ClinicalTrials.gov, registration number: NCT01048866, registration date: January 13, 2010.
Plow, Matthew; Bethoux, Francois; McDaniel, Corey; McGlynn, Mark; Marcus, Bess
2014-02-01
To investigate the feasibility and potential efficacy of a customized print-based intervention to promote physical activity and symptom self-management in women with multiple sclerosis. A randomly allocated two-group repeated measures design, with a delayed-treatment contact group serving as the control. Participants were randomized to receive the intervention immediately (n = 14) or at week 12 (n = 16). Outcome measures were administered at weeks 1, 12, and 24. Community-based, in a metropolitan area. Thirty women with multiple sclerosis. Prescribing a home-exercise program and following up with customized pamphlets, matched to participants' stage of readiness to change physical activity behavior and their physical activity barriers (e.g., encouraging self-management of symptoms). Physical Activity and Disability Survey-revised, Godin Leisure-Time Exercise Questionnaire, SF-12, Symptoms of Multiple Sclerosis Scale, and 6-minute walk test. Intent-to-treat analyses using mixed multivariate analysis of variance (MANOVA) were conducted on (1) physical activity levels and (2) health and function outcomes. The mixed MANOVAs for physical activity levels and for health and function outcomes indicated significant improvements in the immediate group compared with the delayed group (i.e., the condition by time interaction was significant; Wilks' λ = 0.59, F(2, 27) = 9.31, P = 0.001 and Wilks' λ = 0.70, F(4, 25) = 2.72, P = 0.052, respectively). The intervention had moderate to large effect sizes in improving physical activity levels (d = 0.63 to 0.89), perceptions of physical function (d = 0.63), and the 6-minute walk test (d = 0.86). This pilot study indicates that a customized print-based intervention shows promise in improving physical activity levels and health and function in women with multiple sclerosis.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence is proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
USDA-ARS?s Scientific Manuscript database
An intermittent fasting or calorie restriction diet has favorable effects in the mouse forms of multiple sclerosis (MS) and may provide additional anti-inflammatory and neuroprotective advantages beyond benefits obtained from weight loss alone. We conducted a pilot randomized controlled feeding stud...
ERIC Educational Resources Information Center
Davoudi, Mohammad; Chavosh, Milad
2016-01-01
The present paper aimed at investigating the relationship between listening self-efficacy and multiple intelligences of Iranian EFL learners. Initially, ninety intermediate male learners were selected randomly from among 20 intermediate classes in a Language Academy in Yazd. In order to assure the homogeneity of the participants in terms of…
Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?
ERIC Educational Resources Information Center
Reardon, Sean F.; Raudenbush, Stephen W.
2013-01-01
The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…
A rapid random-sampling method was used to relate densities of juvenile winter flounder to multiple scales of habitat variation in Narragansett Bay and two nearby coastal lagoons in Rhode Island. We used a 1-m beam trawl with attached video camera, continuous GPS track overlay, ...
A model for incomplete longitudinal multivariate ordinal data.
Liu, Li C
2008-12-30
In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the missing at random (MAR) assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at a given time point. Copyright 2008 John Wiley & Sons, Ltd.
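The Gauss-Hermite step used for integrating out the random effects can be illustrated on a single probit item with a normal random intercept. The item parameters below are toy values, not the paper's model:

```python
import math
import numpy as np

# Physicists' Gauss-Hermite nodes/weights for the weight function e^{-x^2}.
nodes, weights = np.polynomial.hermite.hermgauss(15)

def probit(z):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def marginal_prob(a=1.2, b=-0.3, sigma=1.0):
    """P(y=1) = integral of Phi(a*theta + b) over theta ~ N(0, sigma^2),
    computed by Gauss-Hermite quadrature with the change of variables
    theta = sqrt(2)*sigma*x."""
    total = 0.0
    for x, w in zip(nodes, weights):
        theta = math.sqrt(2.0) * sigma * x
        total += w * probit(a * theta + b)
    return total / math.sqrt(math.pi)

print(round(marginal_prob(), 4))
```

For a probit link this marginal has the closed form Phi(b / sqrt(1 + a^2 * sigma^2)), so the quadrature result can be checked exactly; in the multidimensional logistic case no closed form exists, which is why the paper's estimation relies on the quadrature.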
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is a better choice.
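A minimal sketch of fitting both error models to synthetic daily data. The generating model (a multiplicative lognormal error with assumed parameters a = 0.8, b = 1.1) is an illustration, not the satellite record analyzed in the letter:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two error models for a measurement y of truth x:
#   additive:        y = x + e
#   multiplicative:  y = a * x**b * eps, i.e. log y = log a + b log x + log eps
x = rng.gamma(2.0, 5.0, size=2000) + 0.1            # "true" daily precipitation
y = 0.8 * x ** 1.1 * rng.lognormal(0.0, 0.3, x.size)  # simulated measurement

# Additive model: systematic part = mean bias, random part = residual spread.
add_sys = np.mean(y - x)
add_rand = np.std(y - x - add_sys)

# Multiplicative model: fit log y = log a + b log x by least squares.
b_hat, loga_hat = np.polyfit(np.log(x), np.log(y), 1)
mult_rand = np.std(np.log(y) - (loga_hat + b_hat * np.log(x)))

print(f"additive: bias={add_sys:.2f}, sigma={add_rand:.2f}")
print(f"multiplicative: a={np.exp(loga_hat):.2f}, b={b_hat:.2f}, "
      f"sigma_log={mult_rand:.2f}")
```

Under this generating process the multiplicative fit recovers the parameters and a constant log-residual spread, while the additive residuals are heteroscedastic (their spread grows with x), which mirrors the "systematic errors leaking into random errors" weakness described above.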
Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis, but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance. PMID:26689369
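The nearest-neighbour idea behind the clustering-based approach (random attribute subsets, donate the neighbour's observed value, repeat for multiple imputations) can be sketched as follows. The data, subset size, and Euclidean distance are illustrative assumptions, not the MET datasets or the exact published procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

X = rng.normal(size=(50, 8))
X[rng.random(X.shape) < 0.1] = np.nan   # knock out ~10% of entries

def impute_once(X, n_attrs=3):
    """Fill each missing cell with the value from the nearest donor row,
    measured on a randomly chosen subset of the observed attributes."""
    Xi = X.copy()
    rows, cols = np.where(np.isnan(X))
    for r, c in zip(rows, cols):
        valid = [j for j in range(X.shape[1])
                 if j != c and not np.isnan(X[r, j])]
        attrs = rng.choice(valid, size=min(n_attrs, len(valid)),
                           replace=False)
        donors = np.where(~np.isnan(X[:, c]))[0]  # rows observed in column c
        d = np.nansum((X[donors][:, attrs] - X[r, attrs]) ** 2, axis=1)
        Xi[r, c] = X[donors[np.argmin(d)], c]
    return Xi

# Multiple imputation: repeat with fresh random attribute subsets.
imputations = [impute_once(X) for _ in range(5)]
print(sum(np.isnan(m).sum() for m in imputations))  # 0 remaining NaNs
```

Varying the random attribute subset across repetitions is what makes the five completed datasets differ, so downstream analyses can propagate the imputation uncertainty.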
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
Radiative transfer in multilayered random medium with laminar structure - Green's function approach
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1986-01-01
For a multilayered random medium with a laminar structure, a Green's function approach is introduced to obtain the emitted intensity due to an arbitrary point source. It is then shown that the approach is applicable to both active and passive remote sensing. In active remote sensing, the computed radar backscattering cross section for the multilayered medium includes the effects of both volume multiple scattering and surface multiple scattering at the layer boundaries. In passive remote sensing, the brightness temperature is obtained for arbitrary temperature profiles in the layers. As an illustration, the brightness temperature and reflectivity are calculated for a bounded layer and compared with results in the literature.
Weak convergence to isotropic complex SαS random measure.
Wang, Jun; Li, Yunmeng; Sang, Liheng
2017-01-01
In this paper, we prove that an isotropic complex symmetric α-stable (SαS) random measure can be approximated by a complex process constructed from integrals based on a Poisson process with random intensity.
Ensemble Feature Learning of Genomic Data Using Support Vector Machine
Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.
2016-01-01
The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, but mostly for classification, not gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models from randomly drawn bootstrap samples of the training set produces different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that an average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over a random forest based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters within the selected data. PMID:27304923
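The ensemble-RFE loop (bootstrap models, aggregated weight ranking, eliminate the weakest feature) can be sketched as follows. To keep the sketch dependency-free, a least-squares linear classifier stands in for the linear SVM, and the data are synthetic with two informative features:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 120, 10
X = rng.normal(size=(n, p))
y = np.sign(X[:, 0] + X[:, 1] + 0.2 * rng.normal(size=n))  # labels in {-1, 1}

def fit_weights(X, y):
    # Least-squares linear classifier as a stand-in for the linear SVM;
    # |w_j| plays the role of the SVM-RFE feature importance.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def ensemble_rfe(X, y, n_models=25, n_keep=2):
    features = list(range(X.shape[1]))
    while len(features) > n_keep:
        scores = np.zeros(len(features))
        for _ in range(n_models):
            idx = rng.integers(0, len(y), size=len(y))  # bootstrap sample
            scores += np.abs(fit_weights(X[idx][:, features], y[idx]))
        features.pop(int(np.argmin(scores)))  # drop the weakest feature
    return features

selected = sorted(ensemble_rfe(X, y))
print(selected)  # expected to recover the informative features 0 and 1
```

The aggregation across bootstrap models is the point: a feature's score reflects many resampled fits, not one possibly unlucky model, which is the same motivation as bagging in random forest.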
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by a column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, that the nonzero off-diagonal elements of P are equal within each row (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
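The first (MCMC) approach can be illustrated on a toy 3-state chain: the visit frequencies of one long random walk estimate the same fixed point as the power iteration p_{t+1}^T = p_t^T P. The matrix below is invented for illustration, far from the n ~ 10^7-10^9 regime the paper targets:

```python
import numpy as np

rng = np.random.default_rng(5)

# A small row-stochastic matrix P (toy stand-in for the web graph).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.1, 0.5],
              [0.5, 0.4, 0.1]])

# Power iteration: p_{t+1}^T = p_t^T P.
p = np.full(3, 1 / 3)
for _ in range(100):
    p = p @ P

# Markov chain Monte Carlo: empirical state occupation of one long walk.
steps, state = 100_000, 0
counts = np.zeros(3)
for _ in range(steps):
    state = rng.choice(3, p=P[state])
    counts[state] += 1

print(p, counts / steps)  # the two estimates agree up to sampling error
```

The walk needs no matrix-vector products at all, only row lookups, which is why the MCMC route stays feasible when multiplying by P is ruled out.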
Gunn, Hilary; Markevics, Sophie; Haas, Bernhard; Marsden, Jonathan; Freeman, Jennifer
2015-10-01
To evaluate the effectiveness of interventions in reducing falls and/or improving balance as a falls risk in multiple sclerosis (MS). Computer-based and manual searches included the following medical subject heading keywords: "Multiple Sclerosis AND accidental falls" OR "Multiple Sclerosis AND postural balance" OR "Multiple Sclerosis AND exercise" OR "Multiple Sclerosis AND physical/physio therapy" NOT animals. All literature published to November 2014 with available full-text details were included. Studies were reviewed against the PICO (participants, interventions, comparisons, outcomes) selection criteria: P, adults with MS; I, falls management/balance rehabilitation interventions; C, randomized/quasi-randomized studies comparing intervention with usual care or placebo control; O, falls outcomes and measures of balance. Fifteen articles of the original 529 search results were included. Two reviewers independently extracted data and assessed methodological quality using the Cochrane Risk of Bias tool. Random-effects meta-analysis indicated a small decrease in falls risk (risk ratio, .74), although the 95% confidence interval (CI) crossed 1 (95% CI, .12-4.38). The pooled standardized mean difference (SMD) for balance outcomes was .55 (95% CI, .35-.74). SMD varied significantly between exercise subgroupings; gait, balance, and functional training interventions yielded the greatest pooled effect size (ES) (SMD=.82; 95% CI, 0.55-1.10). There was a moderate positive correlation between program volume (min/wk) and ES (Cohen's d) (r=.70, P=.009), and a moderate negative correlation between program duration in weeks and ES (r=-.62, P=.03). Variations in interventions and outcomes and methodological limitations mean that results must be viewed with caution. This review suggests that balance may improve through exercise interventions, but that the magnitude of the improvements achieved in existing programs may not be sufficient to impact falls outcomes. 
Supporting participants to achieve an appropriate intensity of practice of highly challenging balance activities appears to be critical to maximizing effectiveness. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Schumacher, Karen L; Plano Clark, Vicki L; West, Claudia M; Dodd, Marylin J; Rabow, Michael W; Miaskowski, Christine
2014-11-01
Oncology patients with persistent pain treated in outpatient settings and their family caregivers have significant responsibility for managing pain medications. However, little is known about their practical day-to-day experiences with pain medication management. The aim was to describe day-to-day pain medication management from the perspectives of oncology outpatients and their family caregivers who participated in a randomized clinical trial of a psychoeducational intervention called the Pro-Self(©) Plus Pain Control Program. In this article, we focus on pain medication management by patients and family caregivers in the context of multiple complex health systems. We qualitatively analyzed audio-recorded intervention sessions that included extensive dialogue between patients, family caregivers, and nurses about pain medication management during the 10-week intervention. The health systems context for pain medication management included multiple complex systems for clinical care, reimbursement, and regulation of analgesic prescriptions. Pain medication management processes particularly relevant to this context were getting prescriptions and obtaining medications. Responsibilities that fell primarily to patients and family caregivers included facilitating communication and coordination among multiple clinicians, overcoming barriers to access, and serving as a final safety checkpoint. Significant effort was required of patients and family caregivers to insure safe and effective pain medication management. Health systems issues related to access to needed analgesics, medication safety in outpatient settings, and the effort expended by oncology patients and their family caregivers require more attention in future research and health-care reform initiatives. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cushing, Patrick Ryan
This study compared the performance of high school students on laboratory assessments. Thirty-four high school students who were enrolled in the second semester of a regular biology class or had completed the biology course the previous semester participated in this study. They were randomly assigned to examinations of two formats, performance-task and traditional multiple-choice, from two content areas, using a compound light microscope and diffusion. Students were directed to think-aloud as they performed the assessments. Additional verbal data were obtained during interviews following the assessment. The tape-recorded narrative data were analyzed for type and diversity of knowledge and skill categories, and percentage of in-depth processing demonstrated. While overall mean scores on the assessments were low, elicited statements provided additional insight into student cognition. Results indicated that a greater diversity of knowledge and skill categories was elicited by the two microscope assessments and by the two performance-task assessments. In addition, statements demonstrating in-depth processing were coded most frequently in narratives elicited during clinical interviews following the diffusion performance-task assessment. This study calls for individual teachers to design authentic assessment practices and apply them to daily classroom routines. Authentic assessment should be an integral part of the learning process and not merely an end result. In addition, teachers are encouraged to explicitly identify and model, through think-aloud methods, desired cognitive behaviors in the classroom.
Tortorella, C; Romano, R; Direnzo, V; Taurisano, P; Zoccolella, S; Iaffaldano, P; Fazio, L; Viterbo, R; Popolizio, T; Blasi, G; Bertolino, A; Trojano, M
2013-08-01
Load-related functional magnetic resonance imaging (fMRI) abnormalities of brain activity during performance of attention tasks have been described in definite multiple sclerosis (MS). No data are available in clinically isolated syndrome (CIS) suggestive of MS. The objective of this research is to evaluate in CIS patients the fMRI pattern of brain activation during an attention task and to explore the effect of increasing task load demand on neurofunctional modifications. Twenty-seven untreated CIS patients and 32 age- and sex-matched healthy controls (HCs) underwent fMRI while performing the Variable Attentional Control (VAC) task, a cognitive paradigm requiring increasing levels of attentional control processing. Random-effects models were used for statistical analyses of fMRI data. CIS patients had reduced accuracy and greater reaction time at the VAC task compared with HCs (p=0.007). On blood oxygenation level-dependent (BOLD)-fMRI, CIS patients had greater activity in the right parietal cortex (p=0.0004) compared with HCs. Furthermore, CIS patients had greater activity at the lower (p=0.05) and reduced activity at the greater (p=0.04) level of attentional control demand in the left putamen, compared with HCs. This study demonstrates the failure of attentional control processing in CIS. The load-related fMRI dysfunction of the putamen supports the role of basal ganglia in the failure of attention observed at the earliest stage of MS.
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
Circulating polymerase chain reaction chips utilizing multiple-membrane activation
NASA Astrophysics Data System (ADS)
Wang, Chih-Hao; Chen, Yi-Yu; Liao, Chia-Sheng; Hsieh, Tsung-Min; Luo, Ching-Hsing; Wu, Jiunn-Jong; Lee, Huei-Huang; Lee, Gwo-Bin
2007-02-01
This paper reports a new micromachined, circulating, polymerase chain reaction (PCR) chip for nucleic acid amplification. The PCR chip is comprised of a microthermal control module and a polydimethylsiloxane (PDMS)-based microfluidic control module. The microthermal control modules are formed with three individual heating and temperature-sensing sections, each modulating a specific set temperature for denaturation, annealing and extension processes, respectively. Micro-pneumatic valves and multiple-membrane activations are used to form the microfluidic control module to transport sample fluids through three reaction regions. Compared with other PCR chips, the new chip is more compact in size, requires less time for heating and cooling processes, and has the capability to randomly adjust time ratios and cycle numbers depending on the PCR process. Experimental results showed that detection genes for two pathogens, Streptococcus pyogenes (S. pyogenes, 777 bps) and Streptococcus pneumoniae (S. pneumoniae, 273 bps), can be successfully amplified using the new circulating PCR chip. The minimum number of thermal cycles to amplify the S. pyogenes DNA for slab gel electrophoresis is 20 cycles with an initial concentration of 42.5 pg µl⁻¹. Experimental data also revealed that a high reproducibility up to 98% could be achieved if the initial template concentration of the S. pyogenes was higher than 4 pg µl⁻¹. The preliminary results of the current paper were presented at the 19th IEEE International Conference on Micro Electro Mechanical Systems (IEEE MEMS 2006), Istanbul, Turkey, 22-26 January, 2006.
Coates, Peter S.; Prochazka, Brian G.; Ricca, Mark A.; Halstead, Brian J.; Casazza, Michael L.; Blomberg, Erik J.; Brussee, Brianne E.; Wiechman, Lief; Tebbenkamp, Joel; Gardner, Scott C.; Reese, Kerry P.
2018-01-01
Consideration of ecological scale is fundamental to understanding and managing avian population growth and decline. Empirically driven models for population dynamics and demographic processes across multiple spatial scales can be powerful tools to help guide conservation actions. Integrated population models (IPMs) provide a framework for better parameter estimation by unifying multiple sources of data (e.g., count and demographic data). Hierarchical structure within such models that include random effects allow for varying degrees of data sharing across different spatiotemporal scales. We developed an IPM to investigate Greater Sage-Grouse (Centrocercus urophasianus) on the border of California and Nevada, known as the Bi-State Distinct Population Segment. Our analysis integrated 13 years of lek count data (n > 2,000) and intensive telemetry (VHF and GPS; n > 350 individuals) data across 6 subpopulations. Specifically, we identified the most parsimonious models among varying random effects and density-dependent terms for each population vital rate (e.g., nest survival). Using a joint likelihood process, we integrated the lek count data with the demographic models to estimate apparent abundance and refine vital rate parameter estimates. To investigate effects of climatic conditions, we extended the model to fit a precipitation covariate for instantaneous rate of change (r). At a metapopulation extent (i.e. Bi-State), annual population rate of change λ (e^r) did not favor an overall increasing or decreasing trend through the time series. However, annual changes in λ were driven by changes in precipitation (one-year lag effect). At subpopulation extents, we identified substantial variation in λ and demographic rates. One subpopulation clearly decoupled from the trend at the metapopulation extent and exhibited relatively high risk of extinction as a result of low egg fertility. 
These findings can inform localized, targeted management actions for specific areas, and status of the species for the larger Bi-State.
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
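To make the WSPT ordering concrete, the sketch below ranks jobs by ascending processing-time-to-weight ratio and evaluates the total weighted quadratic completion time on a single machine. This is an illustrative simplification, not the paper's algorithm: the flow-shop machine structure and the consistency condition of WSPT-CC are omitted, and the job data are made up.

```python
# Illustrative WSPT rule: order jobs by ascending p/w ratio, then score the
# schedule by the total weighted quadratic completion time, sum of w * C^2.
def wspt_schedule(jobs):
    """jobs: list of (processing_time, weight) tuples."""
    return sorted(jobs, key=lambda pw: pw[0] / pw[1])

def total_weighted_quadratic_completion(schedule):
    total, t = 0.0, 0.0
    for p, w in schedule:
        t += p                  # completion time of this job on one machine
        total += w * t ** 2     # quadratic completion-time cost
    return total

jobs = [(3, 1), (2, 4), (5, 2)]
order = wspt_schedule(jobs)     # smallest p/w ratio first
cost = total_weighted_quadratic_completion(order)
```

With the toy data above, the job with ratio 2/4 = 0.5 is sequenced first, ahead of ratios 2.5 and 3.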
Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment
NASA Astrophysics Data System (ADS)
Piatnitski, A.; Zhizhina, E.
2017-11-01
The paper deals with the asymptotic properties of a symmetric random walk in a high contrast periodic medium in Z^d, d≥1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if in addition to the coordinate of the random walk in Z^d we introduce an extra variable that characterizes the position of the random walk inside the period then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove convergence in path space for this random walk.
Electrospun fibrinogen-PLA nanofibres for vascular tissue engineering.
Gugutkov, D; Gustavsson, J; Cantini, M; Salmeron-Sánchez, M; Altankov, G
2017-10-01
Here we report on the development of a new type of hybrid fibrinogen-polylactic acid (FBG-PLA) nanofibres (NFs) with improved stiffness, combining the good mechanical properties of PLA with the excellent cell recognition properties of native FBG. We were particularly interested in the dorsal and ventral cell response to the nanofibres' organization (random or aligned), using human umbilical endothelial cells (HUVECs) as a model system. Upon ventral contact with random NFs, the cells developed a stellate-like morphology with multiple projections. The well-developed focal adhesion complexes suggested a successful cellular interaction. However, time-lapse analysis shows significantly lowered cell movements, resulting in the cells traversing a relatively short distance in multiple directions. Conversely, an elongated cell shape and significantly increased cell mobility were observed in aligned NFs. To follow the dorsal cell response, artificial wounds were created on confluent cell layers previously grown on glass slides and covered with either random or aligned NFs. Time-lapse analysis showed significantly faster wound coverage (within 12 h) of HUVECs on aligned samples vs. almost absent directional migration on random ones. However, nitric oxide (NO) release shows that endothelial cells possess lowered functionality on aligned NFs compared to random ones, where significantly higher NO production was found. Collectively, our studies show that randomly organized NFs could support the endothelization of implants while aligned NFs would rather direct cell locomotion for guided neovascularization. Copyright © 2016 John Wiley & Sons, Ltd.
Cortese, Samuele; Ferrin, Maite; Brandeis, Daniel; Buitelaar, Jan; Daley, David; Dittmann, Ralf W.; Holtmann, Martin; Santosh, Paramala; Stevenson, Jim; Stringaris, Argyris; Zuddas, Alessandro; Sonuga-Barke, Edmund J.S.
2015-01-01
Objective The authors performed meta-analyses of randomized controlled trials to examine the effects of cognitive training on attention-deficit/hyperactivity disorder (ADHD) symptoms, neuropsychological deficits, and academic skills in children/adolescents with ADHD. Method The authors searched Pubmed, Ovid, Web of Science, ERIC, and CINAHAL databases through May 18, 2014. Data were aggregated using random-effects models. Studies were evaluated with the Cochrane risk of bias tool. Results Sixteen of 695 nonduplicate records were analyzed (759 children with ADHD). When all types of training were considered together, there were significant effects on total ADHD (standardized mean difference [SMD] = 0.37, 95% CI = 0.09–0.66) and inattentive symptoms (SMD = 0.47, 95% CI = 0.14–0.80) for reports by raters most proximal to the treatment setting (i.e., typically unblinded). These figures decreased substantially when the outcomes were provided by probably blinded raters (ADHD total: SMD = 0.20, 95% CI = 0.01–0.40; inattention: SMD = 0.32, 95% CI = −0.01 to 0.66). Effects on hyperactivity/impulsivity symptoms were not significant. There were significant effects on laboratory tests of working memory (verbal: SMD = 0.52, 95% CI = 0.24–0.80; visual: SMD = 0.47, 95% CI = 0.23–0.70) and parent ratings of executive function (SMD = 0.35, 95% CI = 0.08–0.61). Effects on academic performance were not statistically significant. There were no effects of working memory training, specifically on ADHD symptoms. Interventions targeting multiple neuropsychological deficits had large effects on ADHD symptoms rated by most proximal assessors (SMD = 0.79, 95% CI = 0.46–1.12). Conclusion Despite improving working memory performance, cognitive training had limited effects on ADHD symptoms according to assessments based on blinded measures. Approaches targeting multiple neuropsychological processes may optimize the transfer of effects from cognitive deficits to clinical symptoms. PMID:25721181
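The random-effects aggregation described above can be sketched with the classic DerSimonian-Laird estimator, a common choice for pooling standardized mean differences; the abstract does not name the specific estimator, so this is an assumption, and the effect sizes and variances below are made-up illustrations, not the review's data.

```python
# Hedged sketch of random-effects meta-analytic pooling (DerSimonian-Laird).
def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a random-effects model.

    Returns (pooled_effect, standard_error)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return pooled, se

# Illustrative inputs: three hypothetical SMDs with their sampling variances.
pooled, se = dersimonian_laird([0.3, 0.5, 0.2], [0.04, 0.09, 0.05])
```

A 95% CI then follows as pooled ± 1.96 × se, matching the interval style reported in the abstract.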
Blatti, Charles; Sinha, Saurabh
2016-07-15
Analysis of co-expressed gene sets typically involves testing for enrichment of different annotations or 'properties' such as biological processes, pathways, transcription factor binding sites, etc., one property at a time. This common approach ignores any known relationships among the properties or the genes themselves. It is believed that known biological relationships among genes and their many properties may be exploited to more accurately reveal commonalities of a gene set. Previous work has sought to achieve this by building biological networks that combine multiple types of gene-gene or gene-property relationships, and performing network analysis to identify other genes and properties most relevant to a given gene set. Most existing network-based approaches for recognizing genes or annotations relevant to a given gene set collapse information about different properties to simplify (homogenize) the networks. We present a network-based method for ranking genes or properties related to a given gene set. Such related genes or properties are identified from among the nodes of a large, heterogeneous network of biological information. Our method involves a random walk with restarts, performed on an initial network with multiple node and edge types that preserve more of the original, specific property information than current methods that operate on homogeneous networks. In this first stage of our algorithm, we find the properties that are the most relevant to the given gene set and extract a subnetwork of the original network, comprising only these relevant properties. We then re-rank genes by their similarity to the given gene set, based on a second random walk with restarts, performed on the above subnetwork. We demonstrate the effectiveness of this algorithm for ranking genes related to Drosophila embryonic development and aggressive responses in the brains of social animals. DRaWR was implemented as an R package available at veda.cs.illinois.edu/DRaWR. 
blatti@illinois.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
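The core iteration behind rankings like DRaWR's is a random walk with restarts, which can be sketched as a simple power iteration; the tiny three-node graph and restart probability below are illustrative placeholders, not the paper's heterogeneous network or tuned parameters.

```python
# Minimal random-walk-with-restarts (RWR) sketch: iterate
# p <- (1 - r) * W^T p + r * seeds until the scores stabilize.
def rwr(adj, restart, seeds, iters=200):
    """adj: row-stochastic transitions {node: [(neighbor, prob), ...]};
    seeds: restart distribution {node: prob}. Returns stationary scores."""
    nodes = list(adj)
    p = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: restart * seeds.get(n, 0.0) for n in nodes}
        for n, mass in p.items():
            for nbr, w in adj[n]:
                nxt[nbr] += (1 - restart) * mass * w   # spread walker mass
        p = nxt
    return p

# Toy graph: a - b - c chain, restarting at the "gene set" node a.
adj = {"a": [("b", 1.0)], "b": [("a", 0.5), ("c", 0.5)], "c": [("b", 1.0)]}
scores = rwr(adj, restart=0.3, seeds={"a": 1.0})
top = max(scores, key=scores.get)
```

Nodes closer to the seed accumulate more stationary mass, which is what makes the scores usable as a relevance ranking.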
Bühnemann, Claudia; Li, Simon; Yu, Haiyue; Branford White, Harriet; Schäfer, Karl L; Llombart-Bosch, Antonio; Machado, Isidro; Picci, Piero; Hogendoorn, Pancras C W; Athanasou, Nicholas A; Noble, J Alison; Hassan, A Bassim
2014-01-01
Driven by genomic somatic variation, tumour tissues are typically heterogeneous, yet unbiased quantitative methods are rarely used to analyse heterogeneity at the protein level. Motivated by this problem, we developed automated image segmentation of images of multiple biomarkers in Ewing sarcoma to generate distributions of biomarkers between and within tumour cells. We further integrate high dimensional data with patient clinical outcomes utilising random survival forest (RSF) machine learning. Using material from cohorts of genetically diagnosed Ewing sarcoma with EWSR1 chromosomal translocations, confocal images of tissue microarrays were segmented with level sets and watershed algorithms. Each cell nucleus and cytoplasm were identified in relation to DAPI and CD99, respectively, and protein biomarkers (e.g. Ki67, pS6, Foxo3a, EGR1, MAPK) localised relative to nuclear and cytoplasmic regions of each cell in order to generate image feature distributions. The image distribution features were analysed with RSF in relation to known overall patient survival from three separate cohorts (185 informative cases). Variation in pre-analytical processing resulted in elimination of a high number of non-informative images that had poor DAPI localisation or biomarker preservation (67 cases, 36%). The distribution of image features for biomarkers in the remaining high quality material (118 cases, 104 features per case) were analysed by RSF with feature selection, and performance assessed using internal cross-validation, rather than a separate validation cohort. A prognostic classifier for Ewing sarcoma with low cross-validation error rates (0.36) was comprised of multiple features, including the Ki67 proliferative marker and a sub-population of cells with low cytoplasmic/nuclear ratio of CD99. 
Through elimination of bias, the evaluation of high-dimensionality biomarker distribution within cell populations of a tumour using random forest analysis in quality controlled tumour material could be achieved. Such an automated and integrated methodology has potential application in the identification of prognostic classifiers based on tumour cell heterogeneity.
Multiple Scattering in Planetary Regoliths Using Incoherent Interactions
NASA Astrophysics Data System (ADS)
Muinonen, K.; Markkanen, J.; Vaisanen, T.; Penttilä, A.
2017-12-01
We consider scattering of light by a planetary regolith using novel numerical methods for discrete random media of particles. Understanding the scattering process is of key importance for spectroscopic, photometric, and polarimetric modeling of airless planetary objects, including radar studies. In our modeling, the size of the spherical random medium can range from microscopic to macroscopic sizes, whereas the particles are assumed to be of the order of the wavelength in size. We extend the radiative transfer and coherent backscattering method (RT-CB) to the case of dense packing of particles by adopting the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path. Furthermore, we replace the far-field interactions of the RT-CB method with rigorous interactions facilitated by the Superposition T-matrix method (STMM). This gives rise to a new RT-RT method, radiative transfer with reciprocal interactions. For microscopic random media, we then compare the new results to asymptotically exact results computed using the STMM, succeeding in the numerical validation of the new methods. Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.
Selecting materialized views using random algorithm
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple possibly heterogeneous autonomous distributed databases. The information stored at the data warehouse is in the form of views referred to as materialized views. The selection of the materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. So in this paper, we develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time. We call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using random algorithms are presented. The genetic algorithm is applied to the materialized view selection problem. However, as the genetic process evolves, legal solutions become increasingly difficult to produce, so many candidate solutions are eliminated and the time needed to generate valid solutions grows. Therefore, an improved algorithm is presented in this paper, combining simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments are conducted to test the effectiveness and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
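The simulated-annealing component that helps the genetic search escape stagnation can be sketched as below. The cost model is a toy placeholder (maintenance cost plus a penalty when the response-time constraint is violated), not the paper's cost graph, and the parameter values are illustrative assumptions.

```python
# Hedged sketch of simulated annealing for view selection: flip one candidate
# view in or out per step, always accept improvements, and accept worse
# selections with Boltzmann probability exp(-delta / temperature).
import math
import random

def anneal(candidates, cost, iters=500, t0=10.0, alpha=0.99, seed=0):
    """candidates: number of candidate views; cost: function on a frozenset
    of selected view indices. Returns the best selection found."""
    rng = random.Random(seed)
    cur = frozenset(i for i in range(candidates) if rng.random() < 0.5)
    best, best_c, t = cur, cost(cur), t0
    for _ in range(iters):
        i = rng.randrange(candidates)
        nxt = cur ^ {i}                        # toggle one view
        delta = cost(nxt) - cost(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = nxt
        if cost(cur) < best_c:
            best, best_c = cur, cost(cur)
        t *= alpha                             # cool the temperature
    return best

# Toy cost: maintenance grows with the views kept, query time shrinks with
# them, and a hard response-time bound is enforced as a large penalty.
def toy_cost(sel):
    maintenance = 2.0 * len(sel)
    query_time = 20.0 / (1 + len(sel))
    return maintenance + (100.0 if query_time > 8.0 else 0.0)

best = anneal(candidates=6, cost=toy_cost)
```

In this toy setting the optimum keeps exactly two views: fewer violates the response-time bound, more only adds maintenance cost.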
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
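The probing-depth question above can be sketched as a minimal Monte Carlo loop: sample many hypothetical burial depths and read off the depth that would reach a chosen fraction of victims. The log-normal burial-depth distribution and its parameters are illustrative assumptions, not values from the study.

```python
# Monte Carlo sketch: estimate the probing depth covering a target fraction
# of simulated burial depths drawn from an assumed log-normal distribution.
import random

def probing_depth(quantile, n=100_000, mu=0.0, sigma=0.5, seed=42):
    """Return the depth (metres) reaching `quantile` of simulated victims."""
    rng = random.Random(seed)
    depths = sorted(rng.lognormvariate(mu, sigma) for _ in range(n))
    return depths[int(quantile * n)]

d90 = probing_depth(0.90)   # depth reaching ~90% of simulated burials
```

The same pattern generalizes to the resuscitation-time question: replace the sampled quantity with survival-relevant random variables and read the optimum off the simulated distribution.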
Hu, Bin; Yang, Guohua; Zhao, Weixing; Zhang, Yingjiao; Zhao, Jindong
2007-03-01
MreB is a bacterial actin that plays important roles in determination of cell shape and chromosome partitioning in Escherichia coli and Caulobacter crescentus. In this study, the mreB gene from the filamentous cyanobacterium Anabaena sp. PCC 7120 was inactivated. Although the mreB null mutant showed a drastic change in cell shape, its growth rate, cell division and the filament length were unaltered. Thus, MreB in Anabaena maintains cell shape but is not required for chromosome partitioning. The wild type and the mutant had eight and 10 copies of chromosomes per cell respectively. We demonstrated that DNA content in two daughter cells after cell division in both strains was not always identical. The ratios of DNA content in two daughter cells had a Gaussian distribution with a standard deviation much larger than a value expected if the DNA content in two daughter cells were identical, suggesting that chromosome partitioning is a random process. The multiple copies of chromosomes in cyanobacteria are likely required for random chromosome partitioning during cell division.
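The random-partitioning picture can be illustrated with a simple simulation: if each of c chromosome copies independently goes to either daughter cell (a binomial model, assumed here purely for illustration), the daughter-to-daughter DNA ratio spreads well beyond a deterministic 1:1 split.

```python
# Sketch of random chromosome partitioning: each copy independently assigned
# to one of two daughter cells; report the min/max DNA ratio per division.
import random

def partition_ratios(copies=8, divisions=20_000, seed=1):
    rng = random.Random(seed)
    ratios = []
    for _ in range(divisions):
        d1 = sum(rng.random() < 0.5 for _ in range(copies))  # copies to daughter 1
        d2 = copies - d1
        if d1 and d2:            # skip the rare division leaving a daughter empty
            ratios.append(min(d1, d2) / max(d1, d2))
    return ratios

ratios = partition_ratios()
mean_ratio = sum(ratios) / len(ratios)   # well below 1.0 under random partitioning
```

Under equal partitioning the ratio would always be 1.0; the spread produced here mirrors the broad Gaussian-like distribution of daughter-cell DNA ratios reported above.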
High resolution identity testing of inactivated poliovirus vaccines
Mee, Edward T.; Minor, Philip D.; Martin, Javier
2015-01-01
Background Definitive identification of poliovirus strains in vaccines is essential for quality control, particularly where multiple wild-type and Sabin strains are produced in the same facility. Sequence-based identification provides the ultimate in identity testing and would offer several advantages over serological methods. Methods We employed random RT-PCR and high throughput sequencing to recover full-length genome sequences from monovalent and trivalent poliovirus vaccine products at various stages of the manufacturing process. Results All expected strains were detected in previously characterised products and the method permitted identification of strains comprising as little as 0.1% of sequence reads. Highly similar Mahoney and Sabin 1 strains were readily discriminated on the basis of specific variant positions. Analysis of a product known to contain incorrect strains demonstrated that the method correctly identified the contaminants. Conclusion Random RT-PCR and shotgun sequencing provided high resolution identification of vaccine components. In addition to the recovery of full-length genome sequences, the method could also be easily adapted to the characterisation of minor variant frequencies and distinction of closely related products on the basis of distinguishing consensus and low frequency polymorphisms. PMID:26049003
Continuous operation of four-state continuous-variable quantum key distribution system
NASA Astrophysics Data System (ADS)
Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2016-10-01
We report on the development of a continuous-variable quantum key distribution (CV-QKD) system that is based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source with a wavelength of 1550 nm and a repetition rate of 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling-cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The system utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]: Alice randomly sends one of the four states {|±α⟩, |±iα⟩}, and Bob randomly performs an x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
Gong, Zheng; Chen, Tianrun; Ratilal, Purnima; Makris, Nicholas C
2013-11-01
An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrowband low-frequency acoustic field fluctuations after propagation through such a waveguide is shown to decay with range to the -1/2 power beyond roughly 1 km and to decrease with increasing internal wave energy, consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period, which determines the variance reduction achievable by stationary averaging.
Conductance Quantization in Resistive Random Access Memory
NASA Astrophysics Data System (ADS)
Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming
2015-10-01
The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performance, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interfaces, and multiple RS mechanisms, including the formation/rupture of the nanoscale to atomic-sized conductive filament (CF) incorporated in the RS layer. The conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to investigate the RS mechanism in depth at the mesoscopic scale. In this review paper, the operating principles of RRAM are introduced first, followed by a summary of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material systems. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.
Nonconvergence of the Wang-Landau algorithms with multiple random walkers.
Belardinelli, R E; Pereyra, V D
2016-05-01
This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as of the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time, t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the similar version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, increasing the number of walkers does not by itself guarantee convergence.
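The 1/√m averaging behaviour can be illustrated with a minimal sketch (this is plain hit-or-miss Monte Carlo, not the WL algorithm itself, and all parameters are illustrative): m independent "walkers" each estimate π, and the RMS error of their average falls roughly as 1/√m while the walkers remain statistically independent.

```python
import random

PI = 3.141592653589793

def estimate_pi(n_samples, rng):
    """One 'walker': hit-or-miss Monte Carlo estimate of pi on the unit square."""
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n_samples))
    return 4.0 * hits / n_samples

def averaged_error(m, n_samples=1000, n_trials=150, seed=0):
    """RMS error of the average over m independent walkers."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(n_trials):
        avg = sum(estimate_pi(n_samples, rng) for _ in range(m)) / m
        sq += (avg - PI) ** 2
    return (sq / n_trials) ** 0.5

# The error of the m-walker average shrinks roughly as 1/sqrt(m); the paper's
# point is that for WL-type walkers this scaling breaks down above m_x.
```

In the paper's setting the saturation above m_x arises because the walkers' errors stop being independent, which this idealized sketch deliberately does not model.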
Doidge, James C
2018-02-01
Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.
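As a hedged sketch of the inverse probability-weighting idea (illustrative only, not the paper's method; all names and parameters are assumptions): when a responsiveness-like auxiliary variable predicts whether an outcome is observed, weighting each observed outcome by the inverse of its estimated response probability removes the bias of a complete-case mean.

```python
import random

def ipw_mean(y, observed, group):
    """Inverse probability-weighted mean: each observed outcome is weighted
    by 1 / Pr(observed | auxiliary group), with the probabilities estimated
    empirically within each group."""
    p = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g]
        p[g] = sum(observed[i] for i in idx) / len(idx)
    num = sum(y[i] / p[group[i]] for i in range(len(y)) if observed[i])
    den = sum(1.0 / p[group[i]] for i in range(len(y)) if observed[i])
    return num / den

# Simulated cohort: responsive participants (group True) have higher outcomes
# and are far more likely to be observed, biasing the complete-case mean.
rng = random.Random(42)
n = 20000
group = [rng.random() < 0.5 for _ in range(n)]          # responsiveness proxy
y = [(2.0 if g else 1.0) + rng.gauss(0.0, 0.5) for g in group]
observed = [rng.random() < (0.9 if g else 0.4) for g in group]

cc_mean = sum(yi for yi, o in zip(y, observed) if o) / sum(observed)
ipw = ipw_mean(y, observed, group)
# True mean is 1.5; the complete-case mean is biased upward, while the
# weighted mean recovers a value close to 1.5.
```

The weights only remove bias when, within levels of the auxiliary variable, data are missing at random, which is exactly why observed responsiveness is valuable as an auxiliary variable.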
Missing data and multiple imputation in clinical epidemiological research.
Pedersen, Alma B; Mikkelsen, Ellen M; Cronin-Fenton, Deirdre; Kristensen, Nickolaj R; Pham, Tra My; Pedersen, Lars; Petersen, Irene
2017-01-01
Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data.
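After multiple imputation, the m imputation-specific estimates are conventionally combined with Rubin's rules: the pooled estimate is the mean of the per-imputation estimates, and the total variance adds the within-imputation variance to an inflated between-imputation variance. A minimal sketch with hypothetical values:

```python
import statistics

def pool_rubin(estimates, variances):
    """Rubin's rules: pool m imputation-specific point estimates and their
    squared standard errors into one estimate and one total variance."""
    m = len(estimates)
    q_bar = statistics.fmean(estimates)          # pooled point estimate
    w = statistics.fmean(variances)              # within-imputation variance
    b = statistics.variance(estimates)           # between-imputation variance
    t = w + (1 + 1 / m) * b                      # total variance
    return q_bar, t

# Five hypothetical imputation-specific estimates with equal variances 0.04:
est, total_var = pool_rubin([1.1, 0.9, 1.0, 1.2, 0.8], [0.04] * 5)
# est is 1.0; total_var is 0.04 + 1.2 * 0.025 = 0.07.
```

The between-imputation term is what "accounts for the uncertainty associated with missing data": a single imputation would report only the within-imputation variance and thus overstate precision.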
On the mapping associated with the complex representation of functions and processes.
NASA Technical Reports Server (NTRS)
Harger, R. O.
1972-01-01
The mapping between function spaces that is implied by the representation of a real 'bandpass' function by a complex 'lowpass' function is exhibited explicitly. The discussion is extended to the representation of stationary random processes, where the mapping is between spaces of random processes. This approach clarifies the nature of the complex representation, especially in the case of random processes, and in addition yields the properties of the complex representation.
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
NASA Astrophysics Data System (ADS)
Bakhtiar, Nurizatul Syarfinas Ahmad; Abdullah, Farah Aini; Hasan, Yahya Abu
2017-08-01
In this paper, we consider the effect of a random field on the pulsating and snaking solitons in dissipative systems described by the one-dimensional cubic-quintic complex Ginzburg-Landau equation (cqCGLE). The dynamical behaviour was simulated by adding a random field to the initial pulse. We then solve the equation numerically, fixing the initial amplitude profile for the pulsating and snaking solitons without losing any generality. In order to create the random field, we choose 0 ≤ ɛ ≤ 1.0. As a result, multiple soliton trains are formed when the random field is applied to a pulse-like initial profile for the parameters of the pulsating and snaking solitons. The results also show the effects of varying the random field on the transient energy peaks of pulsating and snaking solitons.
A hybrid-type quantum random number generator
NASA Astrophysics Data System (ADS)
Hai-Qiang, Ma; Wu, Zhu; Ke-Jin, Wei; Rui-Xue, Li; Hong-Wei, Liu
2016-05-01
This paper proposes a well-performing hybrid-type truly quantum random number generator based on the time interval between two independent single-photon detection signals. The scheme is practical and intuitive, and generates its initial random number sources from a combination of multiple existing random number sources. A time-to-amplitude converter and a multichannel analyzer are used for qualitative analysis to demonstrate that each and every step is random. Furthermore, a carefully designed data acquisition system is used to obtain a high-quality random sequence. Our scheme is simple and shows that the random number bit rate can be dramatically increased to satisfy practical requirements. Project supported by the National Natural Science Foundation of China (Grant Nos. 61178010 and 11374042), the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China, and the Fundamental Research Funds for the Central Universities of China (Grant No. bupt2014TS01).
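One simple way to turn inter-detection time intervals into unbiased bits, shown here as a hedged illustration rather than the paper's exact scheme, is to compare two successive intervals and emit a bit according to which is longer; for i.i.d. intervals both orderings are equally likely, so the output is unbiased regardless of the interval distribution.

```python
import random

def bits_from_intervals(intervals):
    """Derive unbiased bits by comparing successive inter-detection intervals
    t1, t2: emit 1 if t1 > t2, 0 if t1 < t2, and discard ties. For i.i.d.
    intervals the two orderings are equally likely."""
    bits = []
    for i in range(0, len(intervals) - 1, 2):
        t1, t2 = intervals[i], intervals[i + 1]
        if t1 != t2:
            bits.append(1 if t1 > t2 else 0)
    return bits

# Simulated Poissonian photon detections: exponential inter-arrival times.
rng = random.Random(7)
intervals = [rng.expovariate(1.0) for _ in range(100000)]
bits = bits_from_intervals(intervals)
# The fraction of ones should be close to 0.5.
```

This comparison step is a von Neumann-style debiasing; a real device would still need the downstream acquisition and post-processing the abstract describes to reach high bit rates.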
Straudi, Sofia; Manfredini, Fabio; Lamberti, Nicola; Zamboni, Paolo; Bernardi, Francesco; Marchetti, Giovanna; Pinton, Paolo; Bonora, Massimo; Secchiero, Paola; Tisato, Veronica; Volpato, Stefano; Basaglia, Nino
2017-02-27
Gait and mobility impairments affect the quality of life (QoL) of patients with progressive multiple sclerosis (MS). Robot-assisted gait training (RAGT) is an effective rehabilitative treatment, but evidence of its superiority compared to other options is lacking. Furthermore, the response to rehabilitation is multidimensional, person-specific and possibly involves functional reorganization processes. The aims of this study are: (1) to test the effectiveness of RAGT, compared to conventional therapy (CT), on gait speed, mobility, balance, fatigue and QoL in progressive MS and (2) to explore changes in clinical and circulating biomarkers of neural plasticity. This is a parallel-group, randomized controlled trial with the assessor blinded to the group allocation of participants. Ninety-eight (49 per arm) progressive MS patients (EDSS scale 6-7) will be randomly assigned to receive twelve 2-h training sessions over a 4-week period (three sessions/week) of either: (1) RAGT on a robot-driven gait orthosis (Lokomat, Hocoma, Switzerland), with the training parameters (torque of the knee and hip drives, treadmill speed, body weight support) set during the first session and progressively adjusted as training progresses; or (2) individual conventional physiotherapy focusing on over-ground walking training performed with the habitual walking device. The same assessors will perform outcome measurements at four time points: baseline (before the first intervention session); intermediate (after six training sessions); end of treatment (after the completion of 12 sessions); and follow-up (3 months after the end of the training program). The primary outcome is gait speed, assessed by the Timed 25-Foot Walk Test.
We will also assess walking endurance, balance, depression, fatigue and QoL, as well as instrumental laboratory markers (muscle metabolism, cerebral venous hemodynamics, cortical activation) and circulating laboratory markers (rare circulating cell populations, pro- and anti-inflammatory cytokines/chemokines, growth factors, neurotrophic factors, coagulation factors, other plasma proteins suggested by transcriptomic analysis, and metabolic parameters). The RAGT training is expected to improve mobility compared to the active control intervention in progressive MS. Unique to this study is the analysis of various potential markers of plasticity in relation to clinical outcomes. ClinicalTrials.gov, identifier: NCT02421731. Registered on 19 January 2015 (retrospectively registered).
Brookman-Frazee, Lauren; Stahmer, Aubyn C
2018-05-09
The Centers for Disease Control (2018) estimates that 1 in 59 children has autism spectrum disorder (ASD), and the annual cost of ASD in the U.S. is estimated to be $236 billion. Evidence-based interventions have been developed and demonstrate effectiveness in improving child outcomes. However, research on generalizable methods to scale up these practices in the multiple service systems caring for these children has been limited, and such research is critical to meet this growing public health need. This project includes two coordinated studies testing the effectiveness of the Translating Evidence-based Interventions (EBI) for ASD: Multi-Level Implementation Strategy (TEAMS) model. TEAMS focuses on improving implementation leadership, organizational climate, and provider attitudes and motivation in order to improve two key implementation outcomes, provider training completion and intervention fidelity, and subsequent child outcomes. The TEAMS Leadership Institute applies implementation leadership strategies, and TEAMS Individualized Provider Strategies for training applies motivational interviewing strategies, to facilitate provider and organizational behavior change. A cluster-randomized implementation/effectiveness hybrid type 3 trial with a dismantling design will be used to understand the effectiveness of TEAMS and the mechanisms of change across settings and participants. Study #1 will test the TEAMS model with AIM HI (An Individualized Mental Health Intervention for ASD) in publicly funded mental health services. Study #2 will test TEAMS with CPRT (Classroom Pivotal Response Teaching) in education settings.
Thirty-seven mental health programs and 37 school districts will be randomized, stratified by county and study, to one of four groups (Standard Provider Training Only, Standard Provider Training + Leader Training, Enhanced Provider Training, Enhanced Provider Training + Leader Training) to test the effectiveness of combining standard, EBI-specific training with the two TEAMS modules individually and together on multiple implementation outcomes. Implementation outcomes including provider training completion, fidelity (coded by observers blind to group assignment) and child behavior change will be examined for 295 mental health providers, 295 teachers, and 590 children. This implementation intervention has the potential to increase quality of care for ASD in publicly funded settings by improving effectiveness of intervention implementation. The process and modules will be generalizable to multiple service systems, providers, and interventions, providing broad impact in community services. This study is registered with Clinicaltrials.gov ( NCT03380078 ). Registered 20 December 2017, retrospectively registered.
A Comprehensive Lifestyle Randomized Clinical Trial: Design and Initial Patient Experience.
Arun, Banu; Austin, Taylor; Babiera, Gildy V; Basen-Engquist, Karen; Carmack, Cindy L; Chaoul, Alejandro; Cohen, Lorenzo; Connelly, Lisa; Haddad, Robin; Harrison, Carol; Li, Yisheng; Mallaiah, Smitha; Nagarathna, Raghuram; Parker, Patricia A; Perkins, George H; Reuben, James M; Shih, Ya-Chen Tina; Spelman, Amy; Sood, Anil; Yang, Peiying; Yeung, Sai-Ching J
2017-03-01
Although epidemiological research demonstrates that there is an association between lifestyle factors and risk of breast cancer recurrence, progression of disease, and mortality, no comprehensive lifestyle change clinical trials have been conducted to determine if changing multiple risk factors leads to changes in biobehavioral processes and clinical outcomes in women with breast cancer. This article describes the design, feasibility, adherence to the intervention and data collection, and patient experience of a comprehensive lifestyle change clinical trial (CompLife). CompLife is a randomized, controlled trial of a multiple-behavior intervention focusing on diet, exercise, and mind-body practice along with behavioral counseling to support change. The initial exposure to the intervention takes place during the 4 to 6 weeks of radiotherapy (XRT) for women with stage III breast cancer and then across the subsequent 12 months. The intervention group will have 42 hours of in-person lifestyle counseling during XRT (7-10 hours a week) followed by up to 30 hours of counseling via video connection for the subsequent 12 months (weekly sessions for 6 months and then monthly for 6 months). The primary outcome is disease-free survival. Multiple secondary outcomes are being evaluated, including: (1) biological pathways; (2) overall survival; (3) patient-reported outcomes; (4) dietary patterns/fitness levels, anthropometrics, and body composition; and (5) economic outcomes. Qualitative data of the patient experience in the trial is collected from exit interviews, concluding remarks, direct email correspondences, and web postings from patients. Fifty-five patients have been recruited and randomized to the trial to date. Accrual of eligible patients is high (72%) and dropout rates extremely low (5%). Attendance to the in-person sessions is high (95% attending greater than 80% of sessions) as well as to the 30 hours of video counseling (88% attending more than 70% of sessions). 
Adherence to components of the behavior change intervention is high, and compliance with the intensive amount of data collection is exceptional. Qualitative data collected from the participants reveal testimonials supporting the importance of the comprehensive nature of the intervention, especially the mind-body/mindfulness component and social support, and meaningful lifestyle transformations. Conducting a comprehensive, multicomponent lifestyle change clinical trial for women with breast cancer was feasible, and collection of biobehavioral outcomes was successful. Adherence to behavior change was high and the patient experience was overwhelmingly positive.
ERIC Educational Resources Information Center
Pawl, Andrew; Teodorescu, Raluca E.; Peterson, Joseph D.
2013-01-01
We have developed simple data-mining algorithms to assess the consistency and the randomness of student responses to problems consisting of multiple true or false statements. In this paper we describe the algorithms and use them to analyze data from introductory physics courses. We investigate statements that emerge as outliers because the class…
ERIC Educational Resources Information Center
Stuart, Elizabeth A.; Warkentien, Siri; Jo, Booil
2011-01-01
The purpose of the current project is to explore the use of propensity scores to estimate the effects of interventions within randomized control trials, accounting for varying levels of implementation or fidelity. This work extends that of Jo and Stuart (2009) to settings with multiple or continuous measures of implementation. Rather than focus…
ERIC Educational Resources Information Center
Adams, Catherine; Lockton, Elaine; Gaile, Jacqueline; Earl, Gillian; Freed, Jenny
2012-01-01
Background: Speech-language interventions are often complex in nature, involving multiple observations, variable outcomes and individualization in treatment delivery. The accepted procedure associated with randomized controlled trials (RCT) of such complex interventions is to develop and implement a manual of intervention in order that reliable…
ERIC Educational Resources Information Center
Verde, Pablo E.; Ohmann, Christian
2015-01-01
Researchers may have multiple motivations for combining disparate pieces of evidence in a meta-analysis, such as generalizing experimental results or increasing the power to detect an effect that a single study is not able to detect. However, while in meta-analysis, the main question may be simple, the structure of evidence available to answer it…
School Happiness and School Success: An Investigation across Multiple Grade Levels.
ERIC Educational Resources Information Center
Parish, Joycelyn Gay; Parish, Thomas S.; Batt, Steve
A total of 572 randomly selected sixth-grade students and 908 randomly selected ninth-grade students from a large metropolitan school district in the Midwest were asked to complete a series of survey questions designed to measure the extent to which they were happy while at school, as well as questions concerning the extent to which they treated…
Modeling of synchronization behavior of bursting neurons at nonlinearly coupled dynamical networks.
Çakir, Yüksel
2016-01-01
Synchronization behaviors of bursting neurons coupled through electrical and dynamic chemical synapses are investigated. The Izhikevich model is used with random and small-world networks of bursting neurons. Various currents, consisting of diffusive electrical and time-delayed dynamic chemical synapses, are used in the simulations to investigate the influences of synaptic currents and couplings on the synchronization behavior of bursting neurons. The effects of parameters such as time delay, inhibitory synaptic strength, and decay time on synchronization behavior are investigated. It is observed that in random networks with no delay, bursting synchrony is established with the electrical synapse alone, while single-spike synchrony is observed with hybrid coupling. In small-world networks with no delay, periodic bursting behavior with multiple spikes is observed when only chemical or only electrical synapses exist. Single-spike and multiple-spike bursting are established with hybrid couplings. With zero time delay, a decrease in the synchronization measure is observed in random networks as the decay time is increased. For synaptic delays above the active phase period, the synchronization measure increases with increasing synaptic strength and time delay in small-world networks. In random networks, however, it increases only with increasing synaptic strength.
Dinucleotide controlled null models for comparative RNA gene prediction.
Gesell, Tanja; Washietl, Stefan
2008-05-27
Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model that includes stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While randomization algorithms for single sequences have existed for many years, the problem has remained challenging for multiple alignments, for which no algorithm was previously available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model and used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure-based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments while preserving dinucleotide content. It can be used to obtain more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can also be considered.
SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.
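As a hedged illustration of dinucleotide-aware randomization (a much simpler stand-in for SISSIz's alignment-level model; all names and the example sequence below are illustrative), a first-order Markov chain fitted to a sequence's dinucleotide counts generates randomized sequences whose expected dinucleotide content matches the original:

```python
import random
from collections import defaultdict

def markov_randomize(seq, rng):
    """Randomize a sequence with a first-order Markov chain fitted to its
    dinucleotide counts, so the *expected* dinucleotide content is preserved.
    (Exact preservation requires an Euler-path shuffle, e.g. Altschul-Erickson;
    SISSIz further extends the idea to whole multiple alignments.)"""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    out = [seq[0]]
    for _ in range(len(seq) - 1):
        succ = counts[out[-1]]
        total = sum(succ.values())
        if total == 0:                      # dead-end state: restart randomly
            out.append(rng.choice(seq))
            continue
        r = rng.random() * total            # sample the next base in
        for b, c in succ.items():           # proportion to dinucleotide counts
            r -= c
            if r <= 0:
                out.append(b)
                break
    return "".join(out)

rng = random.Random(3)
seq = "ACGTGCGTACGTTACGGCATGCATGCAATTGCCGTA"
rand_seq = markov_randomize(seq, rng)
```

Such randomized sequences can serve as null controls for folding-energy statistics, which is the role the dinucleotide-preserving alignments play in the screens described above.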
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies, which focused on only one missing-data mechanism, this study proposes a two-stage method that handles both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by students (or their parents) selected from child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable was missing not at random (NMAR) and 7.14% was missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR in the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resamples, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. It was found that the influencing factors on annual medical expenditure among students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively address non-response bias and selection bias in the dependent variable of survey data.
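The predictive mean matching step used for the MAR values can be sketched as follows (a minimal single-predictor illustration, not the study's implementation; all variable names and parameters are assumptions): fit a regression on the complete cases, then impute each missing outcome by drawing from the observed outcomes of the k donors whose fitted values are closest to the recipient's fitted value.

```python
import random

def pmm_impute(x, y, k=5, rng=None):
    """Predictive mean matching with a single predictor: fit OLS on the
    complete cases, then replace each missing y with the observed y of one
    of the k donors whose fitted values are closest to the recipient's."""
    rng = rng or random.Random()
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in obs)
            / sum((xi - mx) ** 2 for xi, _ in obs))
    alpha = my - beta * mx
    fitted = [(alpha + beta * xi, yi) for xi, yi in obs]   # (prediction, donor y)
    completed = []
    for xi, yi in zip(x, y):
        if yi is not None:
            completed.append(yi)
        else:
            pred = alpha + beta * xi
            donors = sorted(fitted, key=lambda f: abs(f[0] - pred))[:k]
            completed.append(rng.choice(donors)[1])
    return completed

# Hypothetical data: y roughly 2x with noise, three values missing.
rng = random.Random(11)
x = list(range(100))
y = [2.0 * xi + rng.gauss(0.0, 1.0) for xi in x]
for i in (10, 50, 90):
    y[i] = None
completed = pmm_impute(x, y, k=5, rng=rng)
```

Because imputed values are always drawn from observed donors, PMM never produces implausible values such as negative expenditures, which is one reason it performed well in the resampling comparison described above.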