Extending the BEAGLE library to a multi-FPGA platform.
Jin, Zheming; Bakos, Jason D
2013-01-19
Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships among a set of species from DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel of these tools. These computations are data intensive but contain fine-grained parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general-purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data-parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the PLF and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be modeled as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency: 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup over BEAGLE's CPU implementation on a dual Xeon 5520 and a 3X speedup over BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data-parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool that generates a floating-point pipeline under resource and throughput constraints matched to the target platform. We found that using low-latency floating-point operators can significantly reduce FPGA area while still meeting timing requirements on the target platform, and that this design methodology can achieve performance exceeding that of a GPU-based coprocessor. PMID:23331707
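The abstract's performance model is a one-line roofline-style calculation that can be reproduced directly; a minimal sketch (the function name is ours, all numbers are quoted from the abstract):

```python
# Roofline-style throughput estimate for the BEAGLE FPGA kernel,
# using only the figures quoted in the abstract.

def attainable_gflops(ops_per_byte, peak_bw_gbs, mem_efficiency):
    """Memory-bound throughput: arithmetic intensity x effective bandwidth."""
    return ops_per_byte * peak_bw_gbs * mem_efficiency

intensity = 130 / 64            # 130 flops per 64 bytes of I/O ~= 2.03 ops/byte
gflops = attainable_gflops(intensity, peak_bw_gbs=76.8, mem_efficiency=0.5)
print(f"{gflops:.0f} Gflops")   # 78 Gflops, matching the reported throughput
```

Because the kernel is memory-bound, the model says throughput scales linearly with both bandwidth and memory efficiency, which is why the paper's design methodology targets memory efficiency first.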
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
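The likelihood-ratio decision rule described above is easy to simulate. The sketch below uses the simplest equal-variance Gaussian signal detection model with parameter choices of our own; it shows the mirror effect (regularity 1): as memory strength d grows, hit rates rise while false-alarm rates fall.

```python
import numpy as np

rng = np.random.default_rng(0)

def llr(x, d):
    # Log-likelihood ratio of "old" (N(d,1)) vs "new" (N(0,1)) for evidence x;
    # the shared Gaussian constants cancel, leaving a linear function of x.
    return d * x - d * d / 2

n = 100_000
rates = {}
for d in (1.0, 2.0):                    # weak vs strong memory conditions
    old = rng.normal(d, 1.0, n)         # evidence on studied ("old") trials
    new = rng.normal(0.0, 1.0, n)       # evidence on unstudied ("new") trials
    hit = np.mean(llr(old, d) > 0)      # respond "old" when the LR exceeds 1
    fa = np.mean(llr(new, d) > 0)
    rates[d] = (hit, fa)
    print(f"d={d}: hit rate {hit:.3f}, false-alarm rate {fa:.3f}")
```

Strengthening memory moves hits and false alarms in opposite directions around the fixed LR criterion, which is exactly the mirror pattern the paper derives formally for four distributional models.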
Hu, Meng; Clark, Kelsey L.; Gong, Xiajing; Noudoost, Behrad; Li, Mingyao; Moore, Tirin
2015-01-01
Inferotemporal (IT) neurons are known to exhibit persistent, stimulus-selective activity during the delay period of object-based working memory tasks. Frontal eye field (FEF) neurons show robust, spatially selective delay-period activity during memory-guided saccade tasks. We present a copula regression paradigm to examine the interaction of these two types of signals between areas IT and FEF of the monkey during a working memory task. This paradigm is based on copula models that can account for both the marginal distribution over the spiking activity of individual neurons within each area and the joint distribution over the ensemble activity of neurons between areas. Taking popular generalized linear models (GLMs) as the marginal models, we developed a general and flexible likelihood framework that uses the copula to integrate separate GLMs into a joint regression analysis. Such joint analysis essentially leads to a multivariate analog of the marginal GLM theory and hence efficient model estimation. In addition, we show that Granger causality between spike trains can be readily assessed via the likelihood ratio statistic. The performance of this method is validated by extensive simulations and compares favorably to the widely used GLMs. When applied to the spiking activity of simultaneously recorded FEF and IT neurons during a working memory task, we observed a significant Granger causal influence from FEF to IT, but not in the opposite direction, suggesting a role for the FEF in the selection and retention of visual information during working memory. The copula model has the potential to provide unique neurophysiological insights into network properties of the brain. PMID:26063909
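The likelihood-ratio statistic used for Granger causality between spike trains can be illustrated with plain Poisson GLMs (not the paper's copula model): fit a full model that includes the source neuron's history and a reduced model that omits it, then compare log-likelihoods. All data and parameter values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Synthetic spike counts in which the target neuron's rate depends on the
# source neuron's previous bin, so a source->target influence exists
# by construction.
n = 2000
source = rng.poisson(0.2, n)
lagged = np.concatenate(([0], source[:-1]))
target = rng.poisson(np.exp(-1.5 + 1.0 * lagged))

def neg_llf(beta, X, y):
    """Poisson GLM negative log-likelihood (log link), constants dropped."""
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)

X_full = np.column_stack([np.ones(n), lagged])   # intercept + source history
X_red = X_full[:, :1]                            # intercept only

llf_full = -minimize(neg_llf, np.zeros(2), args=(X_full, target)).fun
llf_red = -minimize(neg_llf, np.zeros(1), args=(X_red, target)).fun

# Nested models: twice the log-likelihood difference is approximately
# chi-square(1) under the null of no influence.
lr_stat = 2 * (llf_full - llf_red)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR statistic {lr_stat:.1f}, p = {p_value:.3g}")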
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of the locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can incorporate the correlation among the data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach to fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, due to the intense computation and high memory required to solve a large linear system. Motivated by the existence of geometrically regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood-type estimating function that is optimal within a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
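A toy version of the paper's 2-d.o.f. example can be sketched as follows: with a translation-only pose and an isotropic Gaussian noise model, the maximum-likelihood pose is found here by coarse grid search. All point sets and parameter values are invented, and the nearest-point association is a simplification of the paper's surface-feature formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Translation-only (2 d.o.f.) maximum-likelihood pose estimation:
# find the shift maximizing the Gaussian likelihood of observed points
# around the shifted model points.
model = rng.uniform(0, 10, size=(40, 2))          # hypothetical model point set
true_shift = np.array([1.2, -0.7])
observed = model + true_shift + rng.normal(0, 0.05, size=model.shape)

def log_likelihood(shift, sigma=0.05):
    """Sum of per-point Gaussian log-densities, nearest-model-point match."""
    d2 = ((observed[:, None, :] - (model + shift)[None, :, :]) ** 2).sum(-1)
    return -d2.min(axis=1).sum() / (2 * sigma**2)

# Coarse grid search over candidate shifts; a gradient-based refinement
# would normally follow in a real implementation.
grid = np.linspace(-2, 2, 81)
best = max((log_likelihood(np.array([dx, dy])), dx, dy)
           for dx in grid for dy in grid)
print(f"estimated shift = ({best[1]:.2f}, {best[2]:.2f})")
```

Widening the noise model's dispersion for the initial search and narrowing it for refinement mirrors the paper's single unified treatment of recognition and pose refinement.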
ERIC Educational Resources Information Center
Criss, Amy H.; McClelland, James L.
2006-01-01
The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…
2010-01-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
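The edge posterior probability that pplacer reports is, conceptually, a normalization of per-edge likelihoods (times a prior) across candidate edges. A minimal sketch with invented log-likelihood values and a flat prior over edges:

```python
import numpy as np

# Hypothetical per-edge log-likelihoods for one query sequence placed on
# four candidate edges of the reference tree (values invented).
edge_log_like = np.array([-1202.3, -1205.9, -1208.1, -1214.0])

# With a flat prior, the posterior is the normalized likelihood; subtracting
# the max before exponentiating keeps the computation numerically stable.
w = np.exp(edge_log_like - edge_log_like.max())
edge_posterior = w / w.sum()
print(np.round(edge_posterior, 4))
```

A spread-out posterior (rather than one dominant edge) is exactly the edge-by-edge uncertainty signal the abstract describes.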
Ellenberg, Leah; Liu, Qi; Gioia, Gerard; Yasui, Yutaka; Packer, Roger J.; Mertens, Ann; Donaldson, Sarah S.; Stovall, Marilyn; Kadan-Lottick, Nina; Armstrong, Gregory; Robison, Leslie L.; Zeltzer, Lonnie K.
2009-01-01
Background Among survivors of childhood cancer, those with Central Nervous System (CNS) malignancies have been found to be at greatest risk for neuropsychological dysfunction in the first few years following diagnosis and treatment. This study follows survivors to adulthood to assess the long-term impact of childhood CNS malignancy and its treatment on neurocognitive functioning. Participants & Methods As part of the Childhood Cancer Survivor Study (CCSS), 802 survivors of childhood CNS malignancy, 5937 survivors of non-CNS malignancy and 382 siblings without cancer completed a 25-item Neurocognitive Questionnaire (CCSS-NCQ) at least 16 years post cancer diagnosis assessing task efficiency, emotional regulation, organizational skills and memory. Neurocognitive functioning in survivors of CNS malignancy was compared to that of non-CNS malignancy survivors and a sibling cohort. Within the group of CNS malignancy survivors, multiple linear regression was used to assess the contribution of demographic, illness and treatment variables to reported neurocognitive functioning and the relationship of reported neurocognitive functioning to educational, employment and income status. Results Survivors of CNS malignancy reported significantly greater neurocognitive impairment on all factors assessed by the CCSS-NCQ than non-CNS cancer survivors or siblings (p<.01), with mean T scores of CNS malignancy survivors substantially more impaired than those of the sibling cohort (p<.001), with a large effect size for Task Efficiency (1.16) and a medium effect size for Memory (.68). Within the CNS malignancy group, medical complications, including hearing deficits, paralysis and cerebrovascular incidents, resulted in a greater likelihood of reported deficits on all of the CCSS-NCQ factors, with generally small effect sizes (.22-.50).
Total brain irradiation predicted greater impairment on Task Efficiency and Memory (Effect sizes: .65 and .63, respectively), as did partial brain irradiation, with smaller effect sizes (.49 and .43, respectively). Ventriculoperitoneal (VP) shunt placement was associated with small deficits on the same scales (Effect sizes: Task Efficiency .26, Memory .32). Female gender predicted a greater likelihood of impaired scores on 2 scales, with small effect sizes (Task Efficiency .38, Emotional Regulation .45), while diagnosis before age 2 years resulted in less likelihood of reported impairment on the Memory factor with a moderate effect size (.64). CNS malignancy survivors with more impaired CCSS-NCQ scores demonstrated significantly lower educational attainment (p<.01), less household income (p<.001) and less full time employment (p<.001). Conclusions Survivors of childhood CNS malignancy are at significant risk for impairment in neurocognitive functioning in adulthood, particularly if they have received cranial radiation, had a VP shunt placed, suffered a cerebrovascular incident or are left with hearing or motor impairments. Reported neurocognitive impairment adversely affected important adult outcomes, including education, employment, income and marital status. PMID:19899829
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Rummel, Jan; Meiser, Thorsten
2016-01-01
Event-based prospective memory is the ability to remember to execute an intention when an environmental cue occurs. It has been argued that, due to their special meaning, these cues are discrepant from their environment and are therefore sometimes noticed spontaneously. In line with this assumption, the likelihood that an intention will be executed increases with increasing cue discrepancy. It is not yet clear, however, whether these improvements are due to facilitated spontaneous noticing rather than to an increase in the efficiency of controlled cue processing. To further investigate the spontaneous nature of cue-discrepancy benefits, we presented participants with stimuli that were unrelated to the intention but discrepant from other stimuli. Specifically, we experimentally increased the processing fluency of some stimuli for participants currently holding an intention by using different priming procedures. We found that stimuli whose fluency was increased via spaced repeated stimulus presentation (Experiment 1) or via short pre-exposure (Experiments 2a to 3) elicited a tendency to fulfill the intention despite its actual inappropriateness. Findings were inconsistent as to whether cue-memory uncertainty fosters reliance on cue discrepancy for intention retrieval (Experiments 2a and 3). Taken together, the present findings provide converging evidence for a spontaneous discrepancy-based prospective-memory process that operates independently of controlled processes.
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
The use of cue familiarity during retrieval failure is affected by past versus future orientation.
Cleary, Anne M
2015-01-01
Cue familiarity that is brought on by cue resemblance to memory representations is useful for judging the likelihood of a past occurrence with an item that fails to actually be retrieved from memory. The present study examined the extent to which this type of resemblance-based cue familiarity is used in future-oriented judgments made during retrieval failure. Cue familiarity was manipulated using a previously-established method of creating differing degrees of feature overlap between the cue and studied items in memory, and the primary interest was in how these varying degrees of cue familiarity would influence future-oriented feeling-of-knowing (FOK) judgments given in instances of cued recall failure. The present results suggest that participants do use increases in resemblance-based cue familiarity to infer an increased likelihood of future recognition of an unretrieved target, but not to the extent that they use it to infer an increased likelihood of past experience with an unretrieved target. During retrieval failure, the increase in future-oriented FOK judgments with increasing cue familiarity was significantly less than the increase in past-oriented recognition judgments with increasing cue familiarity.
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-11-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
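The "fractional integration" in ARFIMA comes from the binomial expansion of the differencing operator (1-B)^d. A short sketch of the standard recursion for its coefficients shows the slow, hyperbolic decay that produces long memory (the parameter value d=0.3 is our own choice):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of (1-B)^d via the recursion
    w_k = w_{k-1} * (k - 1 - d) / k, with w_0 = 1."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For 0 < d < 0.5 the weights decay hyperbolically (~ k^(-d-1)), so distant
# past values never become negligible, unlike ARMA's geometric decay.
w = frac_diff_weights(0.3, 10)
print(np.round(w, 4))
```

This hyperbolic tail is what lets ARFIMA separate long-memory behavior (the d parameter) from the short-memory AR and MA terms that the paper treats as nuisance parameters.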
Scheins, J J; Vahedipour, K; Pietrzyk, U; Shah, N J
2015-12-21
For high-resolution, iterative 3D PET image reconstruction, the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In practice, the SRM easily comprises billions of non-zero matrix elements to cover the tremendous number of LORs provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating-point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the number of independent SRM elements, thus allowing for significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to wasted memory bandwidth and inefficient cache usage. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable efficient single-instruction-multiple-data (SIMD) vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs.
In addition, the calculation time is further reduced by using simultaneous multi-threading (SMT). A global speedup factor of 11 without SMT and above 100 with SMT has been achieved for the improved CPU-based implementation while obtaining equivalent numerical results.
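The MLEM algorithm the abstract refers to is a simple multiplicative fixed-point iteration on the image. A dense toy sketch (the real SRM is huge, sparse, and symmetry-compressed, all of which this deliberately omits; sizes and data are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny dense stand-in for the system response matrix: LORs x voxels.
A = rng.uniform(0, 1, size=(50, 20))
x_true = rng.uniform(0.5, 2.0, size=20)
y = rng.poisson(A @ x_true)               # measured coincidence counts

x = np.ones(20)                            # non-negative initial image
sens = A.sum(axis=0)                       # sensitivity image: A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)   # guard against division by zero
    x *= (A.T @ ratio) / sens              # multiplicative MLEM update
```

Each iteration needs one forward projection (A @ x) and one back projection (A.T @ ratio), which is why the memory access pattern of the projectors dominates the runtime at realistic SRM sizes.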
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a quadrature order q of 60 yields numerical error negligible with respect to parameter-estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
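The quadrature idea can be sketched for an inhomogeneous Poisson process, whose log-likelihood is Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt: replacing the integral with Gauss-Legendre quadrature removes the dependence on a fine time grid. Function names and the constant-rate sanity check below are ours, not the paper's.

```python
import numpy as np

def log_likelihood(spike_times, rate, T, order=60):
    """Poisson-process log-likelihood with the integral term evaluated by
    Gauss-Legendre quadrature of the given order instead of time binning."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    t = 0.5 * T * (nodes + 1)                 # map [-1, 1] -> [0, T]
    integral = 0.5 * T * np.sum(weights * rate(t))
    return np.sum(np.log(rate(np.asarray(spike_times)))) - integral

# Sanity check: for a constant rate the exact answer is n*log(lam) - lam*T.
lam, T = 5.0, 10.0
spikes = np.sort(np.random.default_rng(4).uniform(0, T, 48))
ll = log_likelihood(spikes, lambda t: np.full_like(t, lam), T)
exact = 48 * np.log(lam) - lam * T
print(ll, exact)
```

The cost of the integral term is O(q) per likelihood evaluation rather than O(n) time bins, which is the source of the O(np²) → O(qp²) saving quoted in the abstract.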
Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A.; Chen, Ying-Cheng
2018-05-01
Quantum memory is an important component in the long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best record to date in all kinds of schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.
Neural suppression of irrelevant information underlies optimal working memory performance.
Zanto, Theodore P; Gazzaley, Adam
2009-03-11
Our ability to focus attention on task-relevant information and ignore distractions is reflected by differential enhancement and suppression of neural activity in sensory cortex (i.e., top-down modulation). Such selective, goal-directed modulation of activity may be intimately related to memory, such that the focus of attention biases the likelihood of successfully maintaining relevant information by limiting interference from irrelevant stimuli. Despite recent studies elucidating the mechanistic overlap between attention and memory, the relationship between top-down modulation of visual processing during working memory (WM) encoding and subsequent recognition performance has not yet been established. Here, we provide neurophysiological evidence in healthy, young adults that top-down modulation of early visual processing (< 200 ms from stimulus onset) is intimately related to subsequent WM performance, such that the likelihood of successfully remembering relevant information is associated with limiting interference from irrelevant stimuli. The consequences of a failure to ignore distractors on recognition performance were replicated for two types of feature-based memory, motion direction and color. Moreover, attention to irrelevant stimuli was reflected neurally during the WM maintenance period as an increased memory load. These results suggest that neural enhancement of relevant information is not the primary determinant of high-level performance, but rather optimal WM performance is dependent on effectively filtering irrelevant information through neural suppression to prevent overloading a limited memory capacity.
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, Timothy; Gramacy, Robert; Franzke, Christian; Watkins, Nicholas
2016-04-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. We present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators [1]. In addition we show how the method can be used to perform joint inference of the stability exponent and the memory parameter when ARFIMA is extended to allow for alpha-stable innovations. Such models can be used to study systems where heavy tails and long range memory coexist. [1] Graves et al, Nonlin. Processes Geophys., 22, 679-700, 2015; doi:10.5194/npg-22-679-2015.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
1991-11-01
...quantization assigned two quantization values: one for demodulation values that were larger than zero and another for demodulation values that were smaller than zero (for maximum-likelihood decisions). Logic 0 was assigned for a positive demodulation value and a logic 1 was...
A shared neural ensemble links distinct contextual memories encoded close in time
NASA Astrophysics Data System (ADS)
Cai, Denise J.; Aharoni, Daniel; Shuman, Tristan; Shobe, Justin; Biane, Jeremy; Song, Weilin; Wei, Brandon; Veshkini, Michael; La-Vu, Mimi; Lou, Jerry; Flores, Sergio E.; Kim, Isaac; Sano, Yoshitake; Zhou, Miou; Baumgaertel, Karsten; Lavi, Ayal; Kamata, Masakazu; Tuszynski, Mark; Mayford, Mark; Golshani, Peyman; Silva, Alcino J.
2016-06-01
Recent studies suggest that a shared neural ensemble may link distinct memories encoded close in time. According to the memory allocation hypothesis, learning triggers a temporary increase in neuronal excitability that biases the representation of a subsequent memory to the neuronal ensemble encoding the first memory, such that recall of one memory increases the likelihood of recalling the other memory. Here we show in mice that the overlap between the hippocampal CA1 ensembles activated by two distinct contexts acquired within a day is higher than when they are separated by a week. Several findings indicate that this overlap of neuronal ensembles links two contextual memories. First, fear paired with one context is transferred to a neutral context when the two contexts are acquired within a day but not across a week. Second, the first memory strengthens the second memory within a day but not across a week. Older mice, known to have lower CA1 excitability, do not show the overlap between ensembles, the transfer of fear between contexts, or the strengthening of the second memory. Finally, in aged mice, increasing cellular excitability and activating a common ensemble of CA1 neurons during two distinct context exposures rescued the deficit in linking memories. Taken together, these findings demonstrate that contextual memories encoded close in time are linked by directing storage into overlapping ensembles. Alteration of these processes by ageing could affect the temporal structure of memories, thus impairing efficient recall of related information.
Striatal contributions to declarative memory retrieval
Scimeca, Jason M.; Badre, David
2012-01-01
Declarative memory is known to depend on the medial temporal lobe memory system. Recently, there has been renewed focus on the relationship between the basal ganglia and declarative memory, including the involvement of striatum. However, the contribution of striatum to declarative memory retrieval remains unknown. Here, we review neuroimaging and neuropsychological evidence for the involvement of the striatum in declarative memory retrieval. From this review, we propose that, along with the prefrontal cortex (PFC), the striatum primarily supports cognitive control of memory retrieval. We conclude by proposing three hypotheses for the specific role of striatum in retrieval: (1) Striatum modulates the re-encoding of retrieved items in accord with their expected utility (adaptive encoding), (2) striatum selectively admits information into working memory that is expected to increase the likelihood of successful retrieval (adaptive gating), and (3) striatum enacts adjustments in cognitive control based on the outcome of retrieval (reinforcement learning). PMID:22884322
Do Judgments of Learning Predict Automatic Influences of Memory?
ERIC Educational Resources Information Center
Undorf, Monika; Böhm, Simon; Cüpper, Lutz
2016-01-01
Current memory theories generally assume that memory performance reflects both recollection and automatic influences of memory. Research on people's predictions about the likelihood of remembering recently studied information on a memory test, that is, on judgments of learning (JOLs), suggests that both magnitude and resolution of JOLs are linked…
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
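The ABC rejection step that underpins the design utility above is simple to state: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands within a tolerance of the observed one — the retained draws approximate the posterior without any likelihood evaluations. A generic sketch with a toy normal-mean example (our choice of example and names, not the paper's epidemic or macroparasite models):

```python
import random
import statistics

def abc_rejection(observed_summary, simulate, prior_sample,
                  n_sims=5000, eps=0.3, seed=0):
    """Keep prior draws whose simulated summary statistic falls within
    eps of the observed summary; accepted draws approximate the posterior."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_summary) <= eps:
            accepted.append(theta)
    return accepted

# Toy example: infer a normal mean from its sample mean (n = 20, sd = 1),
# under a Uniform(0, 10) prior, with observed sample mean 4.0.
post = abc_rejection(
    observed_summary=4.0,
    simulate=lambda th, rng: statistics.mean(rng.gauss(th, 1.0) for _ in range(20)),
    prior_sample=lambda rng: rng.uniform(0.0, 10.0),
)
```

In the paper's setting the accepted sample would be formed from pre-computed model simulations and its precision fed into the design utility; the rejection kernel itself is unchanged.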
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexity are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains only the most likely codeword and its metric for a given received sequence r = (r₁, r₂, ..., rₙ). The algorithm basically uses a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
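As a point of reference for what trellis-based algorithms accelerate, here is brute-force soft-decision ML decoding of the (7,4) Hamming code by exhaustive correlation over all 2⁴ codewords; RMLD reaches the same decision while avoiding the exponential enumeration. (The generator matrix and BPSK correlation metric are a standard textbook setup, not taken from the cited report.)

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code in systematic form.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(msg):
    """Codeword c = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def ml_decode(r):
    """Soft-decision ML decoding by exhaustive search: maximize the
    correlation sum_i (1 - 2*c_i) * r_i, where BPSK maps bit 0 -> +1
    and bit 1 -> -1. Trellis-based RMLD computes the same argmax
    recursively from small per-section metric tables."""
    best, best_metric = None, float('-inf')
    for msg in product([0, 1], repeat=4):
        c = encode(msg)
        metric = sum((1 - 2 * ci) * ri for ci, ri in zip(c, r))
        if metric > best_metric:
            best, best_metric = c, metric
    return best
```

For an (n, k) code this search costs O(2ᵏ·n) per received sequence, which is exactly the cost the recursive table construction avoids for long codes.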
NASA Technical Reports Server (NTRS)
Kelly, D. A.; Fermelia, A.; Lee, G. K. F.
1990-01-01
An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
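The Kalman filter at the center of the design reduces, in the scalar case, to a short predict/update cycle; the recursive maximum likelihood identification described above would adapt parameters such as the noise covariances Q and R online rather than fixing them. A minimal scalar sketch with illustrative constants of our choosing:

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=1e-4, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: state estimate and its variance; z: new measurement.
    A, H: state-transition and observation coefficients;
    Q, R: process and measurement noise variances (here fixed,
    whereas the cited design identifies them recursively)."""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Filtering repeated noiseless measurements of a constant drives the
# estimate toward that constant.
x, P = 0.0, 1.0
for _ in range(200):
    x, P = kalman_step(x, P, 5.0)
```

The matrix form used for attitude determination is identical in structure, with x a state vector and P, Q, R covariance matrices.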
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for an arbitrary video sequence is still a challenging problem that largely remains unsolved. Nevertheless, if this problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem through a combination of strategy, policy, and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than typical object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies, and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
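The elitism-based immigrants scheme described above is easy to state in code: each generation, mutated copies of the previous elite replace a fraction of the population, maintaining diversity biased toward the best-known region. A minimal sketch on the static OneMax problem (the parameter values and test problem are ours, chosen for illustration; the paper evaluates dynamic environments):

```python
import random

def elitism_based_immigrants_ga(fitness, n_bits=20, pop_size=30,
                                immigrant_ratio=0.2, p_mut=0.05,
                                generations=50, seed=1):
    """GA where each generation a fraction of the population is replaced
    by 'immigrants' created by mutating the previous generation's elite."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    n_imm = int(pop_size * immigrant_ratio)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]

        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        def mutate(ind):
            # Bit-flip mutation with per-bit probability p_mut
            return [b ^ (rng.random() < p_mut) for b in ind]

        nxt = [elite[:]]  # keep the elite itself (elitism)
        while len(nxt) < pop_size - n_imm:
            nxt.append(mutate(tournament()))
        # Elitism-based immigrants: mutated copies of the elite fill the
        # remaining slots, replacing the worst individuals.
        nxt.extend(mutate(elite) for _ in range(n_imm))
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is the number of ones; the optimum is the all-ones string.
best = elitism_based_immigrants_ga(sum)
```

The memory-based variant is identical except that the base individual for immigrants is retrieved from an explicit memory of past elites rather than from the previous generation.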
Beyond valence in the perception of likelihood: the role of emotion specificity.
DeSteno, D; Petty, R E; Wegener, D T; Rucker, D D
2000-03-01
Positive and negative moods have been shown to increase likelihood estimates of future events matching these states in valence (e.g., E. J. Johnson & A. Tversky, 1983). In the present article, 4 studies provide evidence that this congruency bias (a) is not limited to valence but functions in an emotion-specific manner, (b) derives from the informational value of emotions, and (c) is not the inevitable outcome of likelihood assessment under heightened emotion. Specifically, Study 1 demonstrates that sadness and anger, 2 distinct, negative emotions, differentially bias likelihood estimates of sad and angering events. Studies 2 and 3 replicate this finding in addition to supporting an emotion-as-information (cf. N. Schwarz & G. L. Clore, 1983), as opposed to a memory-based, mediating process for the bias. Finally, Study 4 shows that when the source of the emotion is salient, a reversal of the bias can occur given greater cognitive effort aimed at accuracy.
A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan
2011-01-01
Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storage to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify their block-device-level characteristics and macrobenchmarks to reveal their filesystem-level characteristics.
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
Memory for faces: the effect of facial appearance and the context in which the face is encountered.
Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio
2015-03-01
We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
Two C++ Libraries for Counting Trees on a Phylogenetic Terrace.
Biczok, R; Bozsoky, P; Eisenmann, P; Ernst, J; Ribizel, T; Scholz, F; Trefzer, A; Weber, F; Hamann, M; Stamatakis, A
2018-05-08
The presence of terraces in phylogenetic tree space, that is, a potentially large number of distinct tree topologies that have exactly the same analytical likelihood score, was first described by Sanderson et al. (2011). However, popular software tools for maximum likelihood and Bayesian phylogenetic inference do not yet routinely report whether inferred phylogenies reside on a terrace. We believe this is due to the lack of an efficient library to (i) determine if a tree resides on a terrace, (ii) calculate how many trees reside on a terrace, and (iii) enumerate all trees on a terrace. In our bioinformatics practical, which is set up as a programming contest, we developed two efficient and independent C++ implementations of the SUPERB algorithm by Constantinescu and Sankoff (1995) for counting and enumerating trees on a terrace. Both implementations yield exactly the same results, are more than one order of magnitude faster, and require one order of magnitude less memory than a previous third-party Python implementation. The source codes are available under the GNU GPL at https://github.com/terraphast.
Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J
2015-09-03
RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
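The core read-assignment idea above — grouping reads by the set of transcripts they are compatible with and maximizing the likelihood over abundances — can be illustrated with a plain EM algorithm for fractional read assignment. Note this sketch shows the generic technique only, not EMSAR's joint Poisson model over optimal segment sets or its suffix-array indexing:

```python
def em_abundance(read_compat, n_transcripts, iters=200):
    """Toy EM for transcript abundance estimation.
    read_compat: for each read, the list of transcript indices it maps to
    (multi-mapped reads simply list several transcripts).
    Returns the estimated abundance fractions theta."""
    theta = [1.0 / n_transcripts] * n_transcripts
    for _ in range(iters):
        counts = [0.0] * n_transcripts
        for compat in read_compat:
            z = sum(theta[t] for t in compat)
            for t in compat:
                counts[t] += theta[t] / z       # E-step: fractional assignment
        total = sum(counts)
        theta = [c / total for c in counts]     # M-step: renormalize
    return theta

# 6 reads unique to transcript 0, 2 unique to transcript 1, 4 ambiguous.
reads = [[0]] * 6 + [[1]] * 2 + [[0, 1]] * 4
abundance = em_abundance(reads, 2)
```

With this data the fixed point satisfies θ₀ = (6 + 4θ₀)/12, i.e. θ₀ = 0.75, so the ambiguous reads are apportioned 3:1 rather than discarded — the same motivation EMSAR cites for using nearly all mapped reads, including multi-gene mappers.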
Vernaz-Gris, Pierre; Huang, Kun; Cao, Mingtao; Sheremet, Alexandra S; Laurat, Julien
2018-01-25
Quantum memory for flying optical qubits is a key enabler for a wide range of applications in quantum information. A critical figure of merit is the overall storage and retrieval efficiency. So far, despite the recent achievements of efficient memories for light pulses, the storage of qubits has suffered from limited efficiency. Here we report on a quantum memory for polarization qubits that combines an average conditional fidelity above 99% and efficiency around 68%, thereby demonstrating a reversible qubit mapping where more information is retrieved than lost. The qubits are encoded with weak coherent states at the single-photon level and the memory is based on electromagnetically-induced transparency in an elongated laser-cooled ensemble of cesium atoms, spatially multiplexed for dual-rail storage. This implementation preserves high optical depth on both rails, without compromise between multiplexing and storage efficiency. Our work provides an efficient node for future tests of quantum network functionalities and advanced photonic circuits.
2015-09-28
...the performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication... whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a... efficiently. Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java
High Storage Efficiency and Large Fractional Delay of EIT-Based Memory
NASA Astrophysics Data System (ADS)
Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite
2013-05-01
In long-distance quantum communication and optical quantum computation, an efficient and long-lived quantum memory is an important component. We experimentally demonstrate that a time-space-reversing method plus an optimal pulse shape can improve the storage efficiency (SE) of light pulses to 78% in cold media based on the effect of electromagnetically induced transparency (EIT). We obtain a large fractional delay of 74 at 50% SE, which is the best record so far. The measured classical fidelity of the recalled pulse is higher than 90% and nearly independent of the storage time, implying that the optical memory maintains excellent phase coherence. Our results suggest that this scheme may be readily applied to single-photon quantum states, owing to the quantum nature of the EIT light-matter interaction. This study advances EIT-based quantum memory toward practical quantum information applications.
The importance of material-processing interactions in inducing false memories.
Chan, Jason C K; McDermott, Kathleen B; Watson, Jason M; Gallo, David A
2005-04-01
Deep encoding, relative to shallow encoding, has been shown to increase the probability of false memories in the Deese/Roediger-McDermott (DRM) paradigm (Thapar & McDermott, 2001; Toglia, Neuschatz, & Goodwin, 1999). In two experiments, we showed important limitations on the generalizability of this phenomenon; these limitations are clearly predicted by existing theories regarding the mechanisms underlying such false memories (e.g., Roediger, Watson, McDermott, & Gallo, 2001). Specifically, asking subjects to attend to phonological relations among lists of phonologically associated words (e.g., weep, steep, etc.) increased the likelihood of false recall (Experiment 1) and false recognition (Experiment 2) of a related, nonpresented associate (e.g., sleep), relative to a condition in which subjects attended to meaningful relations among the words. These findings occurred along with a replication of prior findings (i.e., a semantic encoding task, relative to a phonological encoding task, enhanced the likelihood of false memory arising from a list of semantically associated words), and they place important constraints on theoretical explanations of false memory.
Memory: Why You're Losing It, How to Save It.
ERIC Educational Resources Information Center
Smith, Lee
1995-01-01
Describes the five types of memory: (1) semantic--what words and symbols mean; (2) implicit--how to do something such as ride a bike; (3) remote--data collected over the years; (4) working--short-term memory; and (5) episodic--recent experiences. Assesses the likelihood of each type's decaying over time. (JOW)
Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul
2017-01-01
We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high-level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination, instead it capitalizes on the existence of large reference panels—comprising thousands of reference haplotypes—and assumes that the reference haplotypes can adequately represent the target haplotypes over short regions unaltered. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. PMID:28348060
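The record above takes genotype likelihoods from ultralow coverage sequencing as input rather than hard genotype calls. As a hypothetical illustration (not GeneImp's actual code), the likelihood of each diploid genotype at a biallelic site can be derived from reference/alternate read counts with a simple binomial sequencing-error model:

```python
from math import comb

def genotype_likelihoods(ref_reads, alt_reads, error=0.01):
    """Likelihood of the observed reads under each diploid genotype
    (0, 1, or 2 copies of the alternate allele), using a binomial
    read-sampling model with a per-read error rate."""
    n = ref_reads + alt_reads
    liks = []
    for g in (0, 1, 2):
        # Probability a single read shows the alternate allele given genotype g.
        p_alt = {0: error, 1: 0.5, 2: 1.0 - error}[g]
        liks.append(comb(n, alt_reads) * p_alt**alt_reads * (1 - p_alt)**(n - alt_reads))
    total = sum(liks)
    return [lik / total for lik in liks]  # normalized to sum to 1

# At 0.5x coverage a site often has a single read; one ALT read leaves
# substantial uncertainty across all three genotypes.
print(genotype_likelihoods(0, 1))
```

With so little data the likelihoods stay diffuse, which is exactly why a probabilistic representation (and a large reference panel) is needed downstream.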
2009-01-01
Background: Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods: In this paper, we describe a peeling-based, deterministic, exact algorithm to compute genotype probabilities efficiently for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets, called anterior and posterior cutsets, and reusing these intermediate results to compute the likelihood. Examples: A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551
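The anterior/posterior cutset idea can be sketched in miniature. The code below is not the paper's peeling algorithm; it is an illustrative analogue on a three-member chain (grandparent, parent, child) with made-up transmission and penetrance numbers, showing how two stored tables let every member's genotype marginal be read off without re-evaluating the full likelihood once per genotype:

```python
import numpy as np

# Illustrative numbers only: T[i, j] is an assumed genotype transmission
# matrix, prior holds founder genotype frequencies, and pen[k] holds each
# member's phenotype likelihood per genotype.
T = np.array([[0.70, 0.25, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
prior = np.array([0.25, 0.5, 0.25])
pen = np.array([[0.9, 0.5, 0.1],   # grandparent
                [0.8, 0.6, 0.2],   # parent
                [0.7, 0.7, 0.3]])  # child

# Anterior pass: probability of the data "above" each member, per genotype.
ant = [prior * pen[0]]
for k in (1, 2):
    ant.append((ant[-1] @ T) * pen[k])

# Posterior pass: probability of the data "below" each member, per genotype.
post = [np.ones(3)] * 3
for k in (1, 0):
    post[k] = T @ (pen[k + 1] * post[k + 1])

# Marginals reuse both stored tables instead of recomputing the likelihood.
likelihood = ant[-1].sum()
marginals = [a * p / (a * p).sum() for a, p in zip(ant, post)]
print(likelihood, marginals[1])
```

The invariant that makes the reuse valid is that ant[k] * post[k] sums to the same overall likelihood for every member k, which is the chain analogue of combining an anterior and a posterior cutset.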
The effect of nonadiabaticity on the efficiency of quantum memory based on an optical cavity
NASA Astrophysics Data System (ADS)
Veselkova, N. G.; Sokolov, I. V.
2017-07-01
Quantum efficiency is an important characteristic of quantum memory devices aimed at recording, storing, and reading the quantum state of light signals. In the case of memory based on an ensemble of cold atoms placed in an optical cavity, the efficiency is restricted, in particular, by relaxation processes in the system of active atomic levels. We show how the effect of relaxation on the quantum efficiency can be determined in a memory-usage regime where the evolution of signals in time is not arbitrarily slow on the scale of the field lifetime in the cavity, so that the frequently used approximation of adiabatically eliminating the quantized cavity-mode field cannot be applied. Accounting for the effect of nonadiabaticity on memory quality is of interest because increasing the field-medium coupling parameter requires a higher cavity quality factor, whereas storing and processing sequences of many signals in the memory implies that their duration is reduced. We examine the applicability of the well-known efficiency estimates based on the system cooperativity parameter and derive an estimate of more general form. In connection with the theoretical description of this type of memory, we also discuss qualitative differences in the behavior of a random source introduced into the Heisenberg-Langevin equations for atomic variables in the cases of a large and a small number of atoms.
Sequential structural damage diagnosis algorithm using a change point detection method
NASA Astrophysics Data System (ADS)
Noh, H.; Rajagopal, R.; Kiremidjian, A. S.
2013-11-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known, specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected, using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions were explored, and with a known post-damage feature distribution the algorithm was able to identify damage at low false alarm rates, particularly when using multidimensional damage-sensitive features. For the unknown-distribution cases, the post-damage distribution was consistently estimated, and the detection delays were only a few time steps longer than the delays from the general method that assumes the post-damage feature distribution is known. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
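For the known-distribution case described above, the sequential likelihood-ratio test can be sketched as a CUSUM over Gaussian log-likelihood ratios. The distributions, threshold, and data below are illustrative assumptions, not values from the paper:

```python
def cusum_detect(stream, mu0, mu1, sigma, threshold):
    """Accumulate the log-likelihood ratio of the post-damage model
    N(mu1, sigma) versus the pre-damage model N(mu0, sigma), resetting
    at zero (CUSUM); report the first index crossing the threshold."""
    s = 0.0
    for t, x in enumerate(stream):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        if s > threshold:
            return t  # damage declared at this sample
    return None  # no change detected

# A feature sequence whose mean shifts from 0 to 2 at sample index 20.
stream = [0.0] * 20 + [2.0] * 20
print(cusum_detect(stream, mu0=0.0, mu1=2.0, sigma=1.0, threshold=9.0))  # -> 24
```

The detection delay of five samples illustrates the trade-off the threshold controls: a higher threshold lowers the false alarm rate but lengthens the delay.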
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
ERIC Educational Resources Information Center
Konold, Clifford E.; Bates, John A.
1982-01-01
Significant correlations between measures of cognitive structure and performance were found using a procedure distinguishing between episodic and semantic memory as a heuristic with achievement test items. The design increased the likelihood of indications of semantic memory. Higher-order and lower-order cognitive processes are discussed.
Associative false consumer memory: effects of need for cognition and encoding task.
Parker, Andrew; Dagnall, Neil
2018-04-01
Two experiments investigated the effects of product-attribute associations on false consumer memory. In both experiments, subjects were presented with sets of related product attributes under incidental encoding conditions. Later, recognition memory was tested with studied attributes, non-studied but associated attributes (critical lures), and non-studied unrelated attributes. In Experiment 1, the effect of Need for Cognition (NFC) was assessed. Individuals high in NFC recognised more presented attributes and falsely recognised more associative critical lures. The increase in both true and associative false memory was accompanied by a greater number of responses that index the retrieval of detailed episodic-like information. Experiment 2 replicated the main findings through an experimental manipulation of the encoding task that required subjects to consider purchase likelihood. Explanations for these findings are considered from the perspective of activation processes and knowledge structures in the form of gist-based representations.
A predictive framework for evaluating models of semantic organization in free recall
Morton, Neal W; Polyn, Sean M.
2016-01-01
Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization is driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243
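The likelihood-based comparison of semantic structures can be sketched outside of CMR: score each candidate similarity matrix by the log-likelihood it assigns to an observed recall order, drawing each next recall by softmax over similarity to the just-recalled item. This is a deliberately simplified, hypothetical stand-in for the paper's model:

```python
import numpy as np

def recall_log_likelihood(sim, sequence, beta=2.0):
    """Log-likelihood of a recall sequence: each next item is chosen from
    the not-yet-recalled items with softmax probability proportional to
    exp(beta * similarity to the previously recalled item)."""
    ll = 0.0
    remaining = set(range(sim.shape[0]))
    remaining.discard(sequence[0])  # first recall taken as given
    for prev, nxt in zip(sequence, sequence[1:]):
        cand = sorted(remaining)
        w = np.exp(beta * sim[prev, cand])
        ll += np.log(w[cand.index(nxt)] / w.sum())
        remaining.discard(nxt)
    return ll

# A structure aligned with the observed order beats an uninformative one.
chain = np.zeros((4, 4))
for i in range(3):
    chain[i, i + 1] = chain[i + 1, i] = 1.0  # strong i <-> i+1 associations
flat = np.zeros((4, 4))                      # no semantic structure at all
observed = [0, 1, 2, 3]
print(recall_log_likelihood(chain, observed) >
      recall_log_likelihood(flat, observed))  # -> True
```

Ranking candidate similarity matrices by this sequence likelihood is the essence of the model-comparison logic; the paper applies it with far richer retrieval dynamics.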
Thermally efficient and highly scalable In2Se3 nanowire phase change memory
NASA Astrophysics Data System (ADS)
Jin, Bo; Kang, Daegun; Kim, Jungsik; Meyyappan, M.; Lee, Jeong-Soo
2013-04-01
The electrical characteristics of nonvolatile In2Se3 nanowire phase change memory are reported. Size-dependent memory switching behavior was observed in nanowires of varying diameters, and the set/reset threshold voltage was as low as 3.45 V/6.25 V for a 60 nm nanowire, which is promising for highly scalable nanowire memory applications. The size-dependent thermal resistance of In2Se3 nanowire memory cells was also estimated, with values as high as 5.86×10^13 and 1.04×10^6 K/W for a 60 nm nanowire memory cell in the amorphous and crystalline phases, respectively. Such high thermal resistances are beneficial for improving thermal efficiency and thus reducing programming power consumption, per Fourier's law. The evaluation of thermal resistance provides an avenue to develop thermally efficient memory cell architectures.
Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem
2017-10-01
Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
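The decomposition of errors into guesses and noisy memory responses described above is typically done with a uniform-plus-von-Mises mixture model. The sketch below is in the spirit of such mixture models, not the authors' actual fitting code; the simulation parameters and the crude grid-search MLE are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate continuous-report errors (radians): a mixture of uniform
# guesses and memory-based responses with von Mises noise.
true_guess_rate, true_kappa, n = 0.3, 8.0, 2000
guesses = rng.uniform(-np.pi, np.pi, size=n)
memory = rng.vonmises(0.0, true_kappa, size=n)
errors = np.where(rng.random(n) < true_guess_rate, guesses, memory)

def neg_log_lik(g, k, x):
    """Mixture density: g * uniform + (1 - g) * vonMises(0, k)."""
    vm = np.exp(k * np.cos(x)) / (2 * np.pi * np.i0(k))
    return -np.sum(np.log(g / (2 * np.pi) + (1 - g) * vm))

# Crude grid-search MLE over guess rate g and concentration kappa.
gs = np.linspace(0.01, 0.99, 99)
ks = np.linspace(1.0, 20.0, 96)
nll = np.array([[neg_log_lik(g, k, errors) for k in ks] for g in gs])
gi, ki = np.unravel_index(nll.argmin(), nll.shape)
print(f"guess rate ~ {gs[gi]:.2f}, kappa ~ {ks[ki]:.1f}")
```

The fitted guess rate plays the role of the filter-strength estimate in the study, while kappa corresponds to memory precision; in practice one would use a proper optimizer rather than a grid.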
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind-tunnel test data, is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J
2010-08-01
Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.
High efficiency coherent optical memory with warm rubidium vapour
Hosseini, M.; Sparkes, B.M.; Campbell, G.; Lam, P.K.; Buchler, B.C.
2011-01-01
By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory. PMID:21285952
Ha, S; Matej, S; Ispiryan, M; Mueller, K
2013-02-01
We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has slightly faster or approximately comparable time performance than FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.
NASA Astrophysics Data System (ADS)
Natsui, Masanori; Hanyu, Takahiro
2018-04-01
In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to solving this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory access while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and compact and low-power implementation can be performed as compared with the conventional cache-based one. Through the evaluation using a system consisting of a general purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system up to 3.71 times, while a 2.29-fold area reduction is achieved compared with the cache-based one.
NASA Astrophysics Data System (ADS)
Gujarati, Tanvi P.; Wu, Yukai; Duan, Luming
2018-03-01
The Duan-Lukin-Cirac-Zoller quantum repeater protocol, proposed to realize long-distance quantum communication, requires the use of quantum memories. Atomic ensembles interacting with optical beams via off-resonant Raman scattering serve as convenient on-demand quantum memories. Here, a complete free-space, three-dimensional theory of the associated read and write processes for this quantum memory is worked out with the aim of understanding the intrinsic retrieval efficiency. We develop a formalism to calculate the transverse mode structure for the signal and idler photons and use it to study the intrinsic retrieval efficiency under various configurations. The effects of atomic density fluctuations and atomic motion are incorporated by numerically simulating this system for a range of realistic experimental parameters. We obtain results that describe the variation in the intrinsic retrieval efficiency as a function of the memory storage time for a skewed beam configuration at finite temperature, which provides valuable information for optimizing the retrieval efficiency in experiments.
Simple Atomic Quantum Memory Suitable for Semiconductor Quantum Dot Single Photons
NASA Astrophysics Data System (ADS)
Wolters, Janik; Buser, Gianni; Horsley, Andrew; Béguin, Lucas; Jöckel, Andreas; Jahn, Jan-Philipp; Warburton, Richard J.; Treutlein, Philipp
2017-08-01
Quantum memories matched to single photon sources will form an important cornerstone of future quantum network technology. We demonstrate such a memory in warm Rb vapor with on-demand storage and retrieval, based on electromagnetically induced transparency. With an acceptance bandwidth of δf = 0.66 GHz, the memory is suitable for single photons emitted by semiconductor quantum dots. In this regime, vapor cell memories offer an excellent compromise between storage efficiency, storage time, noise level, and experimental complexity, and atomic collisions have negligible influence on the optical coherences. Operation of the memory is demonstrated using attenuated laser pulses on the single photon level. For a 50 ns storage time, we measure η_e2e^(50 ns) = 3.4(3)% end-to-end efficiency of the fiber-coupled memory, with a total intrinsic efficiency η_int = 17(3)%. Straightforward technological improvements can boost the end-to-end efficiency to η_e2e ≈ 35%; beyond that, increasing the optical depth and exploiting the Zeeman substructure of the atoms will allow such a memory to approach near unity efficiency. In the present memory, the unconditional read-out noise level of 9×10^-3 photons is dominated by atomic fluorescence, and for input pulses containing on average μ_1 = 0.27(4) photons, the signal to noise level would be unity.
NASA Astrophysics Data System (ADS)
Saputro, Dewi Retno Sari; Widyaningsih, Purnami
2017-08-01
In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate numerical solution is needed. Two popular numerical approaches are Newton's method and the Quasi-Newton (QN) methods. Newton's method is computationally expensive because each iteration requires the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing derivative computation with direct function evaluations, approximating the Hessian matrix with update formulas such as Davidon-Fletcher-Powell (DFP). The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like DFP, maintains a positive-definite Hessian approximation. Because BFGS requires large memory, a variant that decreases memory usage is needed, namely Limited-Memory BFGS (LBFGS). The purpose of this research is to compare the efficiency of the BFGS and LBFGS methods in the iterative computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We find that the BFGS and LBFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.
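The O(nm) cost that distinguishes LBFGS comes from the two-loop recursion, which applies the inverse-Hessian approximation built from the last m curvature pairs without ever forming an n×n matrix. The sketch below is a generic illustration on a small quadratic, with an assumed Armijo backtracking line search, not the paper's GWOLR implementation:

```python
import numpy as np
from collections import deque

def lbfgs_direction(grad, history):
    """Two-loop recursion: apply the inverse-Hessian approximation using
    only the last m (s, y) pairs, giving O(n*m) work and memory."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(history):                 # newest pair first
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append((rho, a))
        q -= a * y
    if history:
        s, y = history[-1]
        q *= s.dot(y) / y.dot(y)                   # initial Hessian scaling
    for (s, y), (rho, a) in zip(history, reversed(alphas)):  # oldest first
        b2 = rho * y.dot(q)
        q += (a - b2) * s
    return -q

# Minimize the convex quadratic f(x) = 0.5 x'Ax - b'x with memory m = 5.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x = np.zeros(3)
history = deque(maxlen=5)                          # the m stored (s, y) pairs
for _ in range(50):
    g = A @ x - b
    d = lbfgs_direction(g, list(history))
    step = 1.0                                     # Armijo backtracking
    while f(x + step * d) > f(x) + 1e-4 * step * g.dot(d) and step > 1e-10:
        step *= 0.5
    x_new = x + step * d
    s_vec, y_vec = x_new - x, A @ x_new - A @ x    # y is the gradient change
    if y_vec.dot(s_vec) > 1e-12:                   # keep curvature pairs only
        history.append((s_vec, y_vec))
    x = x_new
print(x)  # approaches A^{-1} b = [1, 0.1, 0.01]
```

Because only the m vector pairs are stored, memory scales with n*m rather than n^2, which is the saving the abstract attributes to LBFGS.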
Phenotypic regional fMRI activation patterns during memory encoding in MCI and AD
Browndyke, Jeffrey N.; Giovanello, Kelly; Petrella, Jeffrey; Hayden, Kathleen; Chiba-Falek, Ornit; Tucker, Karen A.; Burke, James R.; Welsh-Bohmer, Kathleen A.
2014-01-01
Background: Reliable blood-oxygen-level-dependent (BOLD) fMRI phenotypic biomarkers of Alzheimer's disease (AD) or mild cognitive impairment (MCI) are likely to emerge only from a systematic, quantitative, and aggregate examination of the functional neuroimaging research literature. Methods: A series of random-effects, activation likelihood estimation (ALE) meta-analyses were conducted on studies of episodic memory encoding operations in AD and MCI samples relative to normal controls. ALE analyses were based upon a thorough literature search for all task-based functional neuroimaging studies in AD and MCI published up to January 2010. The analyses covered 16 fMRI studies, which yielded 144 distinct foci for ALE meta-analysis. Results: ALE results indicated several regional task-based BOLD consistencies in MCI and AD patients relative to normal controls across the aggregate BOLD functional neuroimaging research literature. Patients with AD and those at significant risk (MCI) showed statistically significant, consistent activation differences during episodic memory encoding in the medial temporal lobe (MTL), specifically the parahippocampal gyrus, as well as the superior frontal gyrus, precuneus, and cuneus, relative to normal controls. Conclusions: ALE consistencies broadly support the presence of frontal compensatory activity, MTL activity alteration, and posterior midline "default mode" hyperactivation during episodic memory encoding attempts in the diseased or prospective pre-disease condition. Taken together, these robust commonalities may form the foundation for a task-based fMRI phenotype of memory encoding in AD. PMID:22841497
Research about Memory Detection Based on the Embedded Platform
NASA Astrophysics Data System (ADS)
Sun, Hao; Chu, Jian
Memory resources in embedded systems are very limited. Taking a Linux-based embedded ARM board as the platform, this article puts forward two efficient memory detection technologies suited to the characteristics of embedded software. In particular, for programs that need specific libraries, the article puts forward portable memory detection methods that help program designers reduce human errors and improve programming quality, and therefore make better use of valuable embedded memory resources.
Koutstaal, Wilma
2003-03-01
Investigations of memory deficits in older individuals have concentrated on their increased likelihood of forgetting events or details of events that were actually encountered (errors of omission). However, mounting evidence demonstrates that normal cognitive aging also is associated with an increased propensity for errors of commission--shown in false alarms or false recognition. The present study examined the origins of this age difference. Older and younger adults each performed three types of memory tasks in which details of encountered items might influence performance. Although older adults showed greater false recognition of related lures on a standard (identical) old/new episodic recognition task, older and younger adults showed parallel effects of detail on repetition priming and meaning-based episodic recognition (decreased priming and decreased meaning-based recognition for different relative to same exemplars). The results suggest that the older adults encoded details but used them less effectively than the younger adults in the recognition context requiring their deliberate, controlled use.
Coherent Optical Memory with High Storage Efficiency and Large Fractional Delay
NASA Astrophysics Data System (ADS)
Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite A.
2013-02-01
A quantum memory for photons with high storage efficiency and long lifetime is an essential component in long-distance quantum communication and optical quantum computation. Here, we report a 78% storage efficiency of light pulses in a cold atomic medium based on the effect of electromagnetically induced transparency. At 50% storage efficiency, we obtain a fractional delay of 74, which is the best record to date. The classical fidelity of the recalled pulse is better than 90% and nearly independent of the storage time, as confirmed by direct measurement of the phase evolution of the output light pulse with a beat-note interferometer. Such excellent phase coherence between the stored and recalled light pulses suggests that the current result may be readily applied to single-photon wave packets. Our work significantly advances the technology of electromagnetically induced transparency-based optical memory and may find practical applications in long-distance quantum communication and optical quantum computation.
Brain oscillatory substrates of visual short-term memory capacity.
Sauseng, Paul; Klimesch, Wolfgang; Heise, Kirstin F; Gruber, Walter R; Holz, Elisa; Karim, Ahmed A; Glennon, Mark; Gerloff, Christian; Birbaumer, Niels; Hummel, Friedhelm C
2009-11-17
The amount of information that can be stored in visual short-term memory is strictly limited to about four items. Therefore, memory capacity relies not only on the successful retention of relevant information but also on efficient suppression of distracting information, visual attention, and executive functions. However, completely separable neural signatures for these memory capacity-limiting factors remain to be identified. Because of its functional diversity, oscillatory brain activity may offer a utile solution. In the present study, we show that capacity-determining mechanisms, namely retention of relevant information and suppression of distracting information, are based on neural substrates independent of each other: the successful maintenance of relevant material in short-term memory is associated with cross-frequency phase synchronization between theta (rhythmical neural activity around 5 Hz) and gamma (> 50 Hz) oscillations at posterior parietal recording sites. On the other hand, electroencephalographic alpha activity (around 10 Hz) predicts memory capacity based on efficient suppression of irrelevant information in short-term memory. Moreover, repetitive transcranial magnetic stimulation at alpha frequency can modulate short-term memory capacity by influencing the ability to suppress distracting information. Taken together, the current study provides evidence for a double dissociation of brain oscillatory correlates of visual short-term memory capacity.
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
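The stabilising effect of a weakly informative prior can be shown with a minimal point-estimate sketch. This is not the study's model: the synthetic data, prior precision, and gradient-ascent optimiser are illustrative assumptions; a Gaussian prior simply turns the maximum-likelihood fit into a MAP estimate with shrunken coefficients.

```python
import numpy as np

# Illustrative contrast between ML and Gaussian-prior MAP logistic regression.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, prior_precision=0.0, lr=0.1, steps=3000):
    """Gradient ascent on log-likelihood - (prior_precision/2)*||w||^2.
    prior_precision = 0 gives maximum likelihood; > 0 gives a MAP estimate."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (y - sigmoid(X @ w)) - prior_precision * w
        w += lr * grad / len(y)
    return w

# Hypothetical "episodes": one noisy feature predicting DRG-revision risk.
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < sigmoid(X @ np.array([-1.0, 2.0]))).astype(float)

w_ml = fit_logistic(X, y)                         # classical ML estimate
w_map = fit_logistic(X, y, prior_precision=50.0)  # Gaussian-prior MAP estimate
print(abs(w_map[1]) < abs(w_ml[1]))               # True: the prior shrinks w
```

The prior pulls coefficients toward zero, which is the mechanism behind the parameter stability the abstract reports.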
Configurable memory system and method for providing atomic counting operations in a memory device
Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin
2010-09-14
A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.
Light storage in a cold atomic ensemble with a high optical depth
NASA Astrophysics Data System (ADS)
Park, Kwang-Kyoon; Chough, Young-Tak; Kim, Yoon-Ho
2017-06-01
A quantum memory with a high storage efficiency and a long coherence time is an essential element in quantum information applications. Here, we report our recent development of an optical quantum memory with a rubidium-87 cold atom ensemble. By increasing the optical depth of the medium, we have achieved a storage efficiency of 65% and a coherence time of 51 μs for a weak laser pulse. The result of a numerical analysis based on the Maxwell-Bloch equations agrees well with the experimental results. Our result paves the way toward an efficient optical quantum memory and may find applications in photonic quantum information processing.
Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sczyrba, Alex; Pratap, Abhishek; Canon, Shane
2011-03-22
Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor, based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets of different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references we will also present assembly quality comparisons between the two assemblers.
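The two ideas the abstract highlights, two-bit nucleotide encoding and de Bruijn graph construction from read k-mers, can be sketched in a few lines. This is a toy illustration, not Convey's constructor.

```python
# Two bits per base, as the abstract notes (A=00, C=01, G=10, T=11).
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq):
    """Pack a DNA string into an integer, two bits per base."""
    v = 0
    for base in seq:
        v = (v << 2) | CODE[base]
    return v

def de_bruijn(reads, k):
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes following it."""
    graph = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph.setdefault(kmer[:-1], set()).add(kmer[1:])
    return graph

g = de_bruijn(["ACGTACGT"], 4)
print(pack("ACGT"))      # 27 (0b00011011)
print(sorted(g["ACG"]))  # ['CGT']
```

The packed representation is what makes the small-data-type argument work: four bases fit in one byte, so wide memory words carry many bases per access.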
Energy-efficient writing scheme for magnetic domain-wall motion memory
NASA Astrophysics Data System (ADS)
Kim, Kab-Jin; Yoshimura, Yoko; Ham, Woo Seung; Ernst, Rick; Hirata, Yuushou; Li, Tian; Kim, Sanghoon; Moriyama, Takahiro; Nakatani, Yoshinobu; Ono, Teruo
2017-04-01
We present an energy-efficient magnetic domain-writing scheme for domain wall (DW) motion-based memory devices. A cross-shaped nanowire is employed to inject a domain into the nanowire through current-induced DW propagation. The energy required for injecting the magnetic domain is more than one order of magnitude lower than that for the conventional field-based writing scheme. The proposed scheme is beneficial for device miniaturization because the threshold current for DW propagation scales with the device size, which cannot be achieved in the conventional field-based technique.
Framing Affects Scale Usage for Judgments of Learning, Not Confidence in Memory
ERIC Educational Resources Information Center
England, Benjamin D.; Ortegren, Francesca R.; Serra, Michael J.
2017-01-01
Framing metacognitive judgments of learning (JOLs) in terms of the likelihood of forgetting rather than remembering consistently yields a counterintuitive outcome: The mean of participants' forget-framed JOLs is often higher (after reverse-scoring) than the mean of their remember-framed JOLs, suggesting greater confidence in memory. In the present…
Ignoring Memory Hints: The Stubborn Influence of Environmental Cues on Recognition Memory
ERIC Educational Resources Information Center
Selmeczy, Diana; Dobbins, Ian G.
2017-01-01
Recognition judgments can benefit from the use of environmental cues that signal the general likelihood of encountering familiar versus unfamiliar stimuli. While incorporating such cues is often adaptive, there are circumstances (e.g., eyewitness testimony) in which observers should fully ignore environmental cues in order to preserve memory…
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
Efficient detection of dangling pointer error for C/C++ programs
NASA Astrophysics Data System (ADS)
Zhang, Wenzhe
2017-08-01
Dangling pointer errors are pervasive in C/C++ programs and very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and more than 50% over a previous page-protection-based method.
A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations
Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang
2008-01-01
Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. The Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to limit resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory-economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
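The abstract does not detail its factoring approach; one common hardware-friendly factoring for exponentials, shown below as an assumption rather than the paper's exact method, splits e^x into an exact power of two (a shift in fixed-point hardware) times a short polynomial on a small residual, avoiding large lookup tables.

```python
import math

LN2 = math.log(2.0)

def exp_factored(x):
    """e^x = 2^k * e^r with k = round(x/ln 2), so |r| <= ln(2)/2."""
    k = round(x / LN2)
    r = x - k * LN2
    # A short Taylor polynomial is accurate on the small residual r.
    poly = 1.0 + r + r * r / 2 + r ** 3 / 6 + r ** 4 / 24
    return math.ldexp(poly, k)  # multiply by 2^k exactly

print(abs(exp_factored(3.7) - math.exp(3.7)) < 1e-3)  # True
```

The multiply-by-2^k step costs essentially nothing in hardware, which is why range reduction of this kind conserves logic resources.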
Simple and Efficient Single Photon Filter for a Rb-based Quantum Memory
NASA Astrophysics Data System (ADS)
Stack, Daniel; Li, Xiao; Quraishi, Qudsia
2015-05-01
Distribution of entangled quantum states over significant distances is important to the development of future quantum technologies such as long-distance cryptography, networks of atomic clocks, distributed quantum computing, etc. Long-lived quantum memories and single photons are building blocks for systems capable of realizing such applications. The ability to store and retrieve quantum information while filtering unwanted light signals is critical to the operation of quantum memories based on neutral-atom ensembles. We report on an efficient frequency filter which uses a glass cell filled with 85Rb vapor to attenuate noise photons by an order of magnitude with little loss to the single photons associated with the operation of our cold 87Rb quantum memory. An Ar buffer gas is required to differentiate between signal and noise photons. Our simple, passive filter requires no optical pumping or external frequency references and provides an additional 18 dB attenuation of our pump laser for every 1 dB loss of the single photon signal. We observe improved non-classical correlations, and our data show that the addition of a frequency filter increases the non-classical correlations and readout efficiency of our quantum memory by ~35%.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-ups result in heavy table memory access and, in turn, high table power consumption. To reduce the heavy memory access, and hence the power consumption, of current methods, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce memory access during table look-up, and thereby reduce table power consumption. Specifically, our scheme reduces the searching and matching operations for code_word by exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving table look-up power. The experimental results show that the proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with a sequential-search scheme, saving considerable power for CAVLD in H.264/AVC.
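The gain from index search over sequential search can be shown with a toy unary-style code table (not the actual H.264 CAVLC tables): grouping codewords by the length of their leading zero run lets the decoder jump straight to the right entry instead of scanning the table.

```python
# Hypothetical VLC table: unary-style codewords mapped to symbols.
VLC = {"1": "A", "01": "B", "001": "C", "0001": "D"}

# Index keyed by the number of leading zeros -> (codeword, symbol).
INDEX = {len(cw) - 1: (cw, sym) for cw, sym in VLC.items()}

def decode(bits):
    """Count the zero run, then resolve the symbol with one index lookup."""
    zeros = len(bits) - len(bits.lstrip("0"))
    cw, sym = INDEX[zeros]
    return sym, bits[len(cw):]   # symbol plus the remaining bitstream

sym, rest = decode("001101")
print(sym, rest)  # C 101
```

One index access replaces up to len(VLC) sequential comparisons, which is the memory-access saving the abstract quantifies for the real tables.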
Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.
Kurkela, Kyle A; Dennis, Nancy A
2016-01-29
Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe.
PIMS: Memristor-Based Processing-in-Memory-and-Storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Jeanine
Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about a O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.
Memory Detection 2.0: The First Web-Based Memory Detection Test
Kleinberg, Bennett; Verschuere, Bruno
2015-01-01
There is accumulating evidence that reaction times (RTs) can be used to detect recognition of critical (e.g., crime) information. A limitation of this research base is its reliance upon small samples (average n = 24), and indications of publication bias. To advance RT-based memory detection, we report upon the development of the first web-based memory detection test. Participants in this research (Study 1: n = 255; Study 2: n = 262) tried to hide two high-salient (birthday, country of origin) and two low-salient (favourite colour, favourite animal) autobiographical details. RTs allowed the detection of concealed autobiographical information and, as predicted, did so more successfully than error rates, and more successfully for high-salient than for low-salient items. While much remains to be learned, memory detection 2.0 seems to offer an interesting new platform to efficiently and validly conduct RT-based memory detection research. PMID:25874966
Memory-Efficient Analysis of Dense Functional Connectomes.
Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian
2016-01-01
The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
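The on-demand idea described above can be sketched concisely: store only the (z-scored) time series and compute any correlation-matrix element when it is requested, instead of materialising the full node-by-node matrix. This is a minimal illustration, not the published Matlab package.

```python
import numpy as np

class LazyCorrelation:
    """Correlation matrix whose elements are computed on demand."""

    def __init__(self, timeseries):  # shape: (nodes, timepoints)
        ts = np.asarray(timeseries, dtype=float)
        ts = ts - ts.mean(axis=1, keepdims=True)
        # Pre-normalise once so each element is a single dot product.
        self.z = ts / np.linalg.norm(ts, axis=1, keepdims=True)

    def __getitem__(self, idx):
        i, j = idx
        return float(self.z[i] @ self.z[j])  # Pearson r, computed lazily

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 50))  # 1000 nodes: the full matrix would hold 10^6 cells
corr = LazyCorrelation(data)
print(round(corr[3, 3], 6))  # 1.0 (self-correlation)
```

Memory stays proportional to the time series (1000 x 50 values here) rather than to the number of node pairs, which is the footprint reduction the abstract describes.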
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.
Zierke, Stephanie; Bakos, Jason D
2010-04-12
Maximum Likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
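The PLF inner loop named above has a standard per-node, per-site form under Felsenstein pruning: a parent's conditional likelihood vector is the elementwise product, over its two children, of (transition matrix @ child likelihood vector). The sketch below shows that step for four nucleotide states; the identity transition matrices are placeholders used purely as a sanity check.

```python
import numpy as np

def plf_node(P_left, P_right, L_left, L_right):
    """Conditional likelihoods at a parent node for a single site."""
    return (P_left @ L_left) * (P_right @ L_right)

# Identity transition matrices (zero branch length): a parent of two tips
# that both observed 'A' must itself favour 'A'.
I = np.eye(4)
tip_A = np.array([1.0, 0.0, 0.0, 0.0])
print(plf_node(I, I, tip_A, tip_A))  # [1. 0. 0. 0.]
```

Each site's computation is independent, which is the loop-level parallelism the FPGA pipeline exploits.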
Processing-in-Memory Enabled Graphics Processors for 3D Rendering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Chenhao; Song, Shuaiwen; Wang, Jing
2017-02-06
The performance of 3D rendering on a Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory based GPUs for efficient 3D rendering.
Strategic offloading of delayed intentions into the external environment.
Gilbert, Sam J
2015-01-01
In everyday life, we often use external artefacts such as diaries to help us remember intended behaviours. In addition, we commonly manipulate our environment, for example by placing reminders in noticeable places. Yet strategic offloading of intentions to the external environment is not typically permitted in laboratory tasks examining memory for delayed intentions. What factors influence our use of such strategies, and what behavioural consequences do they have? This article describes four online experiments (N = 1196) examining a novel web-based task in which participants hold intentions for brief periods, with the option to strategically externalize these intentions by creating a reminder. This task significantly predicted participants' fulfilment of a naturalistic intention embedded within their everyday activities up to one week later (with greater predictive ability than more traditional prospective memory tasks, albeit with weak effect size). Setting external reminders improved performance, and it was more prevalent in older adults. Furthermore, participants set reminders adaptively, based on (a) memory load, and (b) the likelihood of distraction. These results suggest the importance of metacognitive processes in triggering intention offloading, which can increase the probability that intentions are eventually fulfilled.
An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, R; Stolken, J; Jannetti, C
Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.
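The efficiency gain of implicit over explicit integration comes from stability at large step sizes on stiff equations. The sketch below is illustrative only (the actual UMAT integrates the SMA constitutive equations, not this scalar test equation): backward Euler on y' = -ky needs a Newton solve per step but remains stable where forward Euler blows up.

```python
def forward_euler(k, y, h, steps):
    for _ in range(steps):
        y = y + h * (-k * y)
    return y

def backward_euler(k, y, h, steps):
    for _ in range(steps):
        # Solve y_new = y + h*(-k*y_new) by Newton iteration. The equation is
        # linear here, so one iteration suffices; nonlinear constitutive
        # models need several iterations per step.
        y_new = y
        for _ in range(5):
            residual = y_new - y + h * k * y_new
            y_new -= residual / (1.0 + h * k)   # divide by d(residual)/dy_new
        y = y_new
    return y

k, h = 1000.0, 0.01  # h*k = 10: far beyond the explicit stability limit
print(abs(backward_euler(k, 1.0, h, 10)) < 1.0)  # True: stable, decaying
print(abs(forward_euler(k, 1.0, h, 10)) > 1.0)   # True: unstable, growing
```

Fewer, larger steps at the cost of a per-step solve is the trade that yields the reported 100x speedup.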
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the water cycle, a growing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by the model's prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient sampling algorithm DREAMzs is incorporated into the local sampling step of NSE. The comparison results demonstrate that the improved NSE raises the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is mitigated by using adaptive sparse grid surrogates.
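The NSE recursion described above can be sketched in a few dozen lines. This is a minimal pure-Python illustration on an assumed 1D toy problem (uniform prior on [0, 1], Gaussian likelihood bump), not the hydrological models of the study; the Metropolis step count and scale adaptation are illustrative choices.

```python
import math
import random
import statistics

random.seed(7)

def log_like(theta):
    # Toy likelihood: Gaussian bump (sigma = 0.1) under a uniform prior on [0, 1].
    # True evidence Z = integral of L over the prior ~ 0.1 * sqrt(2*pi) ~ 0.2507.
    return -0.5 * ((theta - 0.5) / 0.1) ** 2

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def constrained_draw(start, logl_min, scale, steps=25):
    # Metropolis-style random walk exploring the prior subject to L > L_min
    # (the "local sampling" step that dominates NSE efficiency).
    theta = start
    for _ in range(steps):
        prop = theta + random.gauss(0.0, scale)
        if 0.0 <= prop <= 1.0 and log_like(prop) > logl_min:
            theta = prop
    return theta

def nested_sampling(n_live=200, n_iter=2000):
    live = [random.random() for _ in range(n_live)]
    log_z, log_x = -math.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda j: log_like(live[j]))
        logl_min = log_like(live[worst])
        log_x_new = -i / n_live                     # expected log prior volume shrinks geometrically
        log_w = math.log(math.exp(log_x) - math.exp(log_x_new))
        log_z = logaddexp(log_z, logl_min + log_w)  # accumulate this shell's contribution
        log_x = log_x_new
        seed = live[random.choice([j for j in range(n_live) if j != worst])]
        scale = max(statistics.pstdev(live), 1e-9)  # adapt step size to the live-point spread
        live[worst] = constrained_draw(seed, logl_min, scale)
    return math.exp(log_z)

z_hat = nested_sampling()
```

With 200 live points the estimate lands near the true evidence of about 0.25; swapping `constrained_draw` for a stronger sampler (the role DREAMzs plays in the paper) is exactly the improvement the abstract describes.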
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs. The codeword can be decoded without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
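The paper's program-based CAVLC decoder is not reproduced here, but the idea of replacing a VLC table with arithmetic can be illustrated with the Exp-Golomb codes used elsewhere in H.264 (a hedged toy analogue; the stream contents below are made up): the codeword value is computed directly from the prefix length and suffix bits, with no table in memory.

```python
def decode_exp_golomb(bits, pos=0):
    """Decode one unsigned Exp-Golomb codeword from a '0'/'1' string,
    arithmetically rather than via a lookup table.
    Returns (value, next_position)."""
    zeros = 0
    while bits[pos + zeros] == '0':       # count the leading-zero prefix
        zeros += 1
    pos += zeros + 1                      # skip the prefix and the separating '1'
    suffix = bits[pos:pos + zeros]        # read as many suffix bits as prefix zeros
    pos += zeros
    value = (1 << zeros) - 1 + (int(suffix, 2) if suffix else 0)
    return value, pos

# Decode a concatenated stream encoding the values 0, 1, 2, 3, 6.
stream = '1' + '010' + '011' + '00100' + '00111'
out, p = [], 0
while p < len(stream):
    v, p = decode_exp_golomb(stream, p)
    out.append(v)
```

Every decode is a handful of arithmetic operations on registers, which is the same trade the paper makes for CAVLC: compute replaces memory traffic.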
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can decode codes with better performance than those presently in use, yet without requiring an unreasonable amount of computation. A discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads naturally to the scheme that embodies the best features of all of them.
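The candidate-selection idea can be sketched as a Chase-style decoder for a toy (7,4) Hamming code: take the hard decision, flip subsets of the least-reliable symbols, hard-decode each test pattern, and keep the candidate with the best soft correlation. This is an illustrative instance of the general approach, not Greenberger's exact scheme, and a brute-force nearest-codeword search stands in for a real algebraic hard decoder.

```python
import itertools

# Generator matrix of a (7,4) Hamming code (systematic form).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def all_codewords():
    words = []
    for msg in itertools.product([0, 1], repeat=4):
        cw = [0] * 7
        for bit, row in zip(msg, G):
            if bit:
                cw = [a ^ b for a, b in zip(cw, row)]
        words.append(cw)
    return words

CODEWORDS = all_codewords()

def hard_decode(bits):
    # Stand-in for an algebraic hard-decision decoder: nearest codeword.
    return min(CODEWORDS, key=lambda c: sum(a != b for a, b in zip(c, bits)))

def correlation(r, c):
    # Soft score under BPSK mapping 0 -> +1, 1 -> -1.
    return sum(ri * (1 - 2 * ci) for ri, ci in zip(r, c))

def chase_decode(r, n_weak=2):
    hard = [1 if ri < 0 else 0 for ri in r]
    # The least-reliable positions are those with the smallest |r_i|.
    weak = sorted(range(len(r)), key=lambda i: abs(r[i]))[:n_weak]
    best = None
    for flips in itertools.product([0, 1], repeat=n_weak):
        test = hard[:]
        for f, i in zip(flips, weak):
            test[i] ^= f
        cand = hard_decode(test)
        if best is None or correlation(r, cand) > correlation(r, best):
            best = cand
    return best

# All-zero codeword sent as +1s; two low-confidence symbols arrive flipped.
r = [0.9, 1.1, -0.2, 1.0, -0.1, 0.8, 1.2]
decoded = chase_decode(r)
```

Plain hard-decision decoding cannot fix this two-error pattern, but the small candidate set built from the two least-reliable symbols recovers the transmitted word, and here it matches the full maximum-likelihood decision over all 16 codewords.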
The Cognitive Bases of Intelligence Analysis.
1984-01-01
the truth of a single proposition or to discriminate among several propositions. Indicators represent the potentially observable events that form the … serves as a checklist against which to evaluate an actual intelligence product. … If the ideal product is specified in sufficient detail for a particular … Interference in accessing memory occurs for both recognition and recall. Memory retrieval is most efficient when the memories are discriminable. Memories for
Wu, Yufeng
2012-03-01
Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.
Changes in Brain Network Efficiency and Working Memory Performance in Aging
Stanley, Matthew L.; Simpson, Sean L.; Dagenbach, Dale; Lyday, Robert G.; Burdette, Jonathan H.; Laurienti, Paul J.
2015-01-01
Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working memory performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory. PMID:25875001
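The two network metrics used in this study have standard graph-theoretic definitions that are easy to state in code (a minimal sketch for unweighted graphs; the study itself used weighted functional connectivity networks): global efficiency is the mean inverse shortest-path length over node pairs, and local efficiency is that same quantity computed on each node's neighbourhood subgraph.

```python
from collections import deque

def bfs_dists(adj, src):
    # Unweighted shortest-path lengths from src (BFS).
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    # Mean of 1/d(i, j) over ordered node pairs; unreachable pairs contribute 0.
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        d = bfs_dists(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (n * (n - 1))

def local_efficiency(adj):
    # Mean, over nodes, of the global efficiency of each node's neighbourhood subgraph.
    effs = []
    for u, nbrs in adj.items():
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        effs.append(global_efficiency(sub))
    return sum(effs) / len(adj)

path3 = {0: [1], 1: [0, 2], 2: [1]}                      # path graph 0-1-2
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}  # complete graph K4
```

On the complete graph K4 both metrics equal 1 (every pair is directly connected, and every neighbourhood is a triangle), while the 3-node path has global efficiency 5/6, reflecting the single length-2 detour.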
Optimal designs based on the maximum quasi-likelihood estimator
Shen, Gang; Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359
NASA Astrophysics Data System (ADS)
Su, Lihong
In remote sensing communities, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM algorithms, the SVM classification decision function is fully determined by support vectors, which compose a subset of the training sets. In this regard, a solution to optimize SVM learning is to efficiently reduce training sets. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% using the proposed approach. Maximum likelihood classification (MLC) also is applied on the reduced training sets. The results show that MLC can also maintain the classification accuracy. This implies that the most informative data instances can be retained by this approach.
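The reduction step can be sketched with a naive centroid-linkage agglomerative clustering that keeps one representative per cluster within each class (an illustrative O(n^3) toy, not the authors' implementation; the two-Gaussian dataset and per-class cluster counts are assumptions):

```python
import random

random.seed(3)

def d2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def agglomerate(points, k):
    """Centroid-linkage agglomerative clustering: repeatedly merge the two
    closest clusters until k remain, then return each cluster's centroid."""
    clusters = [([p], p) for p in points]            # (members, centroid)
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                dist = d2(clusters[i][1], clusters[j][1])
                if best is None or dist < best[0]:
                    best = (dist, i, j)
        _, i, j = best
        members = clusters[i][0] + clusters[j][0]
        dim = len(members[0])
        centroid = tuple(sum(p[t] for p in members) / len(members)
                         for t in range(dim))
        del clusters[j]                              # j > i, so delete j first
        clusters[i] = (members, centroid)
    return [centroid for _, centroid in clusters]

def reduce_training_set(labelled, k_per_class):
    # Cluster each class separately and keep only the representatives,
    # shrinking the training set while preserving per-class structure.
    reduced = []
    for label in sorted(set(l for _, l in labelled)):
        pts = [p for p, l in labelled if l == label]
        reduced += [(rep, label) for rep in agglomerate(pts, k_per_class)]
    return reduced

# Two well-separated 2-D classes, 20 points each, reduced to 3 reps per class.
cls_a = [(random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)) for _ in range(20)]
cls_b = [(random.gauss(5.0, 0.3), random.gauss(5.0, 0.3)) for _ in range(20)]
data = [(p, 'a') for p in cls_a] + [(p, 'b') for p in cls_b]
small = reduce_training_set(data, 3)
```

The reduced set passed to the SVM trainer is a fraction of the original size, while its representatives still sit inside the regions each class occupies, which is why accuracy can survive aggressive reduction.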
Working Memory and Processing Efficiency in Children's Reasoning.
ERIC Educational Resources Information Center
Halford, Graeme S.; And Others
A series of studies was conducted to determine whether children's reasoning is capacity-limited and whether any such capacity, if it exists, is based on the working memory system. An N-term series (transitive inference) was used as the primary task in an interference paradigm. A concurrent short-term memory load was employed as the secondary task.…
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
Negative effects of item repetition on source memory.
Kim, Kyungmi; Yi, Do-Joon; Raye, Carol L; Johnson, Marcia K
2012-08-01
In the present study, we explored how item repetition affects source memory for new item-feature associations (picture-location or picture-color). We presented line drawings varying numbers of times in Phase 1. In Phase 2, each drawing was presented once with a critical new feature. In Phase 3, we tested memory for the new source feature of each item from Phase 2. Experiments 1 and 2 demonstrated and replicated the negative effects of item repetition on incidental source memory. Prior item repetition also had a negative effect on source memory when different source dimensions were used in Phases 1 and 2 (Experiment 3) and when participants were explicitly instructed to learn source information in Phase 2 (Experiments 4 and 5). Importantly, when the order between Phases 1 and 2 was reversed, such that item repetition occurred after the encoding of critical item-source combinations, item repetition no longer affected source memory (Experiment 6). Overall, our findings did not support predictions based on item predifferentiation, within-dimension source interference, or general interference from multiple traces of an item. Rather, the findings were consistent with the idea that prior item repetition reduces attention to subsequent presentations of the item, decreasing the likelihood that critical item-source associations will be encoded.
Pattacini, Laura; Baeten, Jared M.; Thomas, Katherine K.; Fluharty, Tayler R.; Murnane, Pamela M.; Donnell, Deborah; Bukusi, Elizabeth; Ronald, Allan; Mugo, Nelly; Lingappa, Jairam R.; Celum, Connie; McElrath, M. Juliana; Lund, Jennifer M.
2015-01-01
Objective: Two distinct hypotheses have been proposed for T-cell involvement in protection from HIV-1 acquisition. First, HIV-1-specific memory T-cell responses generated upon HIV-1 exposure could mount an efficient response to HIV-1 and inhibit the establishment of an infection. Second, a lower level of immune activation could reduce the numbers of activated, HIV-1-susceptible CD4+ T-cells, thereby diminishing the likelihood of infection. Methods: To test these hypotheses, we conducted a prospective study among high-risk heterosexual men and women, and tested peripheral blood samples from individuals who subsequently acquired HIV-1 during follow-up (cases) and from a subset of those who remained HIV-1 uninfected (controls). Results: We found no difference in HIV-1-specific immune responses between cases and controls, but Treg frequency was higher in controls as compared to cases and was negatively associated with frequency of effector memory CD4+ T-cells. Conclusions: Our findings support the hypothesis that low immune activation assists in protection from HIV-1 infection. PMID:26656786
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the maximum likelihood estimator, by using a continuum of moment conditions in the GMM framework. However, this computation takes a very long time, dominated by the optimization of the regularization parameter. These calculations are typically processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the C-GMM calculation by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. Two parallel regions contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
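The outer-loop strategy can be illustrated in miniature (a hedged sketch using Python's thread pool as a stand-in for an OpenMP parallel-for; the objective below is a made-up placeholder, not the C-GMM criterion): each outer iterate is an independent task, so parallelizing the outer loop yields the coarsest-grained work units and the least synchronization.

```python
from concurrent.futures import ThreadPoolExecutor

def inner(alpha, grid):
    # Stand-in for the inner-loop evaluation performed per outer iterate.
    return min((alpha - g) ** 2 for g in grid)

def sweep_sequential(alphas, grid):
    return [inner(a, grid) for a in alphas]

def sweep_parallel(alphas, grid, workers=4):
    # Outer-loop parallelization: one task per outer iterate, analogous to
    # placing an OpenMP "parallel for" on the outer loop.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda a: inner(a, grid), alphas))

alphas = [i / 10 for i in range(50)]
grid = [j / 7 for j in range(200)]
```

Because the outer iterations share no state, the parallel sweep returns exactly the sequential result; the paper's finding that the outer loop is the better region to parallelize follows from this coarser granularity.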
A comparison of PCA/ICA for data preprocessing in remote sensing imagery classification
NASA Astrophysics Data System (ADS)
He, Hui; Yu, Xianchuan
2005-10-01
In this paper a performance comparison of a variety of data preprocessing algorithms for remote sensing image classification is presented. The selected algorithms are principal component analysis (PCA) and three independent component analysis (ICA) variants: Fast-ICA (Hyvarinen, 1999), Kernel-ICA (KCCA and KGV; Bach & Jordan, 2002), and EFFICA (Chen & Bickel, 2003). These algorithms were applied to a remote sensing image (1600×1197) obtained from Shunyi, Beijing. For classification, a maximum likelihood classification (MLC) method is used on the raw and preprocessed data. The results show that classification with preprocessed data yields more reliable results than with raw data; among the preprocessing algorithms, the ICA algorithms improve on PCA, and EFFICA performs better than the others. The convergence of these ICA algorithms (for more than a million data points) is also studied; the results show that EFFICA converges much faster than the others. Furthermore, because EFFICA is a one-step maximum likelihood estimate (MLE) that reaches asymptotic Fisher efficiency, its computation is small and its memory demand is greatly reduced, which resolves the "out of memory" problem encountered with the other algorithms.
User-Assisted Store Recycling for Dynamic Task Graph Schedulers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan
The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because the recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
Spin-transfer torque switched magnetic tunnel junctions in magnetic random access memory
NASA Astrophysics Data System (ADS)
Sun, Jonathan Z.
2016-10-01
Spin-transfer torque (or spin-torque, STT) based magnetic tunnel junctions (MTJs) are at the heart of a new generation of magnetism-based solid-state memory, the so-called spin-transfer-torque magnetic random access memory, or STT-MRAM. Over the past decades, the STT-based switchable magnetic tunnel junction has seen progress on many fronts, including the discovery of (001) MgO as the most favored tunnel barrier, which together with (bcc) Fe or FeCo alloys yields the best demonstrated tunnel magnetoresistance (TMR); the development of perpendicularly magnetized ultrathin CoFeB-type thin films sufficient to support high-density memories, with junction sizes demonstrated down to 11 nm in diameter; and record-low spin-torque switching threshold currents, giving the best reported switching efficiency of over 5 kBT/μA. Here we review the basic device properties, focusing on perpendicularly magnetized MTJs, both in terms of switching efficiency as measured by sub-threshold, quasi-static methods, and of switching speed at super-threshold, forced switching. We focus on device behaviors important for memory applications that are rooted in fundamental device physics, highlighting the trade-off of device parameters for best system integration.
Image classification at low light levels
NASA Astrophysics Data System (ADS)
Wernick, Miles N.; Morris, G. Michael
1986-12-01
An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
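The maximum-likelihood reference function has a particularly simple form for photon-limited data: each detected photon contributes the log-likelihood ratio of the two class intensities at its position, and the sign of the summed score gives the decision. Below is a minimal 1D sketch; the two 8-pixel "images" and photon counts are invented for illustration, not taken from the paper.

```python
import math
import random

random.seed(11)

# Two "image classes" as normalized pixel intensity distributions.
P1 = [0.05, 0.05, 0.10, 0.30, 0.30, 0.10, 0.05, 0.05]
P2 = [0.30, 0.20, 0.10, 0.05, 0.05, 0.10, 0.15, 0.05]

def sample_photons(p, n):
    # Draw n photon pixel positions from intensity distribution p.
    photons = []
    for _ in range(n):
        u, acc = random.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if u <= acc:
                photons.append(i)
                break
    return photons

# Maximum-likelihood reference function: per-pixel log-likelihood ratio.
REF = [math.log(p1 / p2) for p1, p2 in zip(P1, P2)]

def classify(photons):
    # Correlating the photon-limited image with the reference reduces to
    # summing the reference value at each photon position.
    score = sum(REF[x] for x in photons)
    return 1 if score > 0 else 2

n_trials, n_photons, correct = 200, 40, 0
for _ in range(n_trials):
    correct += classify(sample_photons(P1, n_photons)) == 1
    correct += classify(sample_photons(P2, n_photons)) == 2
accuracy = correct / (2 * n_trials)
```

Even with only 40 photons per trial the sparse correlation discriminates the classes reliably, which mirrors the paper's point that useful decisions are available in milliseconds from a sparse sampling of the input image.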
Dissociable effects of top-down and bottom-up attention during episodic encoding
Uncapher, Melina R.; Hutchinson, J. Benjamin; Wagner, Anthony D.
2011-01-01
It is well established that the formation of memories for life’s experiences—episodic memory—is influenced by how we attend to those experiences, yet the neural mechanisms by which attention shapes episodic encoding are still unclear. We investigated how top-down and bottom-up attention contribute to memory encoding of visual objects in humans by manipulating both types of attention during functional magnetic resonance imaging (fMRI) of episodic memory formation. We show that dorsal parietal cortex—specifically, intraparietal sulcus (IPS)—was engaged during top-down attention and was also recruited during the successful formation of episodic memories. By contrast, bottom-up attention engaged ventral parietal cortex—specifically, temporoparietal junction (TPJ)—and was also more active during encoding failure. Functional connectivity analyses revealed further dissociations in how top-down and bottom-up attention influenced encoding: while both IPS and TPJ influenced activity in perceptual cortices thought to represent the information being encoded (fusiform/lateral occipital cortex), they each exerted opposite effects on memory encoding. Specifically, during a preparatory period preceding stimulus presentation, a stronger drive from IPS was associated with a higher likelihood that the subsequently attended stimulus would be encoded. By contrast, during stimulus processing, stronger connectivity with TPJ was associated with a lower likelihood the stimulus would be successfully encoded. These findings suggest that during encoding of visual objects into episodic memory, top-down and bottom-up attention can have opposite influences on perceptual areas that subserve visual object representation, suggesting that one manner in which attention modulates memory is by altering the perceptual processing of to-be-encoded stimuli. PMID:21880922
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Zhang, Zhao
With each CMOS technology generation, leakage energy consumption has been dramatically increasing; hence, managing the leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called a profiling cache, which dynamically predicts the energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy-efficient configuration. EnCache uses dynamic cache reconfiguration and hence does not require offline profiling or parameter tuning for each application. Furthermore, EnCache optimizes directly for overall memory subsystem (LLC and main memory) energy efficiency instead of LLC energy efficiency alone. Experiments performed with an x86-64 simulator and workloads from the SPEC2006 suite confirm that EnCache provides larger energy savings than a conventional energy saving scheme. For single-core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
Brain reserve and cognitive reserve protect against cognitive decline over 4.5 years in MS
Rocca, Maria A.; Leavitt, Victoria M.; Dackovic, Jelena; Mesaros, Sarlota; Drulovic, Jelena; DeLuca, John; Filippi, Massimo
2014-01-01
Objective: Based on the theories of brain reserve and cognitive reserve, we investigated whether larger maximal lifetime brain growth (MLBG) and/or greater lifetime intellectual enrichment protect against cognitive decline over time. Methods: Forty patients with multiple sclerosis (MS) underwent baseline and 4.5-year follow-up evaluations of cognitive efficiency (Symbol Digit Modalities Test, Paced Auditory Serial Addition Task) and memory (Selective Reminding Test, Spatial Recall Test). Baseline and follow-up MRIs quantified disease progression: percentage brain volume change (cerebral atrophy), percentage change in T2 lesion volume. MLBG (brain reserve) was estimated with intracranial volume; intellectual enrichment (cognitive reserve) was estimated with vocabulary. We performed repeated-measures analyses of covariance to investigate whether larger MLBG and/or greater intellectual enrichment moderate/attenuate cognitive decline over time, controlling for disease progression. Results: Patients with MS declined in cognitive efficiency and memory (p < 0.001). MLBG moderated decline in cognitive efficiency (p = 0.031, ηp² = 0.122), with larger MLBG protecting against decline. MLBG did not moderate memory decline (p = 0.234, ηp² = 0.039). Intellectual enrichment moderated decline in cognitive efficiency (p = 0.031, ηp² = 0.126) and memory (p = 0.037, ηp² = 0.115), with greater intellectual enrichment protecting against decline. MS disease progression was more negatively associated with change in cognitive efficiency and memory among patients with lower vs higher MLBG and intellectual enrichment. Conclusion: We provide longitudinal support for theories of brain reserve and cognitive reserve in MS. Larger MLBG protects against decline in cognitive efficiency, and greater intellectual enrichment protects against decline in cognitive efficiency and memory. Consideration of these protective factors should improve prediction of future cognitive decline in patients with MS.
PMID:24748670
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm, based on nonlinear conjugate gradient with a restart strategy, is proposed to increase computational speed with low memory consumption. Reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution, high signal-to-noise ratio, and high localization accuracy for fluorescence targets.
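The optimization strategy named here, nonlinear conjugate gradient with periodic restarts, can be sketched on a tiny smoothed-L1 denoising problem (an illustrative stand-in for the FMT inverse problem; the Fletcher-Reeves update, Armijo line search, and all constants are assumed choices, not the paper's):

```python
import math

def f(x, b, lam, eps):
    # Smoothed-L1 objective: 0.5*||x - b||^2 + lam * sum(sqrt(x_i^2 + eps)).
    return (0.5 * sum((xi - bi) ** 2 for xi, bi in zip(x, b))
            + lam * sum(math.sqrt(xi * xi + eps) for xi in x))

def grad(x, b, lam, eps):
    return [(xi - bi) + lam * xi / math.sqrt(xi * xi + eps)
            for xi, bi in zip(x, b)]

def ncg_restart(b, lam=0.5, eps=1e-6, iters=500, restart=10):
    x = [0.0] * len(b)
    g = grad(x, b, lam, eps)
    d = [-gi for gi in g]
    for k in range(1, iters + 1):
        gTd = sum(gi * di for gi, di in zip(g, d))
        if gTd >= 0:                        # safeguard: restart on a non-descent direction
            d = [-gi for gi in g]
            gTd = sum(gi * di for gi, di in zip(g, d))
        t, fx = 1.0, f(x, b, lam, eps)      # backtracking (Armijo) line search
        while f([xi + t * di for xi, di in zip(x, d)], b, lam, eps) > fx + 1e-4 * t * gTd:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x, b, lam, eps)
        if k % restart == 0:
            beta = 0.0                      # periodic restart to steepest descent
        else:                               # Fletcher-Reeves update
            beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

x_hat = ncg_restart([3.0, -0.05, 2.0, 0.0])
```

Only gradients and a few vectors are kept, which is the memory advantage the paper exploits; the minimizer approaches the soft-threshold of b (large entries shrink by lam, small entries collapse toward zero), preserving strong features while suppressing noise.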
A system-level approach for embedded memory robustness
NASA Astrophysics Data System (ADS)
Mariani, Riccardo; Boschi, Gabriele
2005-11-01
New ultra-deep submicron technologies bring not only new advantages, such as extraordinary transistor densities and unforeseen performance, but also new uncertainties, such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution, and increased sensitivity to internal and external disturbances. Nowadays, embedded memories take advantage of these new technologies and are increasingly used in systems: therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and this should be done in an efficient way. This means that reliability and costs, such as overhead and performance degradation, must be efficiently tuned based on the system and the application. Moreover, emerging norms for safety-critical applications such as IEC 61508 require precise answers in terms of robustness for memory systems as well. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, in which the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work has been developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.
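A classical protection technique of the kind referred to here is SECDED (single-error-correct, double-error-detect) coding, for example a Hamming(7,4) code extended with an overall parity bit. The sketch below is a minimal illustration on 4-bit data words; real embedded memories protect much wider words.

```python
# Hamming(7,4) plus overall parity = SECDED(8,4): corrects any single-bit
# error and detects (without miscorrecting) any double-bit error.

def encode(d):                      # d: 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]         # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]         # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]         # parity over positions 4,5,6,7
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
    overall = 0
    for b in word:
        overall ^= b
    return word + [overall]

def decode(w):
    word, overall = w[:7], w[7]
    syndrome = 0
    for i, b in enumerate(word, start=1):
        if b:
            syndrome ^= i           # XOR of set-bit positions = error position
    parity_ok = (sum(word) + overall) % 2 == 0
    if syndrome == 0 and parity_ok:
        status = 'ok'
    elif not parity_ok:
        status = 'corrected'        # single-bit error (possibly in the parity bit)
        if syndrome:
            word[syndrome - 1] ^= 1
    else:
        return None, 'double-error' # syndrome != 0 but parity even: uncorrectable
    return [word[2], word[4], word[5], word[6]], status

data = [1, 0, 1, 1]
cw = encode(data)
```

A single flipped bit is located by the syndrome and corrected; two flipped bits leave the overall parity intact but a nonzero syndrome, so the decoder flags the word instead of silently miscorrecting it.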
A chiral-based magnetic memory device without a permanent magnet
Dor, Oren Ben; Yochelis, Shira; Mathew, Shinto P.; Naaman, Ron; Paltiel, Yossi
2013-01-01
Several technologies are currently in use for computer memory devices. However, there is a need for a universal memory device that has high density, high speed and low power requirements. To this end, various types of magnetic-based technologies with a permanent magnet have been proposed. Recent charge-transfer studies indicate that chiral molecules act as an efficient spin filter. Here we utilize this effect to achieve a proof of concept for a new type of chiral-based magnetic-based Si-compatible universal memory device without a permanent magnet. More specifically, we use spin-selective charge transfer through a self-assembled monolayer of polyalanine to magnetize a Ni layer. This magnitude of magnetization corresponds to applying an external magnetic field of 0.4 T to the Ni layer. The readout is achieved using low currents. The presented technology has the potential to overcome the limitations of other magnetic-based memory technologies to allow fabricating inexpensive, high-density universal memory-on-chip devices. PMID:23922081
Using Self-Generated Cues to Facilitate Recall: A Narrative Review
Wheeler, Rebecca L.; Gabbert, Fiona
2017-01-01
We draw upon the Associative Network model of memory, as well as the principles of encoding-retrieval specificity, and cue distinctiveness, to argue that self-generated cue mnemonics offer an intuitive means of facilitating reliable recall of personally experienced events. The use of a self-generated cue mnemonic allows for the spreading activation nature of memory, whilst also presenting an opportunity to capitalize upon cue distinctiveness. Here, we present the theoretical rationale behind the use of this technique, and highlight the distinction between a self-generated cue and a self-referent cue in autobiographical memory research. We contrast this mnemonic with a similar retrieval technique, Mental Reinstatement of Context, which is recognized as the most effective mnemonic component of the Cognitive Interview. Mental Reinstatement of Context is based upon the principle of encoding-retrieval specificity, whereby the overlap between encoded information and retrieval cue predicts the likelihood of accurate recall. However, it does not incorporate the potential additional benefit of self-generated retrieval cues. PMID:29163254
A Memory Efficient Network Encryption Scheme
NASA Astrophysics Data System (ADS)
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we studied the two encryption schemes widely used in network applications. Shortcomings were found in both schemes: each either consumes more memory to gain high throughput or uses little memory at the cost of low throughput. As the number of Internet users grows each day, the need has arisen for a scheme that has low memory requirements and at the same time achieves high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
Content-specificity in verbal recall: a randomized controlled study.
Zirk-Sadowski, Jan; Szucs, Denes; Holmes, Joni
2013-01-01
In this controlled experiment we examined whether there are content effects in verbal short-term memory and working memory for verbal stimuli. Thirty-seven participants completed forward and backward digit and letter recall tasks, which were constructed to control for distance effects between stimuli. A maximum-likelihood mixed-effects logistic regression revealed main effects of direction of recall (forward vs backward) and content (digits vs letters). There was an interaction between type of recall and content, in which the recall of digits was superior to the recall of letters in verbal short-term memory but not in verbal working memory. These results demonstrate that the recall of information from verbal short-term memory is content-specific, whilst the recall of information from verbal working memory is content-general.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
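The relative-entropy criterion described above can be illustrated with a minimal sketch for a discretized parameter: the information gain of a candidate design is the divergence of the posterior from the prior, so designs that concentrate the posterior score higher. The bin probabilities below are made-up illustration values, not from the study.

```python
import numpy as np

def relative_entropy(posterior, prior, eps=1e-12):
    """Discrete relative entropy (KL divergence) D(posterior || prior).

    Both inputs are probability vectors over the same parameter bins; the
    information gain of a measurement is the divergence between the
    parameter distribution after and before observing the data.
    """
    p = np.asarray(posterior, dtype=float)
    q = np.asarray(prior, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# A design whose measurement sharpens the posterior yields a larger gain:
prior = np.ones(4) / 4.0
sharp = np.array([0.85, 0.05, 0.05, 0.05])   # informative measurement
flat = np.array([0.30, 0.25, 0.25, 0.20])    # weakly informative one
assert relative_entropy(sharp, prior) > relative_entropy(flat, prior)
```

In the study's setting this gain is averaged over possible observations at each candidate well location, which is where the surrogate model pays off.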
A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement
Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong
2016-01-01
Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates. PMID:26763827
A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement.
Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong
2016-01-01
Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates.
Nishiguchi, Shu; Yamada, Minoru; Tanigawa, Takanori; Sekiyama, Kaoru; Kawagoe, Toshikazu; Suzuki, Maki; Yoshikawa, Sakiko; Abe, Nobuhito; Otsuka, Yuki; Nakai, Ryusuke; Aoyama, Tomoki; Tsuboyama, Tadao
2015-07-01
To investigate whether a 12-week physical and cognitive exercise program can improve cognitive function and brain activation efficiency in community-dwelling older adults. Randomized controlled trial. Kyoto, Japan. Community-dwelling older adults (N = 48) were randomized into an exercise group (n = 24) and a control group (n = 24). Exercise group participants received a weekly dual task-based multimodal exercise class in combination with pedometer-based daily walking exercise during the 12-week intervention phase. Control group participants did not receive any intervention and were instructed to spend their time as usual during the intervention phase. The outcome measures were global cognitive function, memory function, executive function, and brain activation (measured using functional magnetic resonance imaging) associated with visual short-term memory. Exercise group participants had significantly greater postintervention improvement in memory and executive functions than the control group (P < .05). In addition, after the intervention, less activation was found in several brain regions associated with visual short-term memory, including the prefrontal cortex, in the exercise group (P < .001, uncorrected). A 12-week physical and cognitive exercise program can improve the efficiency of brain activation during cognitive tasks in older adults, which is associated with improvements in memory and executive function. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and space (memory)-consuming due to the super-large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is used to develop a stable and efficient bundle block adjustment system able to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 datasets of real data are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
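The PCG inner loop at the heart of such a system can be sketched as follows. This is a generic Jacobi-preconditioned solver on a small dense matrix standing in for the block-compressed normal matrix; the BSMC storage format itself is not reproduced here, and in practice the product A @ p would be evaluated through the sparse block storage.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for an SPD system A x = b.

    M_inv_diag holds 1/diag(A), the simplest preconditioner; in bundle
    adjustment A would be the (block-compressed) normal matrix.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # conjugate update of the direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
assert np.allclose(A @ x, b)
```

The memory win comes from never forming the dense normal matrix: only nonzero blocks and the preconditioner diagonal are stored.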
Enabling the High Level Synthesis of Data Analytics Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino
Conventional High Level Synthesis (HLS) tools mainly target compute-intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread-level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.
Efficient Exploration of the Space of Reconciled Gene Trees
Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-01-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions, respectively, in the mean numbers of duplications, transfers, and losses per gene family.
The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510
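The amalgamation idea rests on the clades observed in a sample of gene trees. A minimal sketch of tallying clade frequencies from such a sample, using nested tuples as a toy tree representation (not ALE's actual data structures), might look like:

```python
from collections import Counter

def clades(tree):
    """Return the set of clades (frozensets of leaf names) of a
    binary tree given as nested 2-tuples with string leaves."""
    if isinstance(tree, str):
        return {frozenset([tree])}
    sub = set()
    leaves = frozenset()
    for child in tree:
        child_clades = clades(child)
        sub |= child_clades
        # the largest clade of a child is that child's full leaf set
        leaves |= max(child_clades, key=len)
    sub.add(leaves)
    return sub

# A toy sample of gene trees, e.g. from a Bayesian posterior:
sample = [
    (("A", "B"), ("C", "D")),
    (("A", "B"), ("C", "D")),
    (("A", "C"), ("B", "D")),
]
counts = Counter()
for t in sample:
    counts.update(clades(t))
freq = {cl: n / len(sample) for cl, n in counts.items()}
assert freq[frozenset({"A", "B"})] == 2 / 3
```

ALE then sums likelihoods over all trees that can be assembled ("amalgamated") from such observed clades, weighting by these frequencies rather than enumerating tree space blindly.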
Hardware Implementation of Serially Concatenated PPM Decoder
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael
2009-01-01
A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors.
To lower the required memory for storing channel likelihood data and the amount of data transferred between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in the operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows sums of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
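The max-star function mentioned above computes log(e^x + e^y) as a maximum plus a small tabulated correction term, which is what makes log-domain BCJR cheap in hardware. A minimal sketch, with an assumed table step of 0.5 rather than the prototype's actual quantization, is:

```python
import math

# Lookup table for the correction term log(1 + exp(-d)), tabulated at
# coarse steps of d = |x - y|; a handful of entries gives good accuracy,
# since the term decays quickly toward zero.
STEP = 0.5
TABLE = [math.log1p(math.exp(-i * STEP)) for i in range(16)]

def max_star(x, y):
    """max*(x, y) = log(e^x + e^y), via max + correction table lookup."""
    d = abs(x - y)
    idx = int(d / STEP)
    corr = TABLE[idx] if idx < len(TABLE) else 0.0
    return max(x, y) + corr

# The approximation tracks the exact log-sum-exp closely:
exact = math.log(math.exp(1.2) + math.exp(0.7))
assert abs(max_star(1.2, 0.7) - exact) < 0.1
```

In the FPGA the same structure reduces to an adder, a subtractor, and a small ROM, avoiding any exponential or logarithm hardware in the iterative loop.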
del Río, David; Cuesta, Pablo; Bajo, Ricardo; García-Pacios, Javier; López-Higes, Ramón; del-Pozo, Francisco; Maestú, Fernando
2012-11-01
Inter-individual differences in cognitive performance are based on an efficient use of task-related brain resources. However, little is known yet on how these differences might be reflected on resting-state brain networks. Here we used Magnetoencephalography resting-state recordings to assess the relationship between a behavioral measurement of verbal working memory and functional connectivity as measured through Mutual Information. We studied theta (4-8 Hz), low alpha (8-10 Hz), high alpha (10-13 Hz), low beta (13-18 Hz) and high beta (18-30 Hz) frequency bands. A higher verbal working memory capacity was associated with a lower mutual information in the low alpha band, prominently among right-anterior and left-lateral sensors. The results suggest that an efficient brain organization in the domain of verbal working memory might be related to a lower resting-state functional connectivity across large-scale brain networks possibly involving right prefrontal and left perisylvian areas. Copyright © 2012 Elsevier B.V. All rights reserved.
Evolutionary genetic analyses of MEF2C gene: implications for learning and memory in Homo sapiens.
Kalmady, Sunil V; Venkatasubramanian, Ganesan; Arasappa, Rashmi; Rao, Naren P
2013-02-01
MEF2C facilitates context-dependent fear conditioning (CFC) which is a salient aspect of hippocampus-dependent learning and memory. CFC might have played a crucial role in human evolution because of its advantageous influence on survival of species. In this study, we analyzed 23 orthologous mammalian gene sequences of MEF2C gene to examine the evidence for positive selection on this gene in Homo sapiens using Phylogenetic Analysis by Maximum Likelihood (PAML) and HyPhy software. Both PAML Bayes Empirical Bayes (BEB) and HyPhy Fixed Effects Likelihood (FEL) analyses supported significant positive selection on 4 codon sites in H. sapiens. Also, haplotter analysis revealed significant ongoing positive selection on this gene in Central European population. The study findings suggest that adaptive selective pressure on this gene might have influenced human evolution. Further research on this gene might unravel the potential role of this gene in learning and memory as well as its pathogenetic effect in certain hippocampal disorders with evolutionary basis like schizophrenia. Copyright © 2012 Elsevier B.V. All rights reserved.
Fawcett, Jonathan M; Lawrence, Michael A; Taylor, Tracy L
2016-01-01
We investigated whether intentional forgetting impacts only the likelihood of later retrieval from long-term memory or whether it also impacts the fidelity of those representations that are successfully retrieved. We accomplished this by combining an item-method directed forgetting task with a testing procedure and modeling approach inspired by the delayed-estimation paradigm used in the study of visual short-term memory (STM). Abstract or concrete colored images were each followed by a remember (R) or forget (F) instruction and sometimes by a visual probe requiring a speeded detection response (E1-E3). Memory was tested using an old-new (E1-E2) or remember-know-no (E3) recognition task followed by a continuous color judgment task (E2-E3); a final experiment included only the color judgment task (E4). Replicating the existing literature, more "old" or "remember" responses were made to R than F items and RTs to postinstruction visual probes were longer following F than R instructions. Color judgments were more accurate for successfully recognized or recollected R than F items (E2-E3); a mixture model confirmed a decrease to both the probability of retrieving the F items as well as the fidelity of the representation of those F items that were retrieved (E4). We conclude that intentional forgetting is an effortful process that not only reduces the likelihood of successfully encoding an item for later retrieval, but also produces an impoverished memory trace even when those items are retrieved; these findings draw a parallel between the control of memory representations within working and long-term memory. (c) 2015 APA, all rights reserved.
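A common form of the mixture model used in delayed-estimation analyses combines a uniform "guessing" component with a von Mises "memory" component over circular color errors; the probability of retrieval and the fidelity of retrieved items map onto the mixture weight and the concentration, respectively. The sketch below illustrates the idea with made-up parameters and simulated errors, not the study's fitted values.

```python
import numpy as np

def mixture_loglik(errors, guess_rate, kappa):
    """Log-likelihood of recall errors (radians, in [-pi, pi]) under a
    guessing-plus-memory mixture: with probability guess_rate the response
    is uniform on the color circle, otherwise von Mises around the
    true color with concentration kappa (higher kappa = higher fidelity)."""
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    return float(np.sum(np.log((1 - guess_rate) * vm + guess_rate * uniform)))

rng = np.random.default_rng(0)
tight = rng.vonmises(0.0, 8.0, size=200)  # precise memory, few guesses
# Concentrated errors are better explained by a low guess rate:
assert mixture_loglik(tight, 0.1, 8.0) > mixture_loglik(tight, 0.9, 8.0)
```

Fitting guess_rate and kappa (e.g., by maximizing this log-likelihood) separately for R and F items is what lets such analyses distinguish "fewer items retrieved" from "retrieved items are fuzzier".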
Anatomical Coupling between Distinct Metacognitive Systems for Memory and Visual Perception
McCurdy, Li Yan; Maniscalco, Brian; Metcalfe, Janet; Liu, Ka Yuet; de Lange, Floris P.; Lau, Hakwan
2015-01-01
A recent study found that, across individuals, gray matter volume in the frontal polar region was correlated with visual metacognition capacity (i.e., how well one’s confidence ratings distinguish between correct and incorrect judgments). A question arises as to whether the putative metacognitive mechanisms in this region are also used in other metacognitive tasks involving, for example, memory. A novel psychophysical measure allowed us to assess metacognitive efficiency separately in a visual and a memory task, while taking variations in basic task performance capacity into account. We found that, across individuals, metacognitive efficiencies positively correlated between the two tasks. However, voxel-based morphometry analysis revealed distinct brain structures for the two kinds of metacognition. Replicating a previous finding, variation in visual metacognitive efficiency was correlated with volume of frontal polar regions. However, variation in memory metacognitive efficiency was correlated with volume of the precuneus. There was also a weak correlation between visual metacognitive efficiency and precuneus volume, which may account for the behavioral correlation between visual and memory metacognition (i.e., the precuneus may contain common mechanisms for both types of metacognition). However, we also found that gray matter volumes of the frontal polar and precuneus regions themselves correlated across individuals, and a formal model comparison analysis suggested that this structural covariation was sufficient to account for the behavioral correlation of metacognition in the two tasks. These results highlight the importance of the precuneus in higher-order memory processing and suggest that there may be functionally distinct metacognitive systems in the human brain. PMID:23365229
Cona, Giorgia; Scarpazza, Cristina; Sartori, Giuseppe; Moscovitch, Morris; Bisiacchi, Patrizia Silvia
2015-05-01
Remembering to realize delayed intentions is a multi-phase process, labelled as prospective memory (PM), and involves a plurality of neural networks. The present study utilized the activation likelihood estimation method of meta-analysis to provide a complete overview of the brain regions that are consistently activated in each PM phase. We formulated the 'Attention to Delayed Intention' (AtoDI) model to explain the neural dissociation found between intention maintenance and retrieval phases. The dorsal frontoparietal network is involved mainly in the maintenance phase and seems to mediate the strategic monitoring processes, such as the allocation of top-down attention both towards external stimuli, to monitor for the occurrence of the PM cues, and to internal memory contents, to maintain the intention active in memory. The ventral frontoparietal network is recruited in the retrieval phase and might subserve the bottom-up attention captured externally by the PM cues and, internally, by the intention stored in memory. Together with other brain regions (i.e., insula and posterior cingulate cortex), the ventral frontoparietal network would support the spontaneous retrieval processes. The functional contribution of the anterior prefrontal cortex is discussed extensively for each PM phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.
2018-01-01
Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…
Donders, Jacobus; DeWit, Christin
2017-07-01
This study aimed to evaluate the degree to which the Behavior Rating Inventory of Executive Function (BRIEF) and Child Behavior Checklist (CBCL) measure overlapping vs. distinct constructs in pediatric patients with mild traumatic brain injury (TBI), and to examine the demographic and injury correlates of such constructs as well as those of cognitive test performance. A total of 100 parents completed the BRIEF and the CBCL within 1 to 12 months after the injury of their child. Groups were contrasted based on the presence vs. absence of impairment on, respectively, the BRIEF and the CBCL. Exploratory maximum likelihood factor analysis was used to evaluate latent constructs. Correlates of the various factor scores were evaluated through regression analysis and contrasted with those of a test of verbal learning and memory. The results revealed that the BRIEF and the CBCL disagree about the presence vs. absence of impairment in about one quarter of cases. A prior history of attention deficit/hyperactivity disorder (ADHD) was associated with an increased likelihood of impairment on both the BRIEF and the CBCL, whereas prior outpatient psychiatric treatment was associated with the increased likelihood of selective impairment on the CBCL. Latent constructs manifested themselves along cognitive regulation, emotional adjustment and behavioral regulation factors. Whereas premorbid characteristics were the exclusive correlates of these factors, performance on a test of verbal learning and memory was negatively affected by intracranial lesions on neuroimaging. It is concluded that the BRIEF and the CBCL offer complementary and non-redundant information about daily functioning after pediatric mild TBI. The correlates of cognitive test performance and parental behavior ratings after such injuries are different and reflect a divergence between premorbid and injury-related influences.
Wang, Danying; Clouter, Andrew; Chen, Qiaoyu; Shapiro, Kimron L; Hanslmayr, Simon
2018-06-13
Episodic memories are rich in sensory information and often contain integrated information from different sensory modalities. For instance, we can store memories of a recent concert with visual and auditory impressions being integrated in one episode. Theta oscillations have recently been implicated in playing a causal role synchronizing and effectively binding the different modalities together in memory. However, an open question is whether momentary fluctuations in theta synchronization predict the likelihood of associative memory formation for multisensory events. To address this question we entrained the visual and auditory cortex at theta frequency (4 Hz) and in a synchronous or asynchronous manner by modulating the luminance and volume of movies and sounds at 4 Hz, with a phase offset at 0° or 180°. EEG activity from human subjects (both sexes) was recorded while they memorized the association between a movie and a sound. Associative memory performance was significantly enhanced in the 0° compared to the 180° condition. Source-level analysis demonstrated that the physical stimuli effectively entrained their respective cortical areas with a corresponding phase offset. The findings suggested a successful replication of a previous study (Clouter et al., 2017). Importantly, the strength of entrainment during encoding correlated with the efficacy of associative memory such that small phase differences between visual and auditory cortex predicted a high likelihood of correct retrieval in a later recall test. These findings suggest that theta oscillations serve a specific function in the episodic memory system: Binding the contents of different modalities into coherent memory episodes. SIGNIFICANCE STATEMENT How multi-sensory experiences are bound to form a coherent episodic memory representation is one of the fundamental questions in human episodic memory research. 
Evidence from animal literature suggests that the relative timing between an input and theta oscillations in the hippocampus is crucial for memory formation. We precisely controlled the timing between visual and auditory stimuli and the neural oscillations at 4 Hz using a multisensory entrainment paradigm. Human associative memory formation depends on coincident timing between sensory streams processed by the corresponding brain regions. We provide evidence for a significant role of relative timing of neural theta activity in human episodic memory on a single trial level, which reveals a crucial mechanism underlying human episodic memory. Copyright © 2018 the authors.
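The entrainment manipulation described above can be sketched as two 4 Hz amplitude envelopes with a relative phase offset of 0° or 180°. This is an illustrative reconstruction of the stimulus design, not the authors' presentation code; the function name and parameters are assumptions.

```python
# Illustrative sketch: 4 Hz modulation envelopes for the visual (luminance)
# and auditory (volume) streams, with a 0 or 180 degree phase offset.
import math

def envelope(t, freq=4.0, phase_deg=0.0):
    """Sinusoidal modulation envelope in [0, 1] at time t (seconds)."""
    phase = math.radians(phase_deg)
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * freq * t + phase))

t = 0.0625  # a quarter of one 4 Hz cycle
visual = envelope(t, phase_deg=0.0)
audio_sync = envelope(t, phase_deg=0.0)    # synchronous condition
audio_anti = envelope(t, phase_deg=180.0)  # antiphase condition
```

In the synchronous condition the two envelopes peak together; in the antiphase condition one stream is at its trough when the other peaks.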
An English Vocabulary Learning System Based on Fuzzy Theory and Memory Cycle
NASA Astrophysics Data System (ADS)
Wang, Tzone I.; Chiu, Ti Kai; Huang, Liang Jun; Fu, Ru Xuan; Hsieh, Tung-Cheng
This paper proposes an English Vocabulary Learning System based on Fuzzy Theory and the Memory Cycle Theory to help learners memorize vocabulary easily. By using fuzzy inferences and personal memory cycles, the system can find the article that best suits a learner. After reading an article, a quiz is provided for the learner to reinforce memory of the vocabulary in the article. Earlier research used only explicit responses (e.g., quiz results) to update the memory cycles of newly learned vocabulary; in addition to that approach, this paper proposes a methodology that also implicitly modifies the memory cycles of learned words. By intensively reading articles recommended by this approach, a learner learns new words quickly and implicitly reviews learned words as well, so that the learner's vocabulary improves efficiently.
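A memory-cycle update of the kind the abstract describes can be sketched as a spaced-repetition rule: the review interval for a word lengthens after a correct quiz answer and shortens after a miss. This is a hypothetical sketch, not the authors' system; the function and its growth/decay parameters are assumptions.

```python
# Hypothetical memory-cycle update rule (illustrative, not the paper's code):
# each word carries a review interval that grows when the learner remembers
# it and shrinks when the learner forgets it.

def update_cycle(interval_days, correct, growth=2.0, decay=0.5, min_interval=1.0):
    """Return the next review interval (days) for a word after a quiz."""
    if correct:
        return interval_days * growth                    # remembered: lengthen cycle
    return max(min_interval, interval_days * decay)      # forgotten: shorten cycle

interval = 1.0
for answer in [True, True, False, True]:
    interval = update_cycle(interval, answer)
```

An implicit update, as proposed in the paper, would call the same rule when a known word is merely re-encountered in a recommended article rather than explicitly quizzed.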
Rzucidlo, Justyna K; Roseman, Paige L; Laurienti, Paul J; Dagenbach, Dale
2013-01-01
Graph-theory based analyses of resting state functional Magnetic Resonance Imaging (fMRI) data have been used to map the network organization of the brain. While numerous analyses of resting state brain organization exist, many questions remain unexplored. The present study examines the stability of findings based on this approach over repeated resting state and working memory state sessions within the same individuals. This allows assessment of stability of network topology within the same state for both rest and working memory, and between rest and working memory as well. fMRI scans were performed on five participants while at rest and while performing the 2-back working memory task five times each, with task state alternating while they were in the scanner. Voxel-based whole brain network analyses were performed on the resulting data along with analyses of functional connectivity in regions associated with resting state and working memory. Network topology was fairly stable across repeated sessions of the same task, but varied significantly between rest and working memory. In the whole brain analysis, local efficiency, Eloc, differed significantly between rest and working memory. Analyses of network statistics for the precuneus and dorsolateral prefrontal cortex revealed significant differences in degree as a function of task state for both regions and in local efficiency for the precuneus. Conversely, no significant differences were observed across repeated sessions of the same state. These findings suggest that network topology is fairly stable within individuals across time for the same state, but also fluid between states. Whole brain voxel-based network analyses may prove to be a valuable tool for exploring how functional connectivity changes in response to task demands.
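The efficiency metrics reported above have simple definitions: global efficiency is the average inverse shortest-path length over node pairs, and a node's local efficiency is the global efficiency of the subgraph induced by its neighbours. A minimal pure-Python sketch (real fMRI analyses use dedicated network toolboxes):

```python
# Minimal sketch of the graph-efficiency metrics used in such analyses.
from collections import deque

def shortest_paths(adj, src):
    """BFS hop distances from src; unreachable nodes are absent."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        d = shortest_paths(adj, u)
        total += sum(1.0 / d[v] for v in d if v != u)
    return total / (n * (n - 1))

def local_efficiency(adj, u):
    """Global efficiency of the subgraph induced by u's neighbours."""
    nbrs = set(adj[u])
    sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
    return global_efficiency(sub)

# Toy graph: a triangle (0,1,2) plus a pendant node 3
G = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
```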
LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly.
Bonizzoni, Paola; Vedova, Gianluca Della; Pirola, Yuri; Previtali, Marco; Rizzi, Raffaella
2016-03-01
The large amount of short read data that has to be assembled in future applications, such as in metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to index next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows-Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that keeps its main memory requirement independent of the size of the dataset. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-of-the-art string graph-based assembler, and uses BEETL for indexing the input data. LSG is open source software and is available online. We have analyzed our implementation on an 875-million read whole-genome dataset, on which LSG has built the string graph using only 1 GB of main memory (reducing the memory occupation by a factor of 50 with respect to SGA), while requiring slightly more than twice the running time of SGA. The analysis of the entire pipeline shows an important decrease in memory usage, with only a moderate increase in running time.
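The BWT/FM-index machinery that LSG and BEETL build on can be illustrated with a naive in-memory sketch (real tools construct the transform in external memory for billions of reads; `bwt` and `rank_table` here are illustrative names, not LSG's API):

```python
# Naive Burrows-Wheeler transform via rotation sort (O(n^2 log n); for
# illustration only -- external-memory indexers avoid materializing rotations).

def bwt(text, sentinel="$"):
    """Return the BWT of text by sorting all cyclic rotations."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def rank_table(b):
    """Per-position occurrence counts, the core of FM-index backward search."""
    counts, table = {}, []
    for ch in b:
        counts[ch] = counts.get(ch, 0) + 1
        table.append(dict(counts))
    return table

transformed = bwt("banana")
```

Backward search over the rank table lets an assembler find all reads containing a query suffix without decompressing the read set.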
A review of emerging non-volatile memory (NVM) technologies and applications
NASA Astrophysics Data System (ADS)
Chen, An
2016-11-01
This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The project develops algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.
Imbo, Ineke; Vandierendonck, André
2007-04-01
The current study tested the development of working memory involvement in children's arithmetic strategy selection and strategy efficiency. To this end, an experiment in which the dual-task method and the choice/no-choice method were combined was administered to 10- to 12-year-olds. Working memory was needed in retrieval, transformation, and counting strategies, but the ratio between available working memory resources and arithmetic task demands changed across development. More frequent retrieval use, more efficient memory retrieval, and more efficient counting processes reduced the working memory requirements. Strategy efficiency and strategy selection were also modified by individual differences such as processing speed, arithmetic skill, gender, and math anxiety. Short-term memory capacity, in contrast, was not related to children's strategy selection or strategy efficiency.
Oyarzún, Javiera P; Morís, Joaquín; Luque, David; de Diego-Balaguer, Ruth; Fuentemilla, Lluís
2017-08-09
System memory consolidation is conceptualized as an active process whereby newly encoded memory representations are strengthened through selective memory reactivation during sleep. However, our learning experience is highly overlapping in content (i.e., shares common elements), and memories of these events are organized in an intricate network of overlapping associated events. It remains to be explored whether and how selective memory reactivation during sleep has an impact on these overlapping memories acquired during awake time. Here, we test in a group of adult women and men the prediction that selective memory reactivation during sleep entails the reactivation of associated events and that this may lead the brain to adaptively regulate whether these associated memories are strengthened or pruned from memory networks on the basis of their relative associative strength with the shared element. Our findings demonstrate the existence of efficient regulatory neural mechanisms governing how complex memory networks are shaped during sleep as a function of their associative memory strength. SIGNIFICANCE STATEMENT Numerous studies have demonstrated that system memory consolidation is an active, selective, and sleep-dependent process in which only subsets of new memories become stabilized through their reactivation. However, the learning experience is highly overlapping in content and thus events are encoded in an intricate network of related memories. It remains to be explored whether and how memory reactivation has an impact on overlapping memories acquired during awake time. Here, we show that sleep memory reactivation promotes strengthening and weakening of overlapping memories based on their associative memory strength. These results suggest the existence of an efficient regulatory neural mechanism that avoids the formation of cluttered memory representation of multiple events and promotes stabilization of complex memory networks. 
Copyright © 2017 the authors.
Age-specific effects of voluntary exercise on memory and the older brain.
Siette, Joyce; Westbrook, R Frederick; Cotman, Carl; Sidhu, Kuldip; Zhu, Wanlin; Sachdev, Perminder; Valenzuela, Michael J
2013-03-01
Physical exercise in early adulthood and mid-life improves cognitive function and enhances brain plasticity, but the effects of commencing exercise in late adulthood are not well-understood. We investigated the effects of voluntary exercise in the restoration of place recognition memory in aged rats and examined hippocampal changes of synaptic density and neurogenesis. We found a highly selective age-related deficit in place recognition memory that is stable across retest sessions and correlates strongly with loss of hippocampal synapses. Additionally, 12 weeks of voluntary running at 20 months of age removed the deficit in the hippocampally dependent place recognition memory. Voluntary running restored presynaptic density in the dentate gyrus and CA3 hippocampal subregions in aged rats to levels beyond those observed in younger animals, in which exercise had no functional or synaptic effects. By contrast, hippocampal neurogenesis, a possible memory-related mechanism, increased in both young and aged rats after physical exercise but was not linked with performance in the place recognition task. We used graph-based network analysis based on synaptic covariance patterns to characterize efficient intrahippocampal connectivity. This analysis revealed that voluntary running completely reverses the profound degradation of hippocampal network efficiency that accompanies sedentary aging. Furthermore, at an individual animal level, both overall hippocampal presynaptic density and subregional connectivity independently contribute to prediction of successful place recognition memory performance. Our findings emphasize the unique synaptic effects of exercise on the aged brain and their specific relevance to a hippocampally based memory system for place recognition. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
The efficiency of multimedia learning into old age.
Van Gerven, Pascal W M; Paas, Fred; Van Merriënboer, Jeroen J G; Hendriks, Maaike; Schmidt, Henk G
2003-12-01
On the basis of a multimodal model of working memory, cognitive load theory predicts that a multimedia-based instructional format leads to a better acquisition of complex subject matter than a purely visual instructional format. This study investigated the extent to which age and instructional format had an impact on training efficiency among both young and old adults. It was hypothesised that studying worked examples that are presented as a narrated animation (multimedia condition) is a more efficient means of complex skill training than studying visually presented worked examples (unimodal condition) and solving conventional problems. Furthermore, it was hypothesised that multimedia-based worked examples are especially helpful for elderly learners, who have to deal with a general decline of working-memory resources, because they address both mode-specific working-memory stores. The sample consisted of 60 young (mean age = 15.98 years) and 60 old adults (mean age = 64.48 years). Participants of both age groups were trained in either a conventional, a unimodal, or a multimedia condition. Subsequently, they had to solve a series of test problems. Dependent variables were perceived cognitive load during the training, performance on the test, and efficiency in terms of the ratio between these two variables. Results showed that for both age groups multimedia-based worked examples were more efficient than the other training formats in that less cognitive load led to at least an equal performance level. Although no difference in the beneficial effect of multimedia learning was found between the age groups, multimedia-based instructions seem promising for the elderly.
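The efficiency measure referred to above combines standardized performance and standardized cognitive load; a common formulation (due to Paas and van Merriënboer) is E = (z_performance − z_effort) / √2. A sketch under that assumed formula, with illustrative data:

```python
# Sketch of a training-efficiency score assuming the standard formulation
# E = (z_performance - z_effort) / sqrt(2). Data values are illustrative.
import math

def zscores(xs):
    """Standardize a list of scores (population standard deviation)."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def efficiency(performance, effort):
    """Per-condition efficiency: high performance at low effort scores high."""
    zp, ze = zscores(performance), zscores(effort)
    return [(p - e) / math.sqrt(2) for p, e in zip(zp, ze)]

# Three hypothetical training conditions: test score and rated mental effort
E = efficiency([50, 70, 90], [6, 5, 4])
```

A condition with above-average performance and below-average load (like the third here) lands above zero, which is the pattern the multimedia condition showed.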
A fast sequence assembly method based on compressed data structures.
Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu
2014-01-01
Assembling a large genome from next-generation sequencing reads requires large computer memory and long execution times. To reduce these requirements, a memory- and time-efficient assembler, called FMJ-Assembler, is presented, obtained by applying an FM-index in JR-Assembler; FM stands for the FMR-index, derived from the FM-index and the BWT, and J stands for jumping extension. The FMJ-Assembler uses the expanded FM-index and BWT to compress read data and save memory, while the jumping-extension method reduces CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that the FMJ-Assembler achieves a better or comparable overall assembly quality while requiring lower memory use and less CPU time. All these advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing data.
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with the state-of-the-art approaches.
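The two compression ideas the abstract combines, pruning most connections and encoding survivors as binary integers, can be sketched generically. This is not the DANoC algorithm; the function, the keep ratio, and the shared-scale encoding are illustrative assumptions.

```python
# Generic sketch of magnitude pruning plus 1-bit weight encoding
# (illustrative; not the paper's deep adaptive network).

def prune_and_binarize(weights, keep_ratio=0.1):
    """Keep the largest-|w| fraction; encode each survivor as +/-1 * scale."""
    k = max(1, int(len(weights) * keep_ratio))
    kept = sorted(weights, key=abs, reverse=True)[:k]
    scale = sum(abs(w) for w in kept) / k            # shared magnitude
    coded = [1 if w > 0 else -1 for w in kept]       # 1-bit weights
    return coded, scale

coded, scale = prune_and_binarize([0.9, -0.05, 0.02, -0.8, 0.01], keep_ratio=0.4)
```

Storing a sign bit plus one shared scale per layer is what makes an on-chip sparse-mapping memory small enough to sit next to the compute units.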
The influence of activation level on belief bias in relational reasoning.
Banks, Adrian P
2013-04-01
A novel explanation of belief bias in relational reasoning is presented based on the role of working memory and retrieval in deductive reasoning, and the influence of prior knowledge on this process. It is proposed that belief bias is caused by the believability of a conclusion in working memory which influences its activation level, determining its likelihood of retrieval and therefore its effect on the reasoning process. This theory explores two main influences of belief on the activation levels of these conclusions. First, believable conclusions have higher activation levels and so are more likely to be recalled during the evaluation of reasoning problems than unbelievable conclusions, and therefore, they have a greater influence on the reasoning process. Secondly, prior beliefs about the conclusion have a base level of activation and may be retrieved when logically irrelevant, influencing the evaluation of the problem. The theory of activation and memory is derived from the Atomic Components of Thought-Rational (ACT-R) cognitive architecture and so this account is formalized in an ACT-R cognitive model. Two experiments were conducted to test predictions of this model. Experiment 1 tested strength of belief and Experiment 2 tested the impact of a concurrent working memory load. Both of these manipulations increased the main effect of belief overall and in particular raised belief-based responding in indeterminately invalid problems. These effects support the idea that the activation level of conclusions formed during reasoning influences belief bias. This theory adds to current explanations of belief bias by providing a detailed specification of the role of working memory and how it is influenced by prior knowledge. Copyright © 2012 Cognitive Science Society, Inc.
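The activation mechanism described above follows the standard ACT-R equations: base-level activation B = ln(Σ_j t_j^(−d)) over past uses, and a logistic retrieval probability P = 1 / (1 + exp(−(A − τ)/s)). A sketch under those standard formulas, with illustrative parameter values (this is not the authors' model code):

```python
# Sketch of ACT-R-style base-level activation and retrieval probability.
# Parameter values (d, tau, s) and the usage histories are illustrative.
import math

def base_level_activation(lags, d=0.5):
    """B = ln(sum of t^-d) over time lags (seconds) since each past use."""
    return math.log(sum(t ** -d for t in lags))

def retrieval_probability(activation, tau=0.0, s=0.4):
    """Logistic probability that a chunk beats the retrieval threshold."""
    return 1.0 / (1.0 + math.exp(-(activation - tau) / s))

# A believable conclusion, rehearsed often and recently...
believable = base_level_activation([10, 60, 300])
# ...versus an unbelievable one encountered once, long ago.
unbelievable = base_level_activation([3600])
```

Because the believable conclusion's activation is higher, it is more likely to be retrieved during conclusion evaluation, which is the proposed source of belief bias.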
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
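The HGT-oriented likelihood criteria extend the standard phylogenetic likelihood, which is computed per site by Felsenstein's pruning algorithm. A minimal single-site sketch under the Jukes-Cantor model (illustrative; the article's network likelihoods generalize this tree likelihood):

```python
# Minimal Felsenstein pruning for one alignment site on a fixed rooted
# binary tree under the Jukes-Cantor substitution model.
import math

def jc_prob(same, t, mu=1.0):
    """Jukes-Cantor transition probability along a branch of length t."""
    e = math.exp(-4.0 * mu * t / 3.0)
    return 0.25 + 0.75 * e if same else 0.25 - 0.25 * e

def partial(node, tips, tree):
    """Conditional likelihoods P(data below node | state) for A, C, G, T."""
    if node in tips:  # leaf: indicator vector on the observed base
        return [1.0 if b == tips[node] else 0.0 for b in "ACGT"]
    (left, tl), (right, tr) = tree[node]
    pl, pr = partial(left, tips, tree), partial(right, tips, tree)
    out = []
    for i in range(4):
        sl = sum(jc_prob(i == j, tl) * pl[j] for j in range(4))
        sr = sum(jc_prob(i == j, tr) * pr[j] for j in range(4))
        out.append(sl * sr)
    return out

tips = {"sp1": "A", "sp2": "A", "sp3": "C"}
tree = {"root": [("anc", 0.1), ("sp3", 0.3)],
        "anc": [("sp1", 0.1), ("sp2", 0.1)]}
site_likelihood = 0.25 * sum(partial("root", tips, tree))  # uniform root prior
```

An HGT event adds an alternative parent for a subtree, so the network likelihood sums this quantity over the possible donor edges.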
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
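The simulations being accelerated are of the Gillespie/KMC type, which can be sketched for a toy first-order reaction; likelihood-ratio sensitivity analysis perturbs the rate constants of exactly such loops. Rates and counts below are illustrative, not the paper's Pt(111) chemistry.

```python
# Sketch of a kinetic Monte Carlo (Gillespie) loop for a toy A -> B reaction.
import math
import random

def kmc(n_a, k=0.5, t_end=20.0, seed=1):
    """Simulate A -> B with rate constant k until t_end or A is exhausted."""
    random.seed(seed)
    t, n_b = 0.0, 0
    while n_a > 0:
        rate = k * n_a                               # total propensity
        t += -math.log(random.random()) / rate       # exponential waiting time
        if t > t_end:
            break
        n_a, n_b = n_a - 1, n_b + 1                  # fire one reaction event
    return n_a, n_b

remaining, produced = kmc(100)
```

Rate-constant rescaling, as in the paper, would multiply `k` for quasi-equilibrated steps so that slow steps set the simulated time scale.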
Wilker, Sarah; Elbert, Thomas; Kolassa, Iris-Tatjana
2014-07-01
A good memory for emotionally arousing experiences may be intrinsically adaptive, as it helps the organisms to predict safety and danger and to choose appropriate responses to prevent potential harm. However, under conditions of repeated exposure to traumatic stressors, strong emotional memories of these experiences can lead to the development of trauma-related disorders such as posttraumatic stress disorder (PTSD). This syndrome is characterized by distressing intrusive memories that can be so intense that the survivor is unable to discriminate past from present experiences. This selective review on the role of memory-related genes in PTSD etiology is divided in three sections. First, we summarize studies indicating that the likelihood to develop PTSD depends on the cumulative exposure to traumatic stressors and on individual predisposing risk factors, including a substantial genetic contribution to PTSD risk. Second, we focus on memory processes supposed to be involved in PTSD etiology and present evidence for PTSD-associated alterations in both implicit (fear conditioning, fear extinction) and explicit memory for emotional material. This is supplemented by a brief description of structural and functional alterations in memory-relevant brain regions in PTSD. Finally, we summarize a selection of studies indicating that genetic variations found to be associated with enhanced fear conditioning, reduced fear extinction or better episodic memory in human experimental studies can have clinical implications in the case of trauma exposure and influence the risk of PTSD development. Here, we focus on genes involved in noradrenergic (ADRA2B), serotonergic (SLC6A4), and dopaminergic signaling (COMT) as well as in the molecular cascades of memory formation (PRKCA and WWC1). 
This is supplemented by initial evidence that such memory-related genes might also influence the response rates of exposure-based psychotherapy or pharmacological treatment of PTSD, which underscores the relevance of basic memory research for disorders of altered memory functioning such as PTSD. Copyright © 2014 Elsevier Inc. All rights reserved.
Design of a Variational Multiscale Method for Turbulent Compressible Flows
NASA Technical Reports Server (NTRS)
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
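The matrix-free Newton-Krylov idea mentioned above rests on the fact that Krylov solvers need only Jacobian-vector products, which can be approximated by a finite difference of residual evaluations so the Jacobian is never stored. A sketch with a toy residual (illustrative, not the paper's flow solver):

```python
# Matrix-free Jacobian-vector product: J(u) @ v ~ (R(u + eps*v) - R(u)) / eps.
# The residual R here is a toy nonlinear function, not a flow residual.

def jacvec(residual, u, v, eps=1e-7):
    """First-order finite-difference approximation of J(u) @ v."""
    r0 = residual(u)
    r1 = residual([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(r1, r0)]

# Toy residual R(u) = [u0^2 - u1, u0 + u1]; exact Jacobian rows: [2*u0, -1], [1, 1]
R = lambda u: [u[0] ** 2 - u[1], u[0] + u[1]]
Jv = jacvec(R, [1.0, 2.0], [1.0, 0.0])  # picks out the first Jacobian column
```

Each Krylov iteration then costs one extra residual evaluation, which is why the optimized residual kernels dominate the solver's efficiency.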
Stamatakis, Alexandros
2006-11-01
RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets > or =4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak
Julien, Clavel; Leandro, Aristide; Hélène, Morlon
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
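The core problem a penalty solves when p approaches n can be shown with a much simpler stand-in: shrinking a near-singular sample covariance toward a diagonal target keeps it well-conditioned. This linear-shrinkage sketch is an assumed simplification, not the article's penalized-likelihood machinery.

```python
# Sketch: shrink a sample trait covariance toward its diagonal,
# S_pen = (1 - lam) * S + lam * diag(S), to restore conditioning
# when traits are near-collinear (illustrative penalty, not the paper's).

def sample_cov(X):
    """Unbiased sample covariance of rows-as-species, columns-as-traits."""
    n, p = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(p)]
    return [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in X) / (n - 1)
             for j in range(p)] for i in range(p)]

def shrink(S, lam=0.3):
    """Blend S with its diagonal: off-diagonals scaled by (1 - lam)."""
    p = len(S)
    return [[(1 - lam) * S[i][j] + (lam * S[i][j] if i == j else 0.0)
             for j in range(p)] for i in range(p)]

X = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9]]  # near-collinear traits, n = 3
S = sample_cov(X)
S_pen = shrink(S)
```

Variances are untouched while covariances are damped, pulling the estimate away from singularity, which is the same effect the article's penalties achieve within a full likelihood framework.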
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model-structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each assigned a weight that represents its plausibility. In a Bayesian framework, the posterior model weight is computed as the product of the prior model weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAMzs, can be integrated into the local sampling. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse-grid stochastic collocation method is used to build surrogates for the original groundwater model.
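The nested sampling iteration can be sketched for a one-dimensional toy problem: live points shrink the prior volume geometrically while the worst likelihood in the set is accumulated into the evidence Z. This is a minimal illustration with simple rejection sampling as the local sampler (the paper's point is precisely that this step should be replaced by something stronger like DREAMzs).

```python
# Minimal nested-sampling sketch for marginal likelihood (evidence) on [0, 1].
import math
import random

def nested_sampling(loglike, n_live=100, n_iter=600, seed=0):
    random.seed(seed)
    live = [random.random() for _ in range(n_live)]  # uniform prior draws
    z, x_prev = 0.0, 1.0
    for i in range(n_iter):
        worst = min(live, key=loglike)
        x = math.exp(-(i + 1) / n_live)              # expected prior volume left
        z += math.exp(loglike(worst)) * (x_prev - x) # evidence increment
        x_prev = x
        # local sampling step: draw a new point above the worst likelihood
        # (naive rejection here; NSE would use M-H or DREAMzs instead)
        while True:
            cand = random.random()
            if loglike(cand) > loglike(worst):
                live[live.index(worst)] = cand
                break
    return z

# Toy likelihood L(theta) = 2*theta on a uniform prior; true evidence is 1.0
Z = nested_sampling(lambda t: math.log(2.0 * t + 1e-300))
```

The estimate converges on the true evidence of 1.0 as the live set climbs the likelihood; the cost of the naive rejection sampler grows as the constrained region shrinks, which motivates the better local samplers the abstract discusses.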
Toccalino, Danielle C; Sun, Herie; Sakata, Jon T
2016-01-01
Cognitive processes like the formation of social memories can shape the nature of social interactions between conspecifics. Male songbirds use vocal signals during courtship interactions with females, but the degree to which social memory and familiarity influences the likelihood and structure of male courtship song remains largely unknown. Using a habituation-dishabituation paradigm, we found that a single, brief (<30 s) exposure to a female led to the formation of a short-term memory for that female: adult male Bengalese finches were significantly less likely to produce courtship song to an individual female when re-exposed to her 5 min later (i.e., habituation). Familiarity also rapidly decreased the duration of courtship songs but did not affect other measures of song performance (e.g., song tempo and the stereotypy of syllable structure and sequencing). Consistent with a contribution of social memory to the decrease in courtship song with repeated exposures to the same female, the likelihood that male Bengalese finches produced courtship song increased when they were exposed to a different female (i.e., dishabituation). Three consecutive exposures to individual females also led to the formation of a longer-term memory that persisted over days. Specifically, when courtship song production was assessed 2 days after initial exposures to females, males produced fewer and shorter courtship songs to familiar females than to unfamiliar females. Measures of song performance, however, were not different between courtship songs produced to familiar and unfamiliar females. The formation of a longer-term memory for individual females seemed to require at least three exposures because males did not differentially produce courtship song to unfamiliar females and females that they had been exposed to only once or twice. 
Taken together, these data indicate that brief exposures to individual females led to the rapid formation and persistence of social memories and support the existence of distinct mechanisms underlying the motivation to produce and the performance of courtship song.
Spaniol, Julia; Davidson, Patrick S R; Kim, Alice S N; Han, Hua; Moscovitch, Morris; Grady, Cheryl L
2009-07-01
The recent surge in event-related fMRI studies of episodic memory has generated a wealth of information about the neural correlates of encoding and retrieval processes. However, interpretation of individual studies is hampered by methodological differences, and by the fact that sample sizes are typically small. We submitted results from studies of episodic memory in healthy young adults, published between 1998 and 2007, to a voxel-wise quantitative meta-analysis using activation likelihood estimation [Laird, A. R., McMillan, K. M., Lancaster, J. L., Kochunov, P., Turkeltaub, P. E., & Pardo, J. V., et al. (2005). A comparison of label-based review and ALE meta-analysis in the stroop task. Human Brain Mapping, 25, 6-21]. We conducted separate meta-analyses for four contrasts of interest: episodic encoding success as measured in the subsequent-memory paradigm (subsequent Hit vs. Miss), episodic retrieval success (Hit vs. Correct Rejection), objective recollection (e.g., Source Hit vs. Item Hit), and subjective recollection (e.g., Remember vs. Know). Concordance maps revealed significant cross-study overlap for each contrast. In each case, the left hemisphere showed greater concordance than the right hemisphere. Both encoding and retrieval success were associated with activation in medial-temporal, prefrontal, and parietal regions. Left ventrolateral prefrontal cortex (PFC) and medial-temporal regions were more strongly involved in encoding, whereas left superior parietal and dorsolateral and anterior PFC regions were more strongly involved in retrieval. Objective recollection was associated with activation in multiple PFC regions, as well as multiple posterior parietal and medial-temporal areas, but not hippocampus. Subjective recollection, in contrast, showed left hippocampal involvement. 
In summary, these results identify broadly consistent activation patterns associated with episodic encoding and retrieval, and subjective and objective recollection, but also subtle differences among these processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
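The finite-state-projection idea behind these bounds can be sketched for a hypothetical birth-death process (constant production rate `k`, per-molecule decay rate `gamma`) with simple Euler time-stepping; this is an illustrative toy, not the authors' implementation. Truncating the state space while keeping the full outflow rates makes the truncated probabilities strict lower bounds, and adding the leaked mass gives an upper bound; both tighten as the truncation grows.

```python
def fsp_bounds(x_obs, N, k=5.0, gamma=1.0, t=1.0, dt=0.01):
    """Finite-state-projection bounds on P(x_obs molecules at time t)
    for a birth-death process, truncated to the states {0, ..., N}.
    Probability that leaks past the truncation boundary is tracked,
    yielding a strict lower and upper bound on the exact value."""
    p = [0.0] * (N + 1)
    p[0] = 1.0                               # start with zero molecules
    for _ in range(int(round(t / dt))):      # explicit Euler steps
        q = list(p)
        for n in range(N + 1):
            outflow = k + gamma * n          # keep full outflow, even at the boundary
            q[n] -= dt * outflow * p[n]
            if n > 0:
                q[n] += dt * k * p[n - 1]            # birth into state n
            if n < N:
                q[n] += dt * gamma * (n + 1) * p[n + 1]  # death into state n
        p = q
    leaked = 1.0 - sum(p)
    return p[x_obs], p[x_obs] + leaked       # (lower bound, upper bound)
```

Growing the projection (N = 4, 8, 16, ...) raises the lower bound and lowers the upper bound, bracketing the exact probability, which is what allows rigorous model discrimination at minimal computational effort.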
Knoop-van Campen, Carolien A N; Segers, Eliane; Verhoeven, Ludo
2018-05-01
This study examined the relation between working memory, phonological awareness, and word reading efficiency in fourth-grade children with dyslexia. To test whether the relation between phonological awareness and word reading efficiency differed for children with dyslexia versus typically developing children, we assessed phonological awareness and word reading efficiency in 50 children with dyslexia (aged 9;10, 35 boys) and 613 typically developing children (aged 9;5, 279 boys). Phonological awareness was found to be associated with word reading efficiency, similar for children with dyslexia and typically developing children. To find out whether the relation between working memory and word reading efficiency in the group with dyslexia could be explained by phonological awareness, the children with dyslexia were also tested on working memory. Results of a mediation analysis showed a significant indirect effect of working memory on word reading efficiency via phonological awareness. Working memory predicted reading efficiency, via its relation with phonological awareness in children with dyslexia. This indicates that working memory is necessary for word reading efficiency via its impact on phonological awareness and that phonological awareness continues to be important for word reading efficiency in older children with dyslexia. © 2018 The Authors Dyslexia Published by John Wiley & Sons Ltd.
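The mediation analysis described above can be sketched with two least-squares regressions: the indirect effect is the product of the X→M path (a) and the M→Y path (b). The variable names and synthetic data below are illustrative assumptions, not the study's data.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Mediation sketch: indirect effect of x on y through mediator m,
    computed as a*b from two regressions:
        m = a*x + const         (path a)
        y = c*x + b*m + const   (paths c' and b)."""
    X1 = np.column_stack([x, np.ones_like(x)])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][0]
    X2 = np.column_stack([x, m, np.ones_like(x)])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]
    return a * b

# Synthetic example: x = working memory, m = phonological awareness,
# y = word reading efficiency, with true a = 0.8 and b = 0.5.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
m = 0.8 * x + 0.1 * rng.normal(size=500)
y = 0.2 * x + 0.5 * m + 0.1 * rng.normal(size=500)
```

With these (assumed) paths the estimated indirect effect should be close to 0.8 × 0.5 = 0.4.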
Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun
2015-08-10
With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.
Progress towards broadband Raman quantum memory in Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Saglamyurek, Erhan; Hrushevskyi, Taras; Smith, Benjamin; Leblanc, Lindsay
2017-04-01
Optical quantum memories are building blocks for quantum information technologies. Efficient and long-lived storage in combination with high-speed (broadband) operation are key features required for practical applications. While the realization has been a great challenge, Raman memory in Bose-Einstein condensates (BECs) is a promising approach, due to negligible decoherence from diffusion and collisions that leads to seconds-scale memory times, high efficiency due to large atomic density, the possibility for atom-chip integration with micro photonics, and the suitability of the far off-resonant Raman approach for storage of broadband photons (over GHz) [5]. Here we report our progress towards Raman memory in a BEC. We describe our apparatus recently built for producing BEC with 87Rb atoms, and present the observation of nearly pure BEC with 5 × 10^5 atoms at 40 nK. After showing our initial characterizations, we discuss the suitability of our system for Raman-based light storage in our BEC.
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
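The propagator likelihood can be illustrated with a toy two-state chain: the empirical distribution of the steady-state samples is propagated one step forward under a candidate parameter, and the samples are scored against the propagated distribution. The parameterization below (unknown 0→1 probability `theta`, fixed 1→0 probability 0.2) is an assumption chosen for the sketch, not the paper's models.

```python
import math

def propagator_likelihood(theta, counts):
    """Propagator likelihood for a two-state chain with unknown 0->1
    transition probability `theta` and an assumed-known 1->0
    probability of 0.2.  counts[x] = number of steady-state samples
    observed in state x."""
    n = sum(counts)
    p = [c / n for c in counts]                      # empirical distribution
    T = [[1 - theta, theta], [0.2, 0.8]]             # transition matrix T(theta)
    q = [sum(p[x] * T[x][y] for x in range(2)) for y in range(2)]
    return sum(counts[y] * math.log(q[y]) for y in range(2))

# Samples drawn from the steady state of the true chain (theta = 0.3):
# pi(1) = 0.3 / (0.3 + 0.2) = 0.6, so e.g. 40 samples in state 0, 60 in state 1.
counts = [40, 60]
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=lambda th: propagator_likelihood(th, counts))
```

Maximizing over the grid recovers the true parameter: the fictitious one-step transitions between sampled configurations carry enough information even though no trajectories were observed.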
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-25
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Working memory and inattentional blindness.
Bredemeier, Keith; Simons, Daniel J
2012-04-01
Individual differences in working memory predict many aspects of cognitive performance, especially for tasks that demand focused attention. One negative consequence of focused attention is inattentional blindness, the failure to notice unexpected objects when attention is engaged elsewhere. Yet, the relationship between individual differences in working memory and inattentional blindness is unclear; some studies have found that higher working memory capacity is associated with greater noticing, but others have found no direct association. Given the theoretical and practical significance of such individual differences, more definitive tests are needed. In two studies with large samples, we tested the relationship between multiple working memory measures and inattentional blindness. Individual differences in working memory predicted the ability to perform an attention-demanding tracking task, but did not predict the likelihood of noticing an unexpected object present during the task. We discuss the reasons why we might not expect such individual differences in noticing and why other studies may have found them.
Carbon nanomaterials for non-volatile memories
NASA Astrophysics Data System (ADS)
Ahn, Ethan C.; Wong, H.-S. Philip; Pop, Eric
2018-03-01
Carbon can create various low-dimensional nanostructures with remarkable electronic, optical, mechanical and thermal properties. These features make carbon nanomaterials especially interesting for next-generation memory and storage devices, such as resistive random access memory, phase-change memory, spin-transfer-torque magnetic random access memory and ferroelectric random access memory. Non-volatile memories greatly benefit from the use of carbon nanomaterials in terms of bit density and energy efficiency. In this Review, we discuss sp2-hybridized carbon-based low-dimensional nanostructures, such as fullerene, carbon nanotubes and graphene, in the context of non-volatile memory devices and architectures. Applications of carbon nanomaterials as memory electrodes, interfacial engineering layers, resistive-switching media, and scalable, high-performance memory selectors are investigated. Finally, we compare the different memory technologies in terms of writing energy and time, and highlight major challenges in the manufacturing, integration and understanding of the physical mechanisms and material properties.
Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.
Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei
2017-04-01
There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.
Chen, Baojiang; Qin, Jing
2014-05-10
In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may do so through some unknown function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), where the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations, the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
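The pool-adjacent-violators algorithm at the core of this abstract can be sketched in a few lines (a minimal pure-Python version for a non-decreasing fit, not the authors' implementation): adjacent blocks whose means violate monotonicity are repeatedly merged and replaced by their pooled mean.

```python
def pava(y):
    """Pool-Adjacent-Violators Algorithm for isotonic (non-decreasing)
    regression with equal weights.  Returns the fitted values."""
    blocks = [[v, 1] for v in y]          # each block: [mean, count]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # monotonicity violated
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i] = [(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]
            del blocks[i + 1]
            if i > 0:
                i -= 1                               # re-check the previous pair
        else:
            i += 1
    fit = []                               # expand blocks back to per-point values
    for mean, count in blocks:
        fit.extend([mean] * count)
    return fit
```

For example, `pava([3, 1, 2])` pools all three points into their mean, while an already monotone sequence is returned unchanged; this is the step that breaks once monotonicity-violating auxiliary terms are introduced, motivating the paper's empirical-likelihood reformulation.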
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
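The symbol-to-bit direction of such a mapping can be sketched exactly (log-domain sums) for a hypothetical 2-bit Gray-mapped constellation; the NTRS innovation itself concerns a more hardware-efficient realization of mappings like this one.

```python
import math

# Hypothetical 2-bit Gray-mapped constellation: symbol index -> (b0, b1).
SYMBOL_BITS = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def bit_llrs(symbol_probs):
    """Exact symbol-to-bit likelihood mapping:
    LLR(b_k) = log(sum of P(s) over symbols with b_k = 0)
             - log(sum of P(s) over symbols with b_k = 1)."""
    llrs = []
    for k in range(2):
        p0 = sum(p for s, p in enumerate(symbol_probs) if SYMBOL_BITS[s][k] == 0)
        p1 = sum(p for s, p in enumerate(symbol_probs) if SYMBOL_BITS[s][k] == 1)
        llrs.append(math.log(p0) - math.log(p1))
    return llrs
```

With symbol likelihoods concentrated on symbol 0 (bits 0,0), both bit LLRs come out positive, i.e. both bits favor 0; a uniform symbol distribution yields zero LLRs.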
Fukushima, Kikuro; Barnes, Graham R; Ito, Norie; Olley, Peter M; Warabi, Tateo
2014-07-01
Aging affects virtually all functions including sensory/motor and cognitive activities. While retinal image motion is the primary input for smooth-pursuit, its efficiency/accuracy depends on cognitive processes. Elderly subjects exhibit gain decrease during initial and steady-state pursuit, but reports on latencies are conflicting. Using a cue-dependent memory-based smooth-pursuit task, we identified important extra-retinal mechanisms for initial pursuit in young adults including cue information priming and extra-retinal drive components (Ito et al. in Exp Brain Res 229:23-35, 2013). We examined aging effects on parameters for smooth-pursuit using the same tasks. Elderly subjects were tested during three task conditions as previously described: memory-based pursuit, simple ramp-pursuit just to follow motion of a single spot, and popping-out of the correct spot during memory-based pursuit to enhance retinal image motion. Simple ramp-pursuit was used as a task that did not require visual motion working memory. To clarify aging effects, we then compared the results with the previous young subject data. During memory-based pursuit, elderly subjects exhibited normal working memory of cue information. Most movement-parameters including pursuit latencies differed significantly between memory-based pursuit and simple ramp-pursuit and also between young and elderly subjects. Popping-out of the correct spot motion was ineffective for enhancing initial pursuit in elderly subjects. However, the latency difference between memory-based pursuit and simple ramp-pursuit in individual subjects, which includes decision-making delay in the memory task, was similar between the two groups. Our results suggest that smooth-pursuit latencies depend on task conditions and that, although the extra-retinal mechanisms were functional for initial pursuit in elderly subjects, they were less effective.
Schmicker, Marlen; Schwefel, Melanie; Vellage, Anne-Katrin; Müller, Notger G
2016-04-01
Memory training (MT) in older adults with memory deficits often leads to frustration and, therefore, is usually not recommended. Here, we pursued an alternative approach and looked for transfer effects of 1-week attentional filter training (FT) on working memory performance and its neuronal correlates in young healthy humans. The FT effects were compared with pure MT, which lacked the necessity to filter out irrelevant information. Before and after training, all participants performed an fMRI experiment that included a combined task in which stimuli had to be both filtered based on color and stored in memory. We found that training induced processing changes by biasing either filtering or storage. FT induced larger transfer effects on the untrained cognitive function than MT. FT increased neuronal activity in frontal parts of the neuronal gatekeeper network, which is proposed to hinder irrelevant information from being unnecessarily stored in memory. MT decreased neuronal activity in the BG part of the gatekeeper network but enhanced activity in the parietal storage node. We take these findings as evidence that FT renders working memory more efficient by strengthening the BG-prefrontal gatekeeper network. MT, on the other hand, simply stimulates storage of any kind of information. These findings illustrate a tight connection between working memory and attention, and they may open up new avenues for ameliorating memory deficits in patients with cognitive impairments.
Nanophotonic rare-earth quantum memory with optically controlled retrieval
NASA Astrophysics Data System (ADS)
Zhong, Tian; Kindem, Jonathan M.; Bartholomew, John G.; Rochman, Jake; Craiciu, Ioana; Miyazono, Evan; Bettinelli, Marco; Cavalli, Enrico; Verma, Varun; Nam, Sae Woo; Marsili, Francesco; Shaw, Matthew D.; Beyer, Andrew D.; Faraon, Andrei
2017-09-01
Optical quantum memories are essential elements in quantum networks for long-distance distribution of quantum entanglement. Scalable development of quantum network nodes requires on-chip qubit storage functionality with control of the readout time. We demonstrate a high-fidelity nanophotonic quantum memory based on a mesoscopic neodymium ensemble coupled to a photonic crystal cavity. The nanocavity enables >95% spin polarization for efficient initialization of the atomic frequency comb memory and time bin-selective readout through an enhanced optical Stark shift of the comb frequencies. Our solid-state memory is integrable with other chip-scale photon source and detector devices for multiplexed quantum and classical information processing at the network nodes.
Efficient multiuser quantum cryptography network based on entanglement.
Xue, Peng; Wang, Kunkun; Wang, Xiaoping
2017-04-04
We present an efficient quantum key distribution protocol with a certain entangled state to solve a special cryptographic task. Also, we provide a proof of security of this protocol by generalizing the proof of the modified Lo-Chau scheme. Based on this two-user scheme, a quantum cryptography network protocol is proposed without any quantum memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA")-based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.
N-back Working Memory Task: Meta-analysis of Normative fMRI Studies With Children.
Yaple, Zachary; Arsalidou, Marie
2018-05-07
The n-back task is likely the most popular measure of working memory for functional magnetic resonance imaging (fMRI) studies. Despite accumulating neuroimaging studies with the n-back task and children, its neural representation is still unclear. fMRI studies that used the n-back were compiled, and data from children up to 15 years (n = 260) were analyzed using activation likelihood estimation. Results show concordance in frontoparietal regions recognized for their role in working memory as well as regions not typically highlighted as part of the working memory network, such as the insula. Findings are discussed in terms of developmental methodology and potential contribution to developmental theories of cognition. © 2018 Society for Research in Child Development.
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design also integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate and at the same time supports DMA functionality, allowing for parallel processing message-passing.
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.
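The Gather-Apply-Scatter model the abstract refers to can be sketched as one synchronous superstep (an illustrative toy only; GraphReduce's actual implementation is GPU-resident, out-of-core, and stream-based):

```python
def gas_step(num_vertices, edges, values, gather, apply_fn):
    """One synchronous Gather-Apply-Scatter superstep.

    gather:   commutative/associative combiner for incoming messages.
    apply_fn: combines a vertex's old value with its gathered result
              (None for vertices with no in-edges)."""
    acc = [None] * num_vertices
    for src, dst in edges:                 # scatter along each edge, gather at dst
        msg = values[src]
        acc[dst] = msg if acc[dst] is None else gather(acc[dst], msg)
    return [apply_fn(values[v], acc[v]) for v in range(num_vertices)]

# Example: propagate the maximum value one hop around a ring.
values = [3, 1, 4, 1]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
new_values = gas_step(4, edges, values, max,
                      lambda old, acc: old if acc is None else max(old, acc))
```

Edge-centric variants iterate the edge list as above (sequential memory access), while vertex-centric variants loop over per-vertex adjacency; GraphReduce combines both to match the access pattern to the phase of computation.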
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
NASA Astrophysics Data System (ADS)
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique of LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV) with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, the scan unavoidably suffers distortion to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects, such as walking pedestrians, that often exist in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
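The per-scan increase/decrease of each grid cell's occupancy probability is conventionally done in log-odds space; a minimal sketch follows, where the increment constants are assumed tuning values rather than the paper's parameters.

```python
import math

L_HIT, L_MISS = 0.6, 0.4   # log-odds increment/decrement (assumed tuning values)

class OccupancyLikelihoodMap:
    """Minimal occupancy likelihood map with per-scan log-odds updates:
    cells observed as occupied have their probability increased, cells
    observed as free have it decreased, so transient objects such as
    walking pedestrians gradually fade out of the map."""

    def __init__(self, width, height):
        self.logodds = [[0.0] * width for _ in range(height)]  # p = 0.5 everywhere

    def update(self, hits, misses):
        for r, c in hits:                  # cells where the scan endpoint landed
            self.logodds[r][c] += L_HIT
        for r, c in misses:                # cells the beam passed through
            self.logodds[r][c] -= L_MISS

    def probability(self, r, c):
        return 1.0 / (1.0 + math.exp(-self.logodds[r][c]))
```

After a few scans, repeatedly hit cells approach probability 1, repeatedly traversed cells approach 0, and unobserved cells remain at 0.5, which is what makes the learned map robust for matching against dynamic scenes.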
NASA Astrophysics Data System (ADS)
Akimov, D. A.; Fedotov, Andrei B.; Koroteev, Nikolai I.; Magnitskii, S. A.; Naumov, A. N.; Sidorov-Biryukov, Dmitri A.; Sokoluk, N. T.; Zheltikov, Alexei M.
1998-04-01
The possibilities of optimizing data writing and reading in devices of 3D optical memory using photochromic materials are discussed. We quantitatively analyze linear and nonlinear optical properties of indoline spiropyran molecules, which allows us to estimate the efficiency of using such materials for implementing 3D optical-memory devices. It is demonstrated that, with an appropriate choice of polarization vectors of laser beams, one can considerably improve the efficiency of two-photon writing in photochromic materials. The problem of reading the data stored in a photochromic material is analyzed. Data reading methods based on fluorescence and four-photon techniques are compared.
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
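The estimation and inspection steps can be sketched for a single control variable: maximum-likelihood fitting of a logistic model by gradient ascent, then scoring a newly written value by its normalcy probability. The one-feature setup and the training data below are hypothetical illustrations, not the paper's traffic model.

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Estimation step: maximum-likelihood fit of a one-feature logistic
    model P(normal | x) = sigmoid(w*x + b) via gradient ascent on the
    log-likelihood.  x is e.g. the deviation a packet payload would
    cause in a control-system variable; y = 1 marks normal effects."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (y - p) * x              # gradient of the log-likelihood
            gb += (y - p)
        w += lr * gw / len(xs)
        b += lr * gb / len(xs)
    return w, b

def normalcy_probability(w, b, x):
    """Inspection step: probability that writing a value with effect x
    on the monitored variable is normal."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Hypothetical training data: small deviations from the setpoint are
# normal (y = 1), large deviations are abnormal (y = 0).
w, b = fit_logistic([0, 1, 2, 8, 9, 10], [1, 1, 1, 0, 0, 0])
```

A packet whose write causes a small deviation then scores a high normalcy probability, while a large deviation scores low and can be flagged as anomalous.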
Andre, Julia; Picchioni, Marco; Zhang, Ruibin; Toulopoulou, Timothea
2016-01-01
Working memory (WM) ability matures through puberty and early adulthood. Deficits in working memory are linked to the risk of onset of neurodevelopmental disorders such as schizophrenia, and there is a significant temporal overlap between the peak of first-episode psychosis risk and working memory maturation. In order to characterize the normal working memory functional maturation process through this critical phase of cognitive development, we conducted a systematic review and coordinate-based meta-analyses of all the available primary functional magnetic resonance imaging studies (n = 382) that mapped WM function in healthy adolescents (10-17 years) and young adults (18-30 years). Activation Likelihood Estimation analyses across all WM tasks revealed increased activation with increasing subject age in the middle frontal gyrus (BA6) bilaterally, the left middle frontal gyrus (BA10), the left precuneus and left inferior parietal gyri (BA7; 40). Decreased activation with increasing age was found in the right superior frontal (BA8), left junction of postcentral and inferior parietal (BA3/40), and left limbic cingulate gyrus (BA31). These results suggest that brain activation during adolescence increased with age principally in higher-order cortices, part of the core working memory network, while reductions were detected in more diffuse and potentially more immature neural networks. Understanding the process by which the brain and its cognitive functions mature through healthy adulthood may provide us with new clues to understanding the vulnerability to neurodevelopmental disorders.
Luck, Tobias; Roehr, Susanne; Rodriguez, Francisca S; Schroeter, Matthias L; Witte, A Veronica; Hinz, Andreas; Mehnert, Anja; Engel, Christoph; Loeffler, Markus; Thiery, Joachim; Villringer, Arno; Riedel-Heller, Steffi G
2018-05-21
Subjectively perceived memory problems (memory-related Subjective Cognitive Symptoms/SCS) can be an indicator of a pre-prodromal or prodromal stage of a neurodegenerative disease such as Alzheimer's disease. We therefore sought to provide detailed empirical information on memory-related SCS in the dementia-free adult population, including prevalence rates, associated factors, and other characteristics. We studied 8834 participants (40-79 years) of the population-based LIFE-Adult-Study. Weighted prevalence rates with confidence intervals (95%-CI) were calculated. Associations of memory-related SCS with participants' socio-demographic characteristics, physical and mental comorbidity, and cognitive performance (Verbal Fluency Test Animals, Trail-Making-Test, CERAD Wordlist tests) were analyzed. Prevalence of total memory-related SCS was 53.0% (95%-CI = 51.9-54.0): 26.0% (95%-CI = 25.1-27.0) of the population had a subtype without related concerns, 23.6% (95%-CI = 22.7-24.5) a subtype with some related concerns, and 3.3% (95%-CI = 2.9-3.7) a subtype with strong related concerns. Report of memory-related SCS was unrelated to participants' socio-demographic characteristics, physical comorbidity (except history of stroke), depressive symptomatology, and anxiety. Adults with and without memory-related SCS showed no significant difference in cognitive performance. About one fifth (18.1%) of the participants with memory-related SCS stated that they consulted or wanted to consult a physician because of their experienced memory problems. Memory-related SCS are very common and unspecific in the non-demented adult population aged 40-79 years. Nonetheless, a substantial proportion of this population has concerns related to experienced memory problems and/or seeks help. Information already available on additional features associated with a higher likelihood of developing dementia in people with SCS may help clinicians decide who should be monitored more closely.
Imposing constraints on parameter values of a conceptual hydrological model using baseflow response
NASA Astrophysics Data System (ADS)
Dunn, S. M.
Calibration of conceptual hydrological models is frequently limited by a lack of data about the area that is being studied. The result is that a broad range of parameter values can be identified that will give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low-flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error, if inadequate likelihood measures are defined.
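The efficiency criteria described can be sketched generically with Nash-Sutcliffe-style measures computed on total flow and baseflow separately and then combined; the Nash-Sutcliffe form and the equal weighting below are assumptions for illustration, not taken from the paper:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the sum of
    squared errors to the variance of the observations; 1.0 means a
    perfect fit."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def combined_efficiency(obs_total, sim_total, obs_base, sim_base):
    """Combined criterion: average of total-flow and baseflow
    efficiencies (equal weighting is an assumption here)."""
    return 0.5 * (nash_sutcliffe(obs_total, sim_total)
                  + nash_sutcliffe(obs_base, sim_base))
```

Parameter sets whose combined efficiency falls below a behavioural threshold would be rejected, narrowing the acceptable ranges.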
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population-averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite-likelihood-based inference avoids these computational problems and maintains high relative efficiency. In addition, it is more robust to model misspecification than the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
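The composite-likelihood idea (cohort studies contribute prevalence, sensitivity, and specificity terms; case-control studies contribute only sensitivity and specificity) can be sketched with simple binomial components; the field names and data layout below are invented for illustration and are not the authors' model:

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood with the constant term dropped."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def composite_loglik(params, cohort, case_control):
    """Sum of per-study log-likelihood components: cohort studies
    inform prevalence, sensitivity, and specificity; case-control
    studies inform sensitivity and specificity only."""
    prev, sens, spec = params
    ll = 0.0
    for s in cohort:
        ll += binom_loglik(s["diseased"], s["n"], prev)
        ll += binom_loglik(s["tp"], s["diseased"], sens)
        ll += binom_loglik(s["tn"], s["n"] - s["diseased"], spec)
    for s in case_control:
        ll += binom_loglik(s["tp"], s["cases"], sens)
        ll += binom_loglik(s["tn"], s["controls"], spec)
    return ll
```

Maximizing this sum over (prevalence, sensitivity, specificity) gives the composite-likelihood estimates, from which predictive values follow.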
Constructing Memory, Imagination, and Empathy: A Cognitive Neuroscience Perspective
Gaesser, Brendan
2012-01-01
Studies on memory, imagination, and empathy have largely progressed in isolation. Consequently, humans’ empathic tendencies to care about and help other people are considered independent of our ability to remember and imagine events. Despite this theoretical autonomy, work from across psychology, and neuroscience suggests that these cognitive abilities may be linked. In the present paper, I tentatively propose that humans’ ability to vividly imagine specific events (as supported by constructive memory) may facilitate prosocial intentions and behavior. Evidence of a relationship between memory, imagination, and empathy comes from research that shows imagination influences the perceived and actual likelihood an event occurs, improves intergroup relations, and shares a neural basis with memory and empathy. Although many questions remain, this paper outlines a new direction for research that investigates the role of imagination in promoting empathy and prosocial behavior. PMID:23440064
High performance nonvolatile memory devices based on Cu2-xSe nanowires
NASA Astrophysics Data System (ADS)
Wu, Chun-Yan; Wu, Yi-Liang; Wang, Wen-Jian; Mao, Dun; Yu, Yong-Qiang; Wang, Li; Xu, Jun; Hu, Ji-Gang; Luo, Lin-Bao
2013-11-01
We report on the rational synthesis of one-dimensional Cu2-xSe nanowires (NWs) via a solution method. Electrical analysis of the Cu2-xSe NW-based memory device reveals a stable and reproducible bipolar resistive switching behavior with a low set voltage (0.3-0.6 V), which enables the device to write and erase data efficiently. Remarkably, the memory device has a record conductance switching ratio of 10^8, much higher than other devices reported to date. Finally, a conducting-filaments model is introduced to account for the resistive switching behavior. Taken together, this study suggests that Cu2-xSe NWs are promising building blocks for fabricating high-performance, low-power nonvolatile memory devices.
Adult Hippocampal Neurogenesis, Fear Generalization, and Stress
Besnard, Antoine; Sahay, Amar
2016-01-01
The generalization of fear is an adaptive, behavioral, and physiological response to the likelihood of threat in the environment. In contrast, the overgeneralization of fear, a cardinal feature of posttraumatic stress disorder (PTSD), manifests as inappropriate, uncontrollable expression of fear in neutral and safe environments. Overgeneralization of fear stems from impaired discrimination of safe from aversive environments or discernment of unlikely threats from those that are highly probable. In addition, the time-dependent erosion of episodic details of traumatic memories might contribute to their generalization. Understanding the neural mechanisms underlying the overgeneralization of fear will guide development of novel therapeutic strategies to combat PTSD. Here, we conceptualize generalization of fear in terms of resolution of interference between similar memories. We propose a role for a fundamental encoding mechanism, pattern separation, in the dentate gyrus (DG)–CA3 circuit in resolving interference between ambiguous or uncertain threats and in preserving episodic content of remote aversive memories in hippocampal–cortical networks. We invoke cellular-, circuit-, and systems-based mechanisms by which adult-born dentate granule cells (DGCs) modulate pattern separation to influence resolution of interference and maintain precision of remote aversive memories. We discuss evidence for how these mechanisms are affected by stress, a risk factor for PTSD, to increase memory interference and decrease precision. Using this scaffold we ideate strategies to curb overgeneralization of fear in PTSD. PMID:26068726
Ceci, Stephen J; Fitneva, Stanka A; Williams, Wendy M
2010-04-01
Traditional accounts of memory development suggest that maturation of prefrontal cortex (PFC) enables efficient metamemory, which enhances memory. An alternative theory is described, in which changes in early memory and metamemory are mediated by representational changes, independent of PFC maturation. In a pilot study and Experiment 1, younger children failed to recognize previously presented pictures, yet the children could identify the context in which they occurred, suggesting these failures resulted from inefficient metamemory. Older children seldom exhibited such failure. Experiment 2 established that this was not due to retrieval-time recoding. Experiment 3 suggested that young children's representation of a picture's attributes explained their metamemory failure. Experiment 4 demonstrated that metamemory is age-invariant when representational quality is controlled: When stimuli were equivalently represented, age differences in memory and metamemory declined. These findings do not support the traditional view that as children develop, neural maturation permits more efficient monitoring, which leads to improved memory. These findings support a theory based on developmental-representational synthesis, in which constraints on metamemory are independent of neurological development; representational features drive early memory to a greater extent than previously acknowledged, suggesting that neural maturation has been overimputed as a source of early metamemory and memory failure. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Owens, Matthew; Stevenson, Jim; Norgate, Roger; Hadwin, Julie A
2008-10-01
Working memory skills are positively associated with academic performance. In contrast, high levels of trait anxiety are linked with educational underachievement. Based on Eysenck and Calvo's (1992) processing efficiency theory (PET), the present study investigated whether associations between anxiety and educational achievement were mediated via poor working memory performance. Fifty children aged 11-12 years completed verbal (backwards digit span; tapping the phonological store/central executive) and spatial (Corsi blocks; tapping the visuospatial sketchpad/central executive) working memory tasks. Trait anxiety was measured using the State-Trait Anxiety Inventory for Children. Academic performance was assessed using school administered tests of reasoning (Cognitive Abilities Test) and attainment (Standard Assessment Tests). The results showed that the association between trait anxiety and academic performance was significantly mediated by verbal working memory for three of the six academic performance measures (math, quantitative and non-verbal reasoning). Spatial working memory did not significantly mediate the relationship between trait anxiety and academic performance. On average verbal working memory accounted for 51% of the association between trait anxiety and academic performance, while spatial working memory only accounted for 9%. The findings indicate that PET is a useful framework to assess the impact of children's anxiety on educational achievement.
Flash memory management system and method utilizing multiple block list windows
NASA Technical Reports Server (NTRS)
Chow, James (Inventor); Gender, Thomas K. (Inventor)
2005-01-01
The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, a bad block detection mechanism, a flash status mechanism, and a new bank detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low-use blocks for writing. The disk maintenance mechanism provides the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
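The free block mechanism's "sorting of free blocks to select low-use blocks for writing" amounts to wear leveling; a minimal sketch keeps free blocks in a min-heap keyed by erase count (class and field names are illustrative, not from the patent):

```python
import heapq

class FreeBlockPool:
    """Minimal wear-leveling sketch: free blocks sit in a min-heap
    keyed by erase count, so allocation always returns the
    least-worn block."""
    def __init__(self):
        self._heap = []

    def release(self, block_id, erase_count):
        """Return a block to the free pool with its current wear."""
        heapq.heappush(self._heap, (erase_count, block_id))

    def allocate(self):
        """Hand out the free block with the lowest erase count."""
        erase_count, block_id = heapq.heappop(self._heap)
        return block_id, erase_count
```

A real flash translation layer would also persist erase counts and fold in the bad-block and maintenance mechanisms described above.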
Pyrotechnically Operated Valves for Testing and Flight
NASA Technical Reports Server (NTRS)
Conley, Edgar G.; St.Cyr, William (Technical Monitor)
2002-01-01
Pyrovalves still warrant careful description of their operating characteristics, which is consistent with the NASA mission - to assure that both testing and flight hardware perform with the utmost reliability. So, until the development and qualification of the next generation of remotely controlled valves, in all likelihood based on shape memory alloy technology, pyrovalves will remain ubiquitous in controlling flow systems aloft and will possibly see growing use in ground-based testing facilities. In order to assist NASA in accomplishing this task, we propose a three-phase, three-year testing program. Phase I would set up an experimental facility, a 'test rig' in close cooperation with the staff located at the White Sands Test Facility in Southern New Mexico.
Kokubo, Naomi; Inagaki, Masumi; Gunji, Atsuko; Kobayashi, Tomoka; Ohta, Hidenobu; Kajimoto, Osami; Kaga, Makiko
2012-11-01
The present study aimed to investigate the developmental change in Visuo-Spatial Working Memory (VSWM) in typically developed children using a specially designed Advanced Trail Making Test for children (ATMT-C). We developed a new method for evaluating VSWM efficiency in children using a modified version of the ATMT to suit their shorter sustained attention. The ATMT-C consists of two parts: a number-based ATMT and a hiragana (Japanese phonogram)-based ATMT, both employing symbols familiar to young children. A total of 94 healthy participants (6-28 years of age) were enrolled in this study. A non-linear developmental change of VSWM efficiency was observed in the results from the ATMT-C. In the number-based ATMT, children under 8 years of age showed a relatively rapid increase in VSWM efficiency, while older children (9-12 years) had a more gradual increase. Results from the hiragana-based ATMT-C showed a slightly delayed increase pattern in VSWM efficiency compared to the pattern from the number-based ATMT. There were no significant differences in VSWM efficiency for gender, handedness, or test order. VSWM in children matures gradually in a non-steady-state manner, and there is an important stage for VSWM maturation before 12 years of age. VSWM efficiency may also vary depending on the developmental condition of its cognitive subsystems. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
McEvoy, C L; Nelson, D L; Komatsu, T
1999-09-01
Veridical memory for presented list words and false memory for nonpresented but related items were tested using the Deese/Roediger and McDermott paradigm. The strength and density of preexisting connections among the list words, and from the list words to the critical items, were manipulated. The likelihood of producing false memories in free recall varied with the strength of connections from the list words to the critical items but was inversely related to the density of the interconnections among the list words. In contrast, veridical recall of list words was positively related to the density of the interconnections. A final recognition test showed that both false and veridical memories were more likely when the list words were more densely interconnected. The results are discussed in terms of an associative model of memory, Processing Implicit and Explicit Representations (PIER 2) that describes the influence of implicitly activated preexisting information on memory performance.
NASA Astrophysics Data System (ADS)
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
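The memory saving from the direct-product representation can be illustrated in NumPy: for a two-dimensional grid, a least squares fit against the full Kronecker design matrix gives the same coefficients as solving each dimension separately without ever materializing the Kronecker product. The toy surface and basis sizes below are assumptions for illustration:

```python
import numpy as np

# Grid values of a toy 2-D "potential" and a small polynomial basis
# per dimension (sizes and the test surface are illustrative).
x1, x2 = np.linspace(-1, 1, 12), np.linspace(-1, 1, 10)
V = np.exp(-(x1[:, None] ** 2 + x2[None, :] ** 2))

A1 = np.vander(x1, 4)  # fitting-basis matrix for dimension 1
A2 = np.vander(x2, 4)  # fitting-basis matrix for dimension 2

# Naive route: materialize the full 120 x 16 Kronecker design matrix.
C_full = np.linalg.lstsq(np.kron(A1, A2), V.ravel(),
                         rcond=None)[0].reshape(4, 4)

# Direct-product route: solve per dimension and never form the
# Kronecker product, which is the source of the memory saving.
C_fast = np.linalg.pinv(A1) @ V @ np.linalg.pinv(A2).T
```

For d dimensions the same identity applies recursively, which is why the cost and memory scale like the potfit approach rather than like the full design matrix.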
What Makes a Skilled Writer? Working Memory and Audience Awareness during Text Composition
ERIC Educational Resources Information Center
Alamargot, Denis; Caporossi, Gilles; Chesnet, David; Ros, Christine
2011-01-01
This study investigated the role of working memory capacity as a factor for individual differences in the ability to compose a text with communicative efficiency based on audience awareness. We analyzed its differential effects on the dynamics of the writing processes, as well as on the content of the finished product. Twenty-five graduate…
Towards Modeling False Memory With Computational Knowledge Bases.
Li, Justin; Kohanyi, Emma
2017-01-01
One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
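A semantic network spreading activation model of the kind used here can be sketched in a few lines: studied words send a decaying share of their activation to neighbours, so a critical lure accumulates activation without ever being presented. The graph, weights, and decay below are invented for illustration:

```python
def spread_activation(graph, sources, decay=0.5, steps=3):
    """Toy spreading-activation pass over a weighted semantic network.
    `graph` maps each node to {neighbour: weight}; studied `sources`
    start with activation 1.0 and leak activation outward each step."""
    activation = {node: 0.0 for node in graph}
    for word in sources:
        activation[word] = 1.0
    for _ in range(steps):
        incoming = {node: 0.0 for node in graph}
        for node, links in graph.items():
            for neighbour, weight in links.items():
                incoming[neighbour] += decay * weight * activation[node]
        for node in graph:
            activation[node] += incoming[node]
    return activation
```

With a DRM-style list ("bed", "rest", "tired") all linked to "sleep", the unstudied lure ends up far more active than an unrelated word, which is the false-memory signature the model looks for; the noise problem described above arises when large knowledge bases add many weak, irrelevant links to this graph.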
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramér conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
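A k-step Newton estimator starts from a consistent initial estimate and applies k Newton updates to the likelihood score; a minimal sketch for an exponential rate parameter (where the MLE has a closed form for comparison) follows. The data and starting value are illustrative:

```python
def newton_k_step(score, score_prime, theta0, k=2):
    """k-step Newton estimator: apply k Newton updates to the
    likelihood score, starting from a consistent estimate theta0."""
    theta = theta0
    for _ in range(k):
        theta = theta - score(theta) / score_prime(theta)
    return theta

# Exponential(rate) example: the score is n/rate - sum(x), so the
# MLE has the closed form n / sum(x), kept here for comparison.
data = [0.5, 1.2, 0.3, 2.0, 0.9]
n, s = len(data), sum(data)
score = lambda rate: n / rate - s
score_prime = lambda rate: -n / rate ** 2

mle = n / s
est = newton_k_step(score, score_prime, theta0=1.0, k=3)
```

Because Newton's method converges quadratically near the root of the score, a handful of steps from a root-n-consistent start already matches the efficient likelihood estimator to within sampling error, which is the closeness result the paper formalizes.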
The beneficial effect of testing: an event-related potential study
Bai, Cheng-Hua; Bridger, Emma K.; Zimmer, Hubert D.; Mecklinger, Axel
2015-01-01
The enhanced memory performance for items that are tested as compared to being restudied (the testing effect) is a frequently reported memory phenomenon. According to the episodic context account of the testing effect, this beneficial effect of testing is related to a process which reinstates the previously learnt episodic information. Few studies have explored the neural correlates of this effect at the time point when testing takes place, however. In this study, we utilized the ERP correlates of successful memory encoding to address this issue, hypothesizing that if the benefit of testing is due to retrieval-related processes at test, then subsequent memory effects (SMEs) should resemble the ERP correlates of retrieval-based processing in their temporal and spatial characteristics. Participants were asked to learn Swahili-German word pairs before items were presented in either a testing or a restudy condition. Memory performance was assessed immediately and 1 day later with a cued recall task. Successfully recalling items at test increased the likelihood that items were remembered over time compared to items which were only restudied. An ERP subsequent memory contrast (later remembered vs. later forgotten tested items), which reflects the engagement of processes that ensure items are recallable the next day, was topographically comparable with the ERP correlate of immediate recollection (immediately remembered vs. immediately forgotten tested items). This result shows that the processes which allow items to be more memorable over time share qualitatively similar neural correlates with the processes that relate to successful retrieval at test. This finding supports the notion that testing is more beneficial than restudying for memory performance over time because of its engagement of retrieval processes, such as the re-encoding of actively retrieved memory representations. PMID:26441577
Beware of Imitators: Al-Qa’ida through the Lens of its Confidential Secretary
2012-06-04
reader; rather in all likelihood are the result of several factors, including Harun’s lapses in memory; his having to make notes while evading the... they also earned it an indelible place in the collective memory of Washington’s political and intelligence community. In the mind of the then National... a reflective autobiographical style, e.g., the first part narrates his early life in the context of his native homeland, the Comoros Islands, and
Honey Bee Location- and Time-Linked Memory Use in Novel Foraging Situations: Floral Color Dependency
Amaya-Márquez, Marisol; Hill, Peggy S. M.; Abramson, Charles I.; Wells, Harrington
2014-01-01
Learning facilitates behavioral plasticity, leading to higher success rates when foraging. However, memory is of decreasing value with changes brought about by moving to novel resource locations or activity at different times of the day. These premises suggest a foraging model with location- and time-linked memory. Thus, each problem is novel, and selection should favor a maximum likelihood approach to achieve energy maximization results. Alternatively, information is potentially always applicable. This premise suggests a different foraging model, one where initial decisions should be based on previous learning regardless of the foraging site or time. Under this second model, no problem is considered novel, and selection should favor a Bayesian or pseudo-Bayesian approach to achieve energy maximization results. We tested these two models by offering honey bees a learning situation at one location in the morning, where nectar rewards differed between flower colors, and examined their behavior at a second location in the afternoon where rewards did not differ between flower colors. Both blue-yellow and blue-white dimorphic flower patches were used. Information learned in the morning was clearly used in the afternoon at a new foraging site. Memory use was not restricted by location or time when bees visited either flower color dimorphism. PMID:26462587
Prevalence of impaired memory in hospitalized adults and associations with in-hospital sleep loss.
Calev, Hila; Spampinato, Lisa M; Press, Valerie G; Meltzer, David O; Arora, Vineet M
2015-07-01
Effective inpatient teaching requires intact patient memory, but studies suggest hospitalized adults may have memory deficits. Sleep loss among inpatients could contribute to memory impairment. To assess memory in older hospitalized adults, and to test the association between sleep quantity, sleep quality, and memory, in order to identify a possible contributor to memory deficits in these patients. Prospective cohort study. General medicine and hematology/oncology inpatient wards. Fifty-nine hospitalized adults at least 50 years of age with no diagnosed sleep disorder. Immediate memory and memory after a 24-hour delay were assessed using a word recall and word recognition task from the University of Southern California Repeatable Episodic Memory Test. A vignette-based memory task was piloted as an alternative test more closely resembling discharge instructions. Sleep duration and efficiency overnight in the hospital were measured using actigraphy. Mean immediate recall was 3.8 words out of 15 (standard deviation = 2.1). Forty-nine percent of subjects had poor memory, defined as immediate recall score of 3 or lower. Median immediate recognition was 11 words out of 15 (interquartile range [IQR] = 9-13). Median delayed recall score was 1 word, and median delayed recognition was 10 words (IQR = 8-12). In-hospital sleep duration and efficiency were not significantly associated with memory. The medical vignette score was correlated with immediate recall (r = 0.49, P < 0.01). About half of the inpatients studied had poor memory while in the hospital, signaling that hospitalization might not be an ideal teachable moment. In-hospital sleep was not associated with memory scores. © 2015 Society of Hospital Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data.
The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
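The sampling step described above (quasi-Monte Carlo draws from the prior, each weighted by a Gaussian-kernel likelihood of its data misfit) can be sketched in one dimension; the Halton sequence, forward model, prior bounds, and noise level below are illustrative assumptions, not the paper's actual configuration:

```python
import math

def halton(i, base):
    """i-th element of the 1-D Halton low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_bayes_update(forward, observed, lo, hi, n=512, sigma=0.1):
    """Draw QMC samples from a uniform prior on [lo, hi], weight each
    by a Gaussian-kernel likelihood of its misfit to the observation,
    and return the posterior mean of the parameter."""
    samples = [lo + (hi - lo) * halton(i + 1, 2) for i in range(n)]
    weights = [math.exp(-0.5 * ((forward(m) - observed) / sigma) ** 2)
               for m in samples]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, samples)) / total
```

In the time-lapse setting, the posterior from one inversion would serve as the prior (the "memory function") for the next dataset.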
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
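The sampling-and-weighting step described above (QMC draws from the prior, a forward-model misfit, and a Gaussian likelihood kernel) can be sketched in a few lines. This is a minimal one-parameter illustration, not the authors' SGSim-based pipeline; the Halton sequence, the toy forward model, and all parameter names are assumptions made for illustration.

```python
import numpy as np

def halton(n, base=2):
    """Van der Corput / Halton low-discrepancy sequence in [0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def qmc_bayes_update(forward_model, observed, prior_lo, prior_hi,
                     n_samples=512, sigma=1.0):
    """Sample the prior support with a QMC sequence, weight each sample by
    a Gaussian likelihood kernel of the forward-model misfit, and return
    the posterior mean and the normalized weights."""
    u = halton(n_samples)                          # uniform QMC draws
    theta = prior_lo + u * (prior_hi - prior_lo)   # map to prior support
    misfit = np.array([forward_model(t) - observed for t in theta])
    w = np.exp(-0.5 * (misfit / sigma) ** 2)       # Gaussian kernel
    w /= w.sum()
    return float(np.sum(w * theta)), w

# Toy example: recover a velocity-like parameter from one travel time.
post_mean, w = qmc_bayes_update(lambda t: 2.0 * t, observed=6.0,
                                prior_lo=0.0, prior_hi=10.0, sigma=0.5)
```

With the toy linear forward model the posterior concentrates near the value that matches the observation; repeating the update with the previous posterior as the new prior mimics the memory-function idea for time-lapse data.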
Evaluating the benefits of risk prevention initiatives
NASA Astrophysics Data System (ADS)
Di Baldassarre, G.
2012-04-01
The likelihood and adverse impacts of water-related disasters, such as floods and landslides, are increasing in many countries because of changes in climate and land-use. This presentation illustrates some preliminary results of a comprehensive demonstration of the benefits of risk prevention measures, carried out within the European FP7 KULTURisk project. The study is performed by using a variety of case studies characterised by diverse socio-economic contexts, different types of water-related hazards (floods, debris flows and landslides, storm surges) and space-time scales. In particular, the benefits of state-of-the-art prevention initiatives, such as early warning systems, non-structural options (e.g. mapping and planning), risk transfer strategies (e.g. insurance policies), and structural measures, are shown. Lastly, the importance of homogenising criteria to create hazard inventories and build memory, efficient risk communication and warning methods, as well as active dialogue with and between public and private stakeholders, is highlighted.
Castel, Alan D; Lee, Steve S; Humphreys, Kathryn L; Moore, Amy N
2011-01-01
The ability to select what is important to remember, to attend to this information, and to recall high-value items leads to the efficient use of memory. The present study examined how children with and without attention-deficit/hyperactivity disorder (ADHD) performed on an incentive-based selectivity task in which to-be-remembered items were worth different point values. Participants were 6-9 year old children with ADHD (n = 57) and without ADHD (n = 59). Using a selectivity task, participants studied words paired with point values and were asked to maximize their score, which was the overall value of the items they recalled. This task allows for measures of memory capacity and the ability to selectively remember high-value items. Although there were no significant between-groups differences in the number of words recalled (memory capacity), children with ADHD were less selective than children in the control group in terms of the value of the items they recalled (control of memory). All children recalled more high-value items than low-value items and showed some learning with task experience, but children with ADHD Combined type did not efficiently maximize memory performance (as measured by a selectivity index) relative to children with ADHD Inattentive type and healthy controls, who did not differ significantly from one another. Children with ADHD Combined type exhibit impairments in the strategic and efficient encoding and recall of high-value items. The findings have implications for theories of memory dysfunction in childhood ADHD and the key role of metacognition, cognitive control, and value-directed remembering when considering the strategic use of memory. (c) 2010 APA, all rights reserved
A class of semiparametric cure models with current status data.
Diao, Guoqing; Yuan, Ao
2018-02-08
Current status data occur in many biomedical studies where we only know whether the event of interest occurs before or after a particular time point. In practice, some subjects may never experience the event of interest, i.e., a certain fraction of the population is cured or is not susceptible to the event of interest. We consider a class of semiparametric transformation cure models for current status data with a survival fraction. This class includes both the proportional hazards and the proportional odds cure models as two special cases. We develop efficient likelihood-based estimation and inference procedures. We show that the maximum likelihood estimators for the regression coefficients are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in finite samples. For illustration, we provide an application of the models to a study on the calcification of the hydrogel intraocular lenses.
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
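The Gather-Apply-Scatter model that GraphReduce builds on can be illustrated with a small CPU-side sketch. This is not GraphReduce's API; the PageRank-style update, the function names, and the dense (frontier-free) iteration are assumptions chosen to keep the example short.

```python
from collections import defaultdict

def gas_pagerank(edges, n, damping=0.85, iters=20):
    """Minimal vertex-centric Gather-Apply-Scatter loop (PageRank flavor).
    edges: list of (src, dst) pairs; vertices are 0..n-1."""
    out_deg = defaultdict(int)
    in_nbrs = defaultdict(list)
    for s, d in edges:
        out_deg[s] += 1
        in_nbrs[d].append(s)
    rank = [1.0 / n] * n
    for _ in range(iters):
        # Gather: each vertex sums contributions from its in-neighbors.
        gathered = [sum(rank[s] / out_deg[s] for s in in_nbrs[v])
                    for v in range(n)]
        # Apply: update each vertex's state from the gathered value.
        rank = [(1 - damping) / n + damping * g for g in gathered]
        # Scatter: a real engine would activate out-neighbors of changed
        # vertices here; this dense sketch simply revisits every vertex.
    return rank

ranks = gas_pagerank([(0, 1), (1, 2), (2, 0), (0, 2)], n=3)
```

An out-of-memory engine such as GraphReduce partitions the edge list and streams partitions through device memory, but the gather/apply/scatter phases per partition follow the same shape.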
Massively parallel support for a case-based planning system
NASA Technical Reports Server (NTRS)
Kettler, Brian P.; Hendler, James A.; Anderson, William A.
1993-01-01
Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
Lee, Choong‐Hee; Ryu, Jungwon; Lee, Sang‐Hun; Kim, Hakjin
2016-01-01
The hippocampus plays critical roles in both object‐based event memory and spatial navigation, but it is largely unknown whether the left and right hippocampi play functionally equivalent roles in these cognitive domains. To examine the hemispheric symmetry of human hippocampal functions, we used an fMRI scanner to measure BOLD activity while subjects performed tasks requiring both object‐based event memory and spatial navigation in a virtual environment. Specifically, the subjects were required to form object‐place paired associate memory after visiting four buildings containing discrete objects in a virtual plus maze. The four buildings were visually identical, and the subjects used distal visual cues (i.e., scenes) to differentiate the buildings. During testing, the subjects were required to identify one of the buildings when cued with a previously associated object, and when shifted to a random place, the subject was expected to navigate to the previously chosen building. We observed that the BOLD activity foci changed from the left hippocampus to the right hippocampus as task demand changed from identifying a previously seen object (object‐cueing period) to searching for its paired‐associate place (object‐cued place recognition period). Furthermore, the efficient retrieval of object‐place paired associate memory (object‐cued place recognition period) was correlated with the BOLD response of the left hippocampus, whereas the efficient retrieval of relatively pure spatial memory (spatial memory period) was correlated with the right hippocampal BOLD response. These findings suggest that the left and right hippocampi in humans might process qualitatively different information for remembering episodic events in space. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27009679
Scientific developments of liquid crystal-based optical memory: a review
NASA Astrophysics Data System (ADS)
Prakash, Jai; Chandran, Achu; Biradar, Ashok M.
2017-01-01
The memory behavior in liquid crystals (LCs), although rarely observed, has made very significant headway over the past three decades since its discovery in nematic-type LCs. It has gone from a mere scientific curiosity to application in a variety of commodities. Memory elements formed from numerous LCs have been protected by patents, some have been commercialized, and they have been used to complement non-volatile memory devices and as memory in personal computers and digital cameras. They also offer the low-cost, large-area, high-speed, and high-density memory needed for advanced computers and digital electronics. Short- and long-duration memory behavior for industrial applications has been obtained from several LC materials, and an LC memory with interesting features and applications has been demonstrated using numerous LCs. However, considerable challenges still exist in the search for highly efficient, stable, and long-lifespan materials and methods to make the development of useful memory devices possible. This review focuses on the scientific and technological approach of fascinating applications of LC-based memory. We address the introduction, development status, novel design and engineering principles, and parameters of LC memory. We also address how the amalgamation of LCs could bring significant change/improvement in memory effects in the emerging field of nanotechnology, and the application of LC memory as the active component of futuristic and interesting memory devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donofrio, David
A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
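The fixed-offset neighbor loads that such a mechanism accelerates are the core of any stencil computation. As a hypothetical illustration only (it does not reproduce the patented register-indirect addressing), here is a plain 3-point Jacobi stencil over a 1-D array:

```python
import numpy as np

def jacobi_1d(u, iters=1):
    """3-point stencil: each interior point becomes the average of its
    neighbors at fixed offsets -1 and +1 -- the kind of fixed-offset
    memory accesses a hardware stencil unit would pre-stage."""
    u = u.astype(float).copy()
    for _ in range(iters):
        # RHS is evaluated from the old array before assignment,
        # giving Jacobi (simultaneous-update) semantics.
        u[1:-1] = 0.5 * (u[:-2] + u[2:])
    return u

out = jacobi_1d(np.array([0.0, 0.0, 4.0, 0.0, 0.0]))
# one sweep spreads the spike to its neighbors: [0.0, 2.0, 0.0, 2.0, 0.0]
```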
Rapid Dynamic Assessment of Expertise to Improve the Efficiency of Adaptive Elearning
ERIC Educational Resources Information Center
Kalyuga, Slava; Sweller, John
2005-01-01
In this article we suggest a method of evaluating learner expertise based on assessment of the content of working memory and the extent to which cognitive load has been reduced by knowledge retrieved from long-term memory. The method was tested in an experiment with an elementary algebra tutor using a yoked control design. In the learner-adapted…
Rodgers, J; Buchanan, T; Scholey, A B; Heffernan, T M; Ling, J; Parrott, A C
2003-12-01
Research indicates that the use of recreational drugs, including MDMA ('ecstasy') can result in impairments in cognitive functioning. Recent evidence, based on accounts of 'on drug' effects and cortical binding ratios suggests that women may be more susceptible to the effects of MDMA; however, no research has explored whether there are differences in the long-term behavioural sequelae of the drug between men and women. In addition, little is known about the profile of functioning of the 'typical' user. The present investigation accessed a large sample of recreational drug users, using the Internet, to obtain self-reports of memory functioning with a view to exploring any differences in self-reported ability amongst male and female users, and the level of difficulty reported by the 'typical' ecstasy user. A web site (www.drugresearch.org.uk) was developed and used for data collection. Prospective memory ability was assessed using the Prospective Memory Questionnaire. Self-report of day-to-day memory performance was investigated using the Everyday Memory Questionnaire. The UEL Drug Questionnaire assessed the use of other substances. The number of mistakes made while completing the questionnaires was also taken as an objective measure of performance errors. Findings, based on datasets submitted from 763 respondents, indicate no differences in self-reports of functioning between male and female participants. An overall dissociation between the effects of cannabis and ecstasy on self-reported memory functioning and on the likelihood of making an error during the completion of the questionnaire was found. Typical ecstasy users were found to report significantly more difficulties in long-term prospective memory and to make more completion errors than users of other substances and drug naive controls. 
Whilst taking into account the fact that participants were recruited via the World Wide Web and that a number of stringent exclusion criteria were applied to the data, a number of conclusions can be drawn. Recreational drug users perceive their memory ability to be impaired compared to non-users. The type of memory difficulties reported varies depending upon the drug of choice. These difficulties are exacerbated in ecstasy users. Individuals reporting average levels of use of ecstasy are more likely to report memory problems than non-ecstasy drug users or drug free individuals. The deleterious effects of ecstasy are therefore not restricted to heavy or chronic users. No gender differences were detected, suggesting that there may be a dissociation between cognitive impairment and cortical binding worthy of further exploration.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
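A minimal MPT example may help make the contrast with SDT concrete. The sketch below is a simple one-high-threshold detection tree fitted by grid-search maximum likelihood; it is not Chechile's 6P or PD model, and the parameter names and trial counts are invented for illustration.

```python
import numpy as np

def mpt_probs(d, g):
    """One-high-threshold processing tree: on target-present trials the
    signal is detected with probability d, otherwise the observer guesses
    'yes' with probability g; on target-absent trials only guessing occurs."""
    p_hit = d + (1 - d) * g
    p_fa = g
    return p_hit, p_fa

def mpt_loglik(d, g, hits, misses, fas, crs):
    """Multinomial log-likelihood of the four response counts."""
    p_hit, p_fa = mpt_probs(d, g)
    return (hits * np.log(p_hit) + misses * np.log(1 - p_hit)
            + fas * np.log(p_fa) + crs * np.log(1 - p_fa))

# Grid-search MLE on example counts (80 hits / 20 misses, 25 FAs / 75 CRs).
grid = np.linspace(0.01, 0.99, 99)
ll = np.array([[mpt_loglik(d, g, 80, 20, 25, 75) for g in grid]
               for d in grid])
i, j = np.unravel_index(ll.argmax(), ll.shape)
d_hat, g_hat = grid[i], grid[j]
```

Here the guessing parameter recovers the false-alarm rate (g ≈ 0.25) and the detection parameter separates out the memory/perception component (d ≈ 0.73), which is exactly the bias-correction role the abstract attributes to MPT models.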
AQUAdexIM: highly efficient in-memory indexing and querying of astronomy time series images
NASA Astrophysics Data System (ADS)
Hong, Zhi; Yu, Ce; Wang, Jie; Xiao, Jian; Cui, Chenzhou; Sun, Jizhou
2016-12-01
Astronomy has always been, and will continue to be, a data-based science, and astronomers nowadays are faced with increasingly massive datasets, one key problem of which is to efficiently retrieve the desired cup of data from the ocean. AQUAdexIM, an innovative spatial indexing and querying method, performs highly efficient on-the-fly queries under users' request to search for Time Series Images from existing observation data on the server side and only return the desired FITS images to users, so users no longer need to download entire datasets to their local machines, which will only become more and more impractical as the data size keeps increasing. Moreover, AQUAdexIM manages to keep a very low storage space overhead and its specially designed in-memory index structure enables it to search for Time Series Images of a given area of the sky 10 times faster than using Redis, a state-of-the-art in-memory database.
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
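The "sample more clusters to repair the efficiency loss" arithmetic can be sketched with the commonly used design-effect approximation DE ≈ 1 + ((cv² + 1)·m̄ − 1)·ρ for varying cluster sizes. Note this is the generic approximation, not the paper's second-order PQL-specific formula, and the example numbers are invented:

```python
def design_effect(mean_size, cv, icc):
    """Approximate design effect for a cluster randomized trial:
    DE ~= 1 + ((cv**2 + 1) * mean_size - 1) * icc,
    where cv is the coefficient of variation of cluster size and
    icc the intracluster correlation. cv = 0 gives the equal-size DE."""
    return 1 + ((cv ** 2 + 1) * mean_size - 1) * icc

def extra_clusters_needed(mean_size, cv, icc):
    """Relative increase in the number of clusters required to offset
    the efficiency loss of unequal vs equal cluster sizes."""
    de_equal = design_effect(mean_size, 0.0, icc)
    de_varying = design_effect(mean_size, cv, icc)
    return de_varying / de_equal - 1

# e.g. mean cluster size 20, cv = 0.6, ICC = 0.05:
inflation = extra_clusters_needed(20, 0.6, 0.05)  # ~18% more clusters
```

With milder size variation (cv around 0.4 or less) the same arithmetic lands near the 14 per cent figure quoted in the abstract's "many cases".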
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. 
The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A
2008-09-01
The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
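The cell-by-cell expansion described above can be sketched as a best-first search over a likelihood grid. This is a schematic reading of the idea, not the published Snake implementation; the 4-neighbor stencil, the stopping rule, and the test function are assumptions.

```python
import heapq

def snake_explore(loglike, start, threshold):
    """Best-first grid exploration: starting from a seed cell, repeatedly
    expand the unexplored neighbor with the highest log-likelihood and
    stop expanding cells that fall more than `threshold` log-units below
    the running peak. Returns {cell: log-likelihood} for evaluated cells."""
    peak = loglike(start)
    visited = {start: peak}
    frontier = [(-peak, start)]  # max-heap via negated log-likelihood
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)
        if -neg_ll < peak - threshold:
            continue  # negligible likelihood: do not expand further
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dx, cell[1] + dy)
            if nb not in visited:
                ll = loglike(nb)
                visited[nb] = ll
                peak = max(peak, ll)
                if ll >= peak - threshold:
                    heapq.heappush(frontier, (-ll, nb))
    return visited

# 2-D Gaussian log-likelihood on an integer grid, 8-log-unit threshold:
cells = snake_explore(lambda c: -0.5 * (c[0] ** 2 + c[1] ** 2) / 4.0,
                      start=(0, 0), threshold=8.0)
```

Only cells inside (or one step beyond) the high-likelihood region are ever evaluated, which is how the scheme sidesteps the exponential cost of a full grid scan while still exposing conditional distributions and evidences directly from the stored cells.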
Hardy, Amy; Young, Kerry; Holmes, Emily A
2009-11-01
A recent study indicated that 94.4% of reported sexual assault cases in the UK do not result in successful legal prosecution, also known as the rate of attrition (Kelly, Lovett, & Regan, 2005). Scant research has examined the role of trauma-related psychological processes in attrition. Victims of sexual assault (N = 22) completed questions about peri-traumatic dissociation, trauma memory fragmentation, account incoherence during police interview, and likelihood of proceeding with legal cases. Higher levels of dissociation during sexual assault were associated with participants reporting more fragmented trauma memories. Memory fragmentation was associated with participants indicating that they provided more incoherent accounts of trauma during police interview. Importantly, people who viewed themselves as providing more incoherent accounts predicted that they would be less likely to proceed with their legal cases. The findings suggest that trauma impacts memory, and these trauma-related disruptions to memory may paradoxically contribute to attrition.
Object-Based Attention Overrides Perceptual Load to Modulate Visual Distraction
ERIC Educational Resources Information Center
Cosman, Joshua D.; Vecera, Shaun P.
2012-01-01
The ability to ignore task-irrelevant information and overcome distraction is central to our ability to efficiently carry out a number of tasks. One factor shown to strongly influence distraction is the perceptual load of the task being performed; as the perceptual load of task-relevant information processing increases, the likelihood that…
How to Assess Gaming-Induced Benefits on Attention and Working Memory.
Mishra, Jyoti; Bavelier, Daphne; Gazzaley, Adam
2012-06-01
Our daily actions are driven by our goals in the moment, constantly forcing us to choose among various options. Attention and working memory are key enablers of that process. Attention allows for selective processing of goal-relevant information and rejecting task-irrelevant information. Working memory functions to maintain goal-relevant information in memory for brief periods of time for subsequent recall and/or manipulation. Efficient attention and working memory thus support the best extraction and retention of environmental information for optimal task performance. Recent studies have evidenced that attention and working memory abilities can be enhanced by cognitive training games as well as entertainment videogames. Here we review key cognitive paradigms that have been used to evaluate the impact of game-based training on various aspects of attention and working memory. Common use of such methodology within the scientific community will enable direct comparison of the efficacy of different games across age groups and clinical populations. The availability of common assessment tools will ultimately facilitate development of the most effective forms of game-based training for cognitive rehabilitation and education.
Toccalino, Danielle C.; Sun, Herie; Sakata, Jon T.
2016-01-01
Cognitive processes like the formation of social memories can shape the nature of social interactions between conspecifics. Male songbirds use vocal signals during courtship interactions with females, but the degree to which social memory and familiarity influence the likelihood and structure of male courtship song remains largely unknown. Using a habituation-dishabituation paradigm, we found that a single, brief (<30 s) exposure to a female led to the formation of a short-term memory for that female: adult male Bengalese finches were significantly less likely to produce courtship song to an individual female when re-exposed to her 5 min later (i.e., habituation). Familiarity also rapidly decreased the duration of courtship songs but did not affect other measures of song performance (e.g., song tempo and the stereotypy of syllable structure and sequencing). Consistent with a contribution of social memory to the decrease in courtship song with repeated exposures to the same female, the likelihood that male Bengalese finches produced courtship song increased when they were exposed to a different female (i.e., dishabituation). Three consecutive exposures to individual females also led to the formation of a longer-term memory that persisted over days. Specifically, when courtship song production was assessed 2 days after initial exposures to females, males produced fewer and shorter courtship songs to familiar females than to unfamiliar females. Measures of song performance, however, were not different between courtship songs produced to familiar and unfamiliar females. The formation of a longer-term memory for individual females seemed to require at least three exposures because males did not differentially produce courtship song to unfamiliar females and females that they had been exposed to only once or twice.
Taken together, these data indicate that brief exposures to individual females led to the rapid formation and persistence of social memories and support the existence of distinct mechanisms underlying the motivation to produce and the performance of courtship song. PMID:27378868
Frndak, Seth E; Smerbeck, Audrey M; Irwin, Lauren N; Drake, Allison S; Kordovski, Victoria M; Kunker, Katrina A; Khan, Anjum L; Benedict, Ralph H B
2016-10-01
We endeavored to clarify how distinct co-occurring symptoms relate to the presence of negative work events in employed multiple sclerosis (MS) patients. Latent profile analysis (LPA) was utilized to elucidate common disability patterns by isolating patient subpopulations. Samples of 272 employed MS patients and 209 healthy controls (HC) were administered neuroperformance tests of ambulation, hand dexterity, processing speed, and memory. Regression-based norms were created from the HC sample. LPA identified latent profiles using the regression-based z-scores. Finally, multinomial logistic regression tested for negative work event differences among the latent profiles. Four profiles were identified via LPA: a common profile (55%) characterized by slightly below average performance in all domains, a broadly low-performing profile (18%), a poor motor abilities profile with average cognition (17%), and a generally high-functioning profile (9%). Multinomial regression analysis revealed that the uniformly low-performing profile demonstrated a higher likelihood of reported negative work events. Employed MS patients with co-occurring motor, memory and processing speed impairments were most likely to report a negative work event, classifying them as uniquely at risk for job loss.
Improving Working Memory Efficiency by Reframing Metacognitive Interpretation of Task Difficulty
ERIC Educational Resources Information Center
Autin, Frederique; Croizet, Jean-Claude
2012-01-01
Working memory capacity, our ability to manage incoming information for processing purposes, predicts achievement on a wide range of intellectual abilities. Three randomized experiments (N = 310) tested the effectiveness of a brief psychological intervention designed to boost working memory efficiency (i.e., state working memory capacity) by…
Lee, Young Tack; Kwon, Hyeokjae; Kim, Jin Sung; Kim, Hong-Hee; Lee, Yun Jae; Lim, Jung Ah; Song, Yong-Won; Yi, Yeonjin; Choi, Won-Kook; Hwang, Do Kyung; Im, Seongil
2015-10-27
Two-dimensional van der Waals (2D vdWs) materials are a class of new materials that can provide important resources for future electronics and materials sciences due to their unique physical properties. Among 2D vdWs materials, black phosphorus (BP) has exhibited significant potential for use in electronic and optoelectronic applications because of its allotropic properties, high mobility, and direct and narrow band gap. Here, we demonstrate a few-layered BP-based nonvolatile memory transistor with a poly(vinylidenefluoride-trifluoroethylene) (P(VDF-TrFE)) ferroelectric top gate insulator. Experiments showed that our BP-based ferroelectric transistors operate satisfactorily at room temperature in ambient air and exhibit a clear memory window. Unlike conventional ambipolar BP transistors, our ferroelectric transistors showed only p-type characteristics due to the carbon-fluorine (C-F) dipole effect of the P(VDF-TrFE) layer, as well as the highest linear mobility value of 1159 cm² V⁻¹ s⁻¹ with a 10³ on/off current ratio. For more advanced memory applications beyond unit memory devices, we implemented two memory inverter circuits, a resistive-load inverter circuit and a complementary inverter circuit, combined with an n-type molybdenum disulfide (MoS2) nanosheet. Our memory inverter circuits displayed a clear memory window of 15 V and memory output voltage efficiency of 95%.
Sharp-Wave Ripples in Primates Are Enhanced near Remembered Visual Objects.
Leonard, Timothy K; Hoffman, Kari L
2017-01-23
The hippocampus plays an important role in memory for events that are distinct in space and time. One of the strongest, most synchronous neural signals produced by the hippocampus is the sharp-wave ripple (SWR), observed in a variety of mammalian species during offline behaviors, such as slow-wave sleep [1-3] and quiescent waking and pauses in exploration [4-8], leading to long-standing and widespread theories of its contribution to plasticity and memory during these inactive or immobile states [9-14]. Indeed, during sleep and waking inactivity, hippocampal SWRs in rodents appear to support spatial long-term and working memory [4, 15-23], but so far, they have not been linked to memory in primates. More recently, SWRs have been observed during active, visual scene exploration in macaques [24], opening up the possibility that these active-state ripples in the primate hippocampus are linked to memory for objects embedded in scenes. By measuring hippocampal SWRs in macaques during search for scene-contextualized objects, we found that SWR rate increased with repeated presentations. Furthermore, gaze during SWRs was more likely to be near the target object on repeated than on novel presentations, even after accounting for overall differences in gaze location with scene repetition. This proximity bias with repetition occurred near the time of target object detection for remembered targets. The increase in ripple likelihood near remembered visual objects suggests a link between ripples and memory in primates; specifically, SWRs may reflect part of a mechanism supporting the guidance of search based on past experience. Copyright © 2017 Elsevier Ltd. All rights reserved.
Memory and Spin Injection Devices Involving Half Metals
Shaughnessy, M.; Snow, Ryan; Damewood, L.; ...
2011-01-01
We suggest memory and spin injection devices fabricated with half-metallic materials and based on the anomalous Hall effect. Schematic diagrams of the memory chips, in thin film and bulk crystal form, are presented. Spin injection devices made in thin film form are also suggested. These devices do not need any external magnetic field but make use of their own magnetization. Only a gate voltage is needed. The carriers are 100% spin polarized. Memory devices may potentially be smaller, faster, and less volatile than existing ones, and the injection devices may be much smaller and more efficient than existing spin injection devices.
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular debiasing method is inverse probability weighting, proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied in the estimation of average treatment effect in observational causal inferences. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
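The augmented IPW estimator described above combines a Horvitz-Thompson weighting term with a regression-based augmentation term. A minimal sketch (illustrative names and a simplified setting, not the paper's empirical-likelihood construction):

```python
import numpy as np

def aipw_mean(y, r, pi_hat, m_hat):
    """Augmented inverse probability weighting (doubly robust) estimate
    of E[Y] when responses are missing at random.

    y      : outcomes (entries where r == 0 may hold any finite placeholder)
    r      : response indicator (1 = observed, 0 = missing)
    pi_hat : fitted propensity scores P(R = 1 | X)
    m_hat  : fitted outcome regression E[Y | X]
    """
    y, r = np.asarray(y, float), np.asarray(r, float)
    pi_hat, m_hat = np.asarray(pi_hat, float), np.asarray(m_hat, float)
    ht_term = r * y / pi_hat                   # Horvitz-Thompson part
    aug_term = (r - pi_hat) / pi_hat * m_hat   # augmentation part
    return np.mean(ht_term - aug_term)
```

The estimator is consistent if either `pi_hat` or `m_hat` comes from a correctly specified model, which is the double-robustness property the abstract refers to.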
Fukushima, Kikuro; Ito, Norie; Barnes, Graham R; Onishi, Sachiyo; Kobayashi, Nobuyoshi; Takei, Hidetoshi; Olley, Peter M; Chiba, Susumu; Inoue, Kiyoharu; Warabi, Tateo
2015-01-01
While retinal image motion is the primary input for smooth-pursuit, its efficiency depends on cognitive processes including prediction. Reports are conflicting on impaired prediction during pursuit in Parkinson's disease. By separating two major components of prediction (image motion direction memory and movement preparation) using a memory-based pursuit task, and by comparing tracking eye movements with those during a simple ramp-pursuit task that did not require visual memory, we examined smooth-pursuit in 25 patients with Parkinson's disease and compared the results with 14 age-matched controls. In the memory-based pursuit task, cue 1 indicated visual motion direction, whereas cue 2 instructed the subjects to prepare to pursue or not to pursue. Based on the cue-information memory, subjects were asked to pursue the correct spot from two oppositely moving spots or not to pursue. In 24/25 patients, the cue-information memory was normal, but movement preparation and execution were impaired. Specifically, unlike controls, most of the patients (18/24 = 75%) lacked initial pursuit during the memory task and started tracking the correct spot by saccades. Conversely, during simple ramp-pursuit, most patients (83%) exhibited initial pursuit. Popping-out of the correct spot motion during memory-based pursuit was ineffective for enhancing initial pursuit. The results were similar irrespective of levodopa/dopamine agonist medication. Our results indicate that the extra-retinal mechanisms of most patients are dysfunctional in initiating memory-based (not simple ramp) pursuit. A dysfunctional pursuit loop between frontal eye fields (FEF) and basal ganglia may contribute to the impairment of extra-retinal mechanisms, resulting in deficient pursuit commands from the FEF to brainstem. PMID:25825544
NASA Astrophysics Data System (ADS)
Sleiman, A.; Rosamond, M. C.; Alba Martin, M.; Ayesh, A.; Al Ghaferi, A.; Gallant, A. J.; Mabrook, M. F.; Zeze, D. A.
2012-01-01
A pentacene-based organic metal-insulator-semiconductor memory device, utilizing single walled carbon nanotubes (SWCNTs) for charge storage is reported. SWCNTs were embedded, between SU8 and polymethylmethacrylate to achieve an efficient encapsulation. The devices exhibit capacitance-voltage clockwise hysteresis with a 6 V memory window at a ±30 V sweep voltage, attributed to charging and discharging of SWCNTs. As the applied gate voltage exceeds the SU8 breakdown voltage, charge leakage is induced in SU8 to allow more charges to be stored in the SWCNT nodes. The devices exhibited high storage density (≈9.15 × 10¹¹ cm⁻²) and demonstrated 94% charge retention due to the superior encapsulation.
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for the computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is an order of magnitude less than the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves the state-of-the-art accuracies, while with significantly lower computation cost and memory requirement.
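The hierarchical K-means tree underlying this scheme can be sketched generically: recursively cluster the data, then answer queries by descending to the nearest-centroid leaf. This is a plain (non-discriminative) HKTree with illustrative names; the paper's discriminative weighting is not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=10):
    """Tiny Lloyd k-means with a deterministic spread initialization
    (a small helper; any k-means implementation would do)."""
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    lab = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return C, lab

def build_hktree(X, idx=None, k=2, leaf_size=8):
    """Recursively split the index set by k-means into a tree of centroids."""
    if idx is None:
        idx = np.arange(len(X))
    if len(idx) <= leaf_size:
        return {"leaf": idx}
    C, lab = kmeans(X[idx], k)
    if np.unique(lab).size < 2:       # degenerate split: stop recursing
        return {"leaf": idx}
    return {"centers": C,
            "children": [build_hktree(X, idx[lab == j], k, leaf_size)
                         for j in range(k)]}

def nn_query(tree, X, q):
    """Descend to the nearest-centroid leaf, then do exact NN inside it;
    cost grows with tree depth rather than with the dataset size."""
    while "leaf" not in tree:
        j = ((tree["centers"] - q) ** 2).sum(1).argmin()
        tree = tree["children"][j]
    leaf = tree["leaf"]
    return leaf[((X[leaf] - q) ** 2).sum(1).argmin()]
```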
GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid
NASA Astrophysics Data System (ADS)
Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua
2016-10-01
A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.
Fast Genome-Wide QTL Association Mapping on Pedigree and Population Data.
Zhou, Hua; Blangero, John; Dyer, Thomas D; Chan, Kei-Hang K; Lange, Kenneth; Sobel, Eric M
2017-04-01
Since most analysis software for genome-wide association studies (GWAS) currently exploit only unrelated individuals, there is a need for efficient applications that can handle general pedigree data or mixtures of both population and pedigree data. Even datasets thought to consist of only unrelated individuals may include cryptic relationships that can lead to false positives if not discovered and controlled for. In addition, family designs possess compelling advantages. They are better equipped to detect rare variants, control for population stratification, and facilitate the study of parent-of-origin effects. Pedigrees selected for extreme trait values often segregate a single gene with strong effect. Finally, many pedigrees are available as an important legacy from the era of linkage analysis. Unfortunately, pedigree likelihoods are notoriously hard to compute. In this paper, we reexamine the computational bottlenecks and implement ultra-fast pedigree-based GWAS analysis. Kinship coefficients can either be based on explicitly provided pedigrees or automatically estimated from dense markers. Our strategy (a) works for random sample data, pedigree data, or a mix of both; (b) entails no loss of power; (c) allows for any number of covariate adjustments, including correction for population stratification; (d) allows for testing SNPs under additive, dominant, and recessive models; and (e) accommodates both univariate and multivariate quantitative traits. On a typical personal computer (six CPU cores at 2.67 GHz), analyzing a univariate HDL (high-density lipoprotein) trait from the San Antonio Family Heart Study (935,392 SNPs on 1,388 individuals in 124 pedigrees) takes less than 2 min and 1.5 GB of memory. Complete multivariate QTL analysis of the three time-points of the longitudinal HDL multivariate trait takes less than 5 min and 1.5 GB of memory. 
The algorithm is implemented as the Ped-GWAS Analysis (Option 29) in the Mendel statistical genetics package, which is freely available for Macintosh, Linux, and Windows platforms from http://genetics.ucla.edu/software/mendel. © 2016 WILEY PERIODICALS, INC.
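One ingredient mentioned above, kinship estimation from dense markers, can be sketched with a VanRaden-style genomic relationship matrix. This is a standard construction given as an assumption-laden illustration, not necessarily Mendel's exact formula:

```python
import numpy as np

def grm(G):
    """Genomic relationship matrix from a genotype matrix G
    (rows = individuals, columns = SNPs, entries = allele counts 0/1/2).
    Centers each SNP by its expected allele count 2p and normalizes by
    the expected variance under Hardy-Weinberg equilibrium."""
    G = np.asarray(G, float)
    p = G.mean(axis=0) / 2.0              # estimated allele frequencies
    Z = G - 2.0 * p                       # center by expected count
    denom = 2.0 * np.sum(p * (1.0 - p))   # HWE normalization
    return Z @ Z.T / denom
```

Entry (i, j) of the result estimates the genetic relatedness of individuals i and j; in the pedigree-free setting this matrix plays the role of the pedigree-derived kinship coefficients.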
Eckerström, Marie; Göthlin, Mattias; Rolstad, Sindre; Hessen, Erik; Eckerström, Carl; Nordlund, Arto; Johansson, Boo; Svensson, Johan; Jonsson, Michael; Sacuiu, Simona; Wallin, Anders
2017-01-01
Subjective cognitive decline (SCD) and biomarker-based "at-risk" concepts such as "preclinical" Alzheimer's disease (AD) have been developed to predict AD dementia before objective cognitive impairment is detectable. We longitudinally evaluated cognitive outcome when using these classifications. Memory clinic patients (n = 235) were classified as SCD (n = 122), subtle cognitive decline (n = 36), or mild cognitive impairment (n = 77), and subsequently subclassified into SCDplus and National Institute on Aging-Alzheimer's Association (NIA-AA) stages 0 to 3. Mean (standard deviation) follow-up time was 48 (35) months. The proportion declining cognitively and the prognostic accuracy for cognitive decline were calculated for all classifications. Among SCDplus patients, 43% to 48% declined cognitively. Among NIA-AA stage 1 to 3 patients, 50% to 100% declined cognitively. The highest positive likelihood ratios (+LRs) for subsequent cognitive decline (+LR 6.3), dementia (+LR 3.4), and AD dementia (+LR 6.5) were found for NIA-AA stage 2. In a memory clinic setting, NIA-AA stage 2 seems to be the most successful classification in predicting objective cognitive decline, dementia, and AD dementia.
A new pattern associative memory model for image recognition based on Hebb rules and dot product
NASA Astrophysics Data System (ADS)
Gao, Mingyue; Deng, Limiao; Wang, Yanjiang
2018-04-01
A great number of associative memory models have been proposed to realize information storage and retrieval inspired by the human brain in the last few years. However, there is still much room for improvement in those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules and the retrieval is implemented by a normalized dot product operation. Our proposed model can not only fulfill rapid memory storage and retrieval for visual information but also support incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
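A Hebbian store with normalized-dot-product retrieval of the kind described might be sketched as follows (class and method names are illustrative, not the paper's API; the paper's model operates on image feature patterns):

```python
import numpy as np

class HebbMemory:
    """Minimal pattern associative memory: Hebbian storage of normalized
    patterns, retrieval by the largest normalized dot product (cosine
    similarity). New patterns are appended, so earlier memories are
    untouched -- the incremental-learning property the abstract notes."""
    def __init__(self):
        self.patterns = []   # stored unit-norm pattern vectors
        self.labels = []

    def learn(self, pattern, label):
        v = np.asarray(pattern, float).ravel()
        self.patterns.append(v / np.linalg.norm(v))   # Hebbian storage
        self.labels.append(label)

    def recall(self, probe):
        v = np.asarray(probe, float).ravel()
        v = v / np.linalg.norm(v)
        sims = np.stack(self.patterns) @ v            # normalized dot products
        return self.labels[int(np.argmax(sims))]
```

A noisy probe is recognized as long as it remains closest in angle to its stored pattern, which is the sense in which the dot-product retrieval is robust.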
MSuPDA: A Memory Efficient Algorithm for Sequence Alignment.
Khan, Mohammad Ibrahim; Kamal, Md Sarwar; Chowdhury, Linkon
2016-03-01
Space complexity is a million-dollar question in DNA sequence alignment. In this regard, memory saving under pushdown automata can help reduce the space occupied in computer memory. In our proposed process, an anchor seed (AS) is selected from a given data set of nucleotide base pairs for local sequence alignment. A quick splitting technique separates the AS from all the DNA genome segments. The selected AS is placed in the input unit of a pushdown automaton (PDA), while the whole DNA genome segments are placed on the PDA's stack. The AS from the input unit is matched against the DNA genome segments from the stack. Matches, mismatches, and indels of nucleotides are popped from the stack under the PDA's control unit. Each POP operation on the stack frees the memory cell occupied by the corresponding nucleotide base pair.
Hamilton, Jane E; Desai, Pratikkumar V; Hoot, Nathan R; Gearing, Robin E; Jeong, Shin; Meyer, Thomas D; Soares, Jair C; Begley, Charles E
2016-11-01
Behavioral health-related emergency department (ED) visits have been linked with ED overcrowding, an increased demand on limited resources, and a longer length of stay (LOS) due in part to patients being admitted to the hospital but waiting for an inpatient bed. This study examines factors associated with the likelihood of hospital admission for ED patients with behavioral health conditions at 16 hospital-based EDs in a large urban area in the southern United States. Using Andersen's Behavioral Model of Health Service Use for guidance, the study examined the relationship between predisposing (characteristics of the individual, i.e., age, sex, race/ethnicity), enabling (system or structural factors affecting healthcare access), and need (clinical) factors and the likelihood of hospitalization following ED visits for behavioral health conditions (n = 28,716 ED visits). In the adjusted analysis, a logistic fixed-effects model with blockwise entry was used to estimate the relative importance of predisposing, enabling, and need variables added separately as blocks while controlling for variation in unobserved hospital-specific practices across hospitals and time in years. Significant predisposing factors associated with an increased likelihood of hospitalization following an ED visit included increasing age, while African American race was associated with a lower likelihood of hospitalization. Among enabling factors, arrival by emergency transport and a longer ED LOS were associated with a greater likelihood of hospitalization while being uninsured and the availability of community-based behavioral health services within 5 miles of the ED were associated with lower odds. 
Among need factors, having a discharge diagnosis of schizophrenia/psychotic spectrum disorder, an affective disorder, a personality disorder, dementia, or an impulse control disorder as well as secondary diagnoses of suicidal ideation and/or suicidal behavior increased the likelihood of hospitalization following an ED visit. The block of enabling factors was the strongest predictor of hospitalization following an ED visit compared to predisposing and need factors. Our findings also provide evidence of disparities in hospitalization of the uninsured and racial and ethnic minority patients with ED visits for behavioral health conditions. Thus, improved access to community-based behavioral health services and an increased capacity of inpatient psychiatric hospitals for treating indigent patients may be needed to improve the efficiency of ED services in our region for patients with behavioral health conditions. These need-factor findings also suggest an opportunity for improving the efficiency of ED care through the provision of psychiatric services to stabilize and treat patients with serious mental illness. © 2016 by the Society for Academic Emergency Medicine.
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image storage architecture offers better speed and device-consumption efficiency than the previous kernel storage architecture. Further, we improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a speed boost of more than 67×; 3) 71.4% energy savings.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
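A stripped-down, grid-based maximum-likelihood tracker for the threshold and slope of a logistic psychometric function conveys the flavor of the UML procedure. This is a sketch under stated assumptions, not the toolbox API: the real procedure also tracks the lapse rate and selects each next stimulus at information-maximizing "sweet points".

```python
import numpy as np

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function for a 2AFC task: guess rate gamma,
    lapse rate lam, threshold alpha, slope beta."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

class GridML:
    """Maintain the log-likelihood of (threshold, slope) on a grid and
    update it after every trial; the ML estimate is the grid argmax."""
    def __init__(self, alphas, betas):
        self.A, self.B = np.meshgrid(alphas, betas, indexing="ij")
        self.loglik = np.zeros_like(self.A)

    def update(self, x, correct):
        p = psychometric(x, self.A, self.B)
        self.loglik += np.log(p if correct else 1.0 - p)

    def estimate(self):
        i, j = np.unravel_index(np.argmax(self.loglik), self.loglik.shape)
        return self.A[i, j], self.B[i, j]
```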
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model, which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
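The core trick, replacing fixed-step Euler integration of a low-pass trace with an exact update applied only at spike events, can be sketched for a single first-stage trace. This is an illustration of the event-driven idea only; the full BCPNN rule chains three such filtering stages and eight state variables.

```python
import math

class EventDrivenTrace:
    """Low-pass spike trace obeying dz/dt = -z/tau + s(t), where s(t) is
    a train of unit impulses. Between events the ODE has the closed-form
    solution z(t) = z(t0) * exp(-(t - t0)/tau), so the state only needs
    updating when a spike arrives or the value is read."""
    def __init__(self, tau):
        self.tau = tau
        self.z = 0.0
        self.t_last = 0.0

    def spike(self, t):
        # decay analytically since the previous event, then add the impulse
        self.z *= math.exp(-(t - self.t_last) / self.tau)
        self.z += 1.0
        self.t_last = t

    def value(self, t):
        # read the trace at time t without mutating the stored state
        return self.z * math.exp(-(t - self.t_last) / self.tau)
```

Because the decay factor depends only on the elapsed time, it is exactly the quantity the paper precomputes in a look-up table; the per-event cost is independent of the simulation step size, unlike explicit Euler.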
A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow
NASA Astrophysics Data System (ADS)
Yu, Lei; Xia, Mingliang; Xuan, Li
2013-10-01
The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (for example, age-related macular degeneration and diabetic retinopathy). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels, and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels at different likelihoods. Firstly, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and that the intensities of the vessel images accurately represent the likelihood of the vessels.
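The matched-filtering step rests on correlating the image with a Gaussian vessel cross-section profile. A single-scale, single-orientation sketch follows (the paper sweeps multiple scales and orientations and adds the gradient vector flow stage afterwards; function names are illustrative):

```python
import numpy as np

def matched_kernel(sigma, half):
    """Mean-subtracted Gaussian cross-section kernel. Subtracting the
    mean makes the filter respond to ridge-like structure rather than
    to uniform brightness."""
    x = np.arange(-half, half + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k - k.mean()

def filter_rows(img, kernel):
    """Convolve every image row with the kernel; this detects vertical
    bright ridges whose width roughly matches the kernel's sigma."""
    return np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
```

In a full pipeline this response map would be computed for a bank of rotated kernels and the per-pixel maximum kept, giving the vessel-likelihood image the abstract refers to.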
Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding
NASA Astrophysics Data System (ADS)
Dung, Lan-Rong; Lin, Meng-Chun
This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially when local memory resources are limited; reducing memory access saves the notoriously large power consumption of ME. The key to reducing memory accesses is a center-biased algorithm, which performs motion vector (MV) searching with the minimum amount of search data. To improve data reusability, the proposed dual-search-windowing (DSW) approach uses a secondary search window only when the search requires it. By doing so, the loading of search windows is alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.
Cognitive Control Network Contributions to Memory-Guided Visual Attention
Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.
2016-01-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253
Loewy, Rachel; Fisher, Melissa; Schlosser, Danielle A; Biagianti, Bruno; Stuart, Barbara; Mathalon, Daniel H; Vinogradov, Sophia
2016-07-01
Individuals at clinical high risk (CHR) for psychosis demonstrate cognitive impairments that predict later psychotic transition and real-world functioning. Cognitive training has shown benefits in schizophrenia, but has not yet been adequately tested in the CHR population. In this double-blind randomized controlled trial, CHR individuals (N = 83) were given laptop computers and trained at home on 40 hours of auditory processing-based exercises designed to target verbal learning and memory operations, or on computer games (CG). Participants were assessed with neurocognitive tests based on the Measurement and Treatment Research to Improve Cognition in Schizophrenia initiative (MATRICS) battery and rated on symptoms and functioning. Groups were compared before and after training using a mixed-effects model with restricted maximum likelihood estimation, given the high study attrition rate (42%). Participants in the targeted cognitive training group showed a significant improvement in Verbal Memory compared to CG participants (effect size = 0.61). Positive and Total symptoms improved in both groups over time. CHR individuals showed patterns of training-induced cognitive improvement in verbal memory consistent with prior observations in schizophrenia. This is a particularly vulnerable domain in individuals at-risk for psychosis that predicts later functioning and psychotic transition. Ongoing follow-up of this cohort will assess the durability of training effects in CHR individuals, as well as the potential impact on symptoms and functioning over time. Clinical Trials Number: NCT00655239. URL: https://clinicaltrials.gov/ct2/show/NCT00655239?term=vinogradov&rank=5. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2016.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty, and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) It performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB), and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory
Hollingworth, Andrew; Beck, Valerie M.
2016-01-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681
A Study on the Learning Efficiency of Multimedia-Presented, Computer-Based Science Information
ERIC Educational Resources Information Center
Guan, Ying-Hua
2009-01-01
This study investigated the effects of multimedia presentations on the efficiency of learning scientific information (i.e. information on basic anatomy of human brains and their functions, the definition of cognitive psychology, and the structure of human memory). Experiment 1 investigated whether the modality effect could be observed when the…
LOD-based clustering techniques for efficient large-scale terrain storage and visualization
NASA Astrophysics Data System (ADS)
Bao, Xiaohong; Pajarola, Renato
2003-05-01
Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces the level-of-detail (LOD) information to terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works from out-of-core.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
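The NDMMF expressions themselves are not reproduced in this abstract; as an illustration of the reported technique, the following hedged sketch applies Newton-Raphson iteration to a one-parameter maximum-likelihood problem (a Poisson rate). All names and data are hypothetical, not from the report:

```python
def newton_ml(score, score_prime, theta0, tol=1e-10, max_iter=100):
    # Newton-Raphson on the score (first derivative of the log-likelihood):
    # theta_{k+1} = theta_k - score(theta_k) / score'(theta_k)
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / score_prime(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Example: ML estimate of a Poisson rate from a small sample of counts.
data = [2, 3, 1, 4, 2]
n, s = len(data), sum(data)
score = lambda lam: s / lam - n        # d/d(lam) of the log-likelihood
score_prime = lambda lam: -s / lam**2  # second derivative
lam_hat = newton_ml(score, score_prime, 1.0)  # converges to the sample mean
```

For the NDMMF, the same iteration runs over a system of simultaneous equations, with the score and its derivative replaced by the gradient and Hessian of the model's log-likelihood.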
Context-Dependent Control over Attentional Capture
ERIC Educational Resources Information Center
Cosman, Joshua D.; Vecera, Shaun P.
2013-01-01
A number of studies have demonstrated that the likelihood of a salient item capturing attention is dependent on the "attentional set" an individual employs in a given situation. The instantiation of an attentional set is often viewed as a strategic, voluntary process, relying on working memory systems that represent immediate task…
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
Advances in digital signal processing have been progressing toward efficient use of memory and processing. Both of these factors can be exploited by feasible image-storage techniques that compute the minimum information of an image, enhancing later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be used to recognize an image by comparing its key features against saved SIFT key-point descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientations and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrasting shades of an image.
Obstructive sleep apnea exaggerates cognitive dysfunction in stroke patients.
Zhang, Yan; Wang, Wanhua; Cai, Sijie; Sheng, Qi; Pan, Shenggui; Shen, Fang; Tang, Qing; Liu, Yang
2017-05-01
Obstructive sleep apnea (OSA) is very common in stroke survivors. It potentially worsens the cognitive dysfunction and inhibits their functional recovery. However, whether OSA independently damages the cognitive function in stroke patients is unclear. A simple method for evaluating OSA-induced cognitive impairment is also missing. Forty-four stroke patients six weeks after onset and 24 non-stroke patients with snoring were recruited for the polysomnographic study of OSA and sleep architecture. Their cognitive status was evaluated with a validated Chinese version of the Cambridge Prospective Memory Test. The relationships between memory deficits and respiratory, sleep, and dementia-related clinical variables were analyzed with correlation and multiple linear regression tests. OSA significantly and independently damaged time- and event-based prospective memory in stroke patients, although it had less power than the stroke itself. The impairment of prospective memory was correlated with increased apnea-hypopnea index, decreased minimal and mean levels of peripheral oxygen saturation, and disrupted sleep continuity (reduced sleep efficiency and increased microarousal index). Further regression analysis identified minimal peripheral oxygen saturation and sleep efficiency as the two most important predictors of decreased time-based prospective memory in stroke patients. OSA independently contributes to the cognitive dysfunction in stroke patients, potentially through OSA-caused hypoxemia and sleep discontinuity. The prospective memory test is a simple but sensitive method to detect OSA-induced cognitive impairment in stroke patients. Proper therapies of OSA might improve the cognitive function and increase the life quality of stroke patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Synthesis of energy-efficient FSMs implemented in PLD circuits
NASA Astrophysics Data System (ADS)
Nawrot, Radosław; Kulisz, Józef; Kania, Dariusz
2017-11-01
The paper presents an outline of a simple synthesis method of energy-efficient FSMs. The idea consists in using local clock gating to selectively block the clock signal, if no transition of a state of a memory element is required. The research was dedicated to logic circuits using Programmable Logic Devices as the implementation platform, but the conclusions can be applied to any synchronous circuit. The experimental section reports a comparison of three methods of implementing sequential circuits in PLDs with respect to clock distribution: the classical fully synchronous structure, the structure exploiting the Enable Clock inputs of memory elements, and the structure using clock gating. The results show that the approach based on clock gating is the most efficient one, and it leads to significant reduction of dynamic power consumed by the FSM.
Capture-recapture studies for multiple strata including non-markovian transitions
Brownie, C.; Hines, J.E.; Nichols, J.D.; Pollock, K.H.; Hestbeck, J.B.
1993-01-01
We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced parameter versions of this model. We also introduce models that relax the Markovian assumption and allow 'memory' to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.
Where is the "meta" in animal metacognition?
Kornell, Nate
2014-05-01
Apes, dolphins, and some monkeys seem to have metacognitive abilities: They can accurately evaluate the likelihood that their response in a cognitive task was (or will be) correct. These certainty judgments are seen as significant because they imply that animals can evaluate internal cognitive states, which may entail meaningful self-reflection. But little research has investigated what is being reflected upon: Researchers have assumed that when animals make metacognitive judgments they evaluate internal memory strength. Yet decades of research have demonstrated that humans cannot directly evaluate internal memory strength. Instead, they make certainty judgments by drawing inferences from cues they can evaluate, such as familiarity and ease of processing. It seems likely that animals do the same, but this hypothesis has not been tested. I suggest two strategies for investigating the internal cues that underlie animal metacognitive judgments. It is possible that animals, like humans, are capable of making certainty judgments based on internal cues without awareness or meaningful self-reflection. ©2014 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate, and one flood event was used to validate, the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function, and vary for different likelihood functions. Most of the parameters were better defined by formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1-L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data.
Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable parameter estimates due to violation of the residual-error assumptions. Thus, likelihood function L7 provides a credible posterior distribution of model parameters and can therefore be employed for further applications.
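As an example of one of the informal likelihood functions named in this abstract, the Nash-Sutcliffe efficiency is one minus the ratio of residual variance to the variance of the observations about their mean. A minimal sketch (variable names are ours, not from the study):

```python
def nash_sutcliffe(observed, simulated):
    # NS = 1 - sum((o - s)^2) / sum((o - mean(o))^2)
    # NS = 1 for a perfect fit; NS = 0 for a model no better than the mean.
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Used as a likelihood surrogate in GLUE, higher NS values mark parameter sets as more "behavioral", without the distributional assumptions carried by the formal likelihood functions L5-L7.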
On the Suitability of Suffix Arrays for Lempel-Ziv Data Compression
NASA Astrophysics Data System (ADS)
Ferreira, Artur J.; Oliveira, Arlindo L.; Figueiredo, Mário A. T.
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
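A hedged sketch of the idea (not the authors' algorithm): build a suffix array over the already-seen text and use it to locate the longest dictionary match for the LZ encoder. The naive construction and the linear scan over the array are for illustration only; the paper's encoder searches the array far more efficiently:

```python
def suffix_array(text):
    # Naive O(n^2 log n) construction; practical encoders use faster builders.
    # The array itself occupies a fixed amount of memory: one index per symbol.
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(text, sa, pos):
    # Longest prefix of text[pos:] that also starts at some earlier position.
    # (LZ77-style: the match may run past pos, i.e., overlap the lookahead.)
    pattern = text[pos:]
    best_len, best_src = 0, -1
    for i in sa:
        if i >= pos:
            continue
        l = 0
        while l < len(pattern) and text[i + l] == pattern[l]:
            l += 1
        if l > best_len:
            best_len, best_src = l, i
    return best_src, best_len
```

The memory argument in the abstract follows from the first comment: a suffix array stores one integer per input symbol regardless of the text, whereas a suffix tree's node count varies with the text being encoded.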
Low latency messages on distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Saltz, Joel
1993-01-01
Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send either data or procedures provides flexibility for minimizing message latency, depending on the type of communication being performed. The tests performed and the proposed interface are described.
Genotype Imputation with Millions of Reference Samples
Browning, Brian L.; Browning, Sharon R.
2016-01-01
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515
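The linear-interpolation step mentioned above can be sketched as follows. This is an illustration of interpolating imputed allele dosages at ungenotyped positions from flanking genotyped markers, with hypothetical names and data; it is not Beagle's code, which performs the interpolation within its probability model:

```python
import bisect

def interpolate_dosages(geno_pos, geno_dose, query_pos):
    # Linearly interpolate dosages at ungenotyped positions from the two
    # flanking genotyped markers; geno_pos must be sorted ascending.
    out = []
    for p in query_pos:
        j = bisect.bisect_left(geno_pos, p)
        if j == 0:
            out.append(geno_dose[0])    # before the first marker: clamp
        elif j == len(geno_pos):
            out.append(geno_dose[-1])   # past the last marker: clamp
        else:
            x0, x1 = geno_pos[j - 1], geno_pos[j]
            y0, y1 = geno_dose[j - 1], geno_dose[j]
            w = (p - x0) / (x1 - x0)
            out.append(y0 + w * (y1 - y0))
    return out
```

Restricting the expensive probability model to the genotyped markers and filling the ungenotyped variants by interpolation is what keeps both memory and CPU time bounded as reference panels grow.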
Zhang, Qi-Jian; Miao, Shi-Feng; Li, Hua; He, Jing-Hui; Li, Na-Jun; Xu, Qing-Feng; Chen, Dong-Yun; Lu, Jian-Mei
2017-06-19
Small-molecule-based multilevel memory devices have attracted increasing attention because of their advantages, such as super-high storage density, fast reading speed, light weight, low energy consumption, and shock resistance. However, the fabrication of small-molecule-based devices always requires expensive vacuum-deposition techniques or high temperatures for spin-coating. Herein, through rational tailoring of a previous molecule, DPCNCANA (4,4'-(6,6'-bis(2-octyl-1,3-dioxo-2,3-dihydro-1H-benzo[de]isoquinolin-6-yl)-9H,9'H-[3,3'-bicarbazole]-9,9'-diyl)dibenzonitrile), a novel bat-shaped A-D-A-type (A-D-A=acceptor-donor-acceptor) symmetric framework has been successfully synthesized and can be dissolved in common solvents at room temperature. Additionally, it has a low-energy bandgap and dense intramolecular stacking in the film state. The solution-processed memory devices exhibited high-performance nonvolatile multilevel data-storage properties with low switching threshold voltages of about -1.3 and -2.7 V, which is beneficial for low power consumption. Our result should prompt the study of highly efficient solution-processed multilevel memory devices in the field of organic electronics. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitates calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in the memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
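Each 1D sweep of an ADI/LOD step reduces to a tridiagonal solve along one grid direction. A minimal, hedged sketch of such a solve (the Thomas algorithm; variable names are ours, not the authors' optimized implementation):

```python
def thomas(a, b, c, d):
    # Tridiagonal solve: a = sub-diagonal (a[0] unused), b = diagonal,
    # c = super-diagonal (c[-1] unused), d = right-hand side.
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal (forward elimination)
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n   # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

An implicit step along one axis assembles exactly such a system for every grid line, and the solve performs only a handful of flops per unknown, which is why memory bandwidth rather than arithmetic dominates the runtime, as the abstract emphasizes.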
Signal and noise extraction from analog memory elements for neuromorphic computing.
Gong, N; Idé, T; Kim, S; Boybat, I; Sebastian, A; Narayanan, V; Ando, T
2018-05-29
Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate the total variability into device-to-device variability and the inherent randomness of individual devices. These results highlight the usefulness of our methodology to realize ideal NVM devices for neuromorphic computing.
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
Badham, Stephen P; Poirier, Marie; Gandhi, Navina; Hadjivassiliou, Anna; Maylor, Elizabeth A
2016-11-01
From the perspective of memory-as-discrimination, whether a cue leads to correct retrieval simultaneously depends on the cue's relationship to (a) the memory target and (b) the other retrieval candidates. A corollary of the view is that increasing encoding-retrieval match may only help memory if it improves the cue's capacity to discriminate the target from competitors. Here, age differences in this discrimination process were assessed by manipulating the overlap between cues present at encoding and retrieval orthogonally with cue-target distinctiveness. In Experiment 1, associative memory differences for cue-target sets between young and older adults were minimized through training and retrieval efficiency was assessed through response time. In Experiment 2, age-group differences in associative memory were left to vary and retrieval efficiency was assessed through accuracy. Both experiments showed age-invariance in memory-as-discrimination: cues increasing encoding-retrieval match did not benefit memory unless they also improved discrimination between the target and competitors. Predictions based on the age-related associative deficit were also supported: prior knowledge alleviated age-related associative deficits (Experiment 1), and increasing encoding-retrieval match benefited older more than young adults (Experiment 2). We suggest that the latter occurred because older adults' associative memory deficits reduced the impact of competing retrieval candidates-hence the age-related benefit was not attributable to encoding-retrieval match per se, but rather it was a joint function of an increased probability of the cue connecting to the target combined with a decrease in competing retrieval candidates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Protein-Based Three-Dimensional Memories and Associative Processors
NASA Astrophysics Data System (ADS)
Birge, Robert
2008-03-01
The field of bioelectronics has benefited from the fact that nature has often solved problems similar to those which must be solved to create molecular electronic or photonic devices that operate with efficiency and reliability. Retinal proteins show great promise in bioelectronic devices because they operate with high efficiency (quantum yield ~0.65) and high cyclicity (>10^7), function over an extended wavelength range (360–630 nm), and can convert light into changes in voltage, pH, absorption, or refractive index. This talk will focus on a retinal protein called bacteriorhodopsin, the proton pump of the organism Halobacterium salinarum. Two memories based on this protein will be described. The first is an optical three-dimensional memory. This memory stores information using volume elements (voxels), and provides as much as a thousand-fold improvement in effective capacity over current technology. A unique branching reaction of a variant of bacteriorhodopsin is used to turn each protein into an optically addressed latched AND gate. Although three working prototypes have been developed, a number of cost/performance and architectural issues must be resolved prior to commercialization. The major issue is that the native protein provides a very inefficient branching reaction. Genetic engineering has improved performance by nearly 500-fold, but a further order of magnitude improvement is needed. Protein-based holographic associative memories will also be discussed. The human brain stores and retrieves information via association, and human intelligence is intimately connected to the nature and enormous capacity of this associative search and retrieval process. To a first order approximation, creativity can be viewed as the association of two seemingly disparate concepts to form a totally new construct. Thus, artificial intelligence requires large scale associative memories.
Current computer hardware does not provide an optimal environment for creating artificial intelligence due to the serial nature of random access memories. Software cannot provide a satisfactory work-around that does not introduce unacceptable latency. Holographic associative memories provide a useful approach to large scale associative recall. Bacteriorhodopsin has long been recognized for its outstanding holographic properties, and when utilized in the Paek and Psaltis design, provides a high-speed real-time associative memory with variable thresholding and feedback. What remains is to make an associative memory capable of high-speed association and long-term data storage. The use of directed evolution to create a protein with the necessary unique properties will be discussed.
High-speed noise-free optical quantum memory
NASA Astrophysics Data System (ADS)
Kaczmarek, K. T.; Ledingham, P. M.; Brecht, B.; Thomas, S. E.; Thekkadath, G. S.; Lazo-Arjona, O.; Munns, J. H. D.; Poem, E.; Feizpour, A.; Saunders, D. J.; Nunn, J.; Walmsley, I. A.
2018-04-01
Optical quantum memories are devices that store and recall quantum light and are vital to the realization of future photonic quantum networks. To date, much effort has been put into improving storage times and efficiencies of such devices to enable long-distance communications. However, less attention has been devoted to building quantum memories which add zero noise to the output. Even small additional noise can render the memory classical by destroying the fragile quantum signatures of the stored light. Therefore, noise performance is a critical parameter for all quantum memories. Here we introduce an intrinsically noise-free quantum memory protocol based on two-photon off-resonant cascaded absorption (ORCA). We demonstrate successful storage of GHz-bandwidth heralded single photons in a warm atomic vapor with no added noise, confirmed by the unaltered photon-number statistics upon recall. Our ORCA memory meets the stringent noise requirements for quantum memories while combining high-speed and room-temperature operation with technical simplicity, and therefore is immediately applicable to low-latency quantum networks.
Effect of visual and tactile feedback on kinematic synergies in the grasping hand.
Patel, Vrajeshri; Burns, Martin; Vinjamuri, Ramana
2016-08-01
The human hand uses a combination of feedforward and feedback mechanisms to control its many degrees of freedom efficiently during grasping. In this study, we used a synergy-based control model to determine the effect of sensory feedback on kinematic synergies in the grasping hand. Ten subjects performed two types of grasps: one that included feedback (real) and one without feedback (memory-guided), at two different speeds (rapid and natural). Kinematic synergies were extracted from rapid real and rapid memory-guided grasps using principal component analysis. Synergies extracted from memory-guided grasps revealed greater preservation of natural inter-finger relationships than those found in corresponding synergies extracted from real grasps. Reconstruction of natural real and natural memory-guided grasps was used to test the performance and generalizability of the synergies. A temporal analysis of reconstruction patterns revealed the differing contributions of individual synergies in real grasps versus memory-guided grasps. Finally, the results showed that memory-guided synergies could not reconstruct real grasps as accurately as real synergies could reconstruct memory-guided grasps. These results demonstrate how visual and tactile feedback affects a closed-loop synergy-based motor control system.
Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware
Stöckel, Andreas; Jenzen, Christoph; Thies, Michael; Rückert, Ulrich
2017-01-01
Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output. PMID:28878642
Disfluent presentations lead to the creation of more false memories.
Sanchez, Christopher A; Naylor, Jamie S
2018-01-01
The creation of false memories within the Deese-Roediger-McDermott (DRM) paradigm has been shown to be sensitive to many factors such as task instructions, participant mood, or even presentation modality. However, do other simple perceptual differences also impact performance on the DRM and the creation of false memories? This study explores the potential impact of changes in perceptual disfluency on DRM performance. To test for a potential influence of disfluency on false memory creation, participants viewed lists under either perceptually disfluent conditions or not. Results indicated that disfluency did significantly impact performance in the DRM paradigm; more disfluent presentations significantly increased the recall and recognition of unpresented information, although they did not impact recall or recognition of presented information. Thus, although disfluency did impact performance, disfluency did not produce a positive benefit related to overall task performance. This finding instead suggests that more disfluent presentations can increase the likelihood that false memories are created, and provide little positive performance benefit.
Del Casale, Antonio; Kotzalidis, Georgios D; Rapinesi, Chiara; Sorice, Serena; Girardi, Nicoletta; Ferracuti, Stefano; Girardi, Paolo
2016-01-01
The nature of the alteration of the response to cognitive tasks in first-episode psychosis (FEP) still awaits clarification. We used activation likelihood estimation, an increasingly used method in evaluating normal and pathological brain function, to identify activation changes in functional magnetic resonance imaging (fMRI) studies of FEP during attentional and memory tasks. We included 11 peer-reviewed fMRI studies assessing FEP patients versus healthy controls (HCs) during performance of attentional and memory tasks. Our database comprised 290 patients with FEP, matched with 316 HCs. Between-group analyses showed that HCs, compared to FEP patients, exhibited hyperactivation of the right middle frontal gyrus (Brodmann area, BA, 9), right inferior parietal lobule (BA 40), and right insula (BA 13) during attentional task performances and hyperactivation of the left insula (BA 13) during memory task performances. Right frontal, parietal, and insular dysfunction during attentional task performance and left insular dysfunction during memory task performance are significant neural functional FEP correlates. © 2016 S. Karger AG, Basel.
Quantum storage of entangled telecom-wavelength photons in an erbium-doped optical fibre
NASA Astrophysics Data System (ADS)
Saglamyurek, Erhan; Jin, Jeongwan; Verma, Varun B.; Shaw, Matthew D.; Marsili, Francesco; Nam, Sae Woo; Oblak, Daniel; Tittel, Wolfgang
2015-02-01
The realization of a future quantum Internet requires the processing and storage of quantum information at local nodes and interconnecting distant nodes using free-space and fibre-optic links. Quantum memories for light are key elements of such quantum networks. However, to date, neither an atomic quantum memory for non-classical states of light operating at a wavelength compatible with standard telecom fibre infrastructure, nor a fibre-based implementation of a quantum memory, has been reported. Here, we demonstrate the storage and faithful recall of the state of a 1,532 nm wavelength photon entangled with a 795 nm photon, in an ensemble of cryogenically cooled erbium ions doped into a 20-m-long silica fibre, using a photon-echo quantum memory protocol. Despite its currently limited efficiency and storage time, our broadband light-matter interface brings fibre-based quantum networks one step closer to reality.
Variability in visual working memory ability limits the efficiency of perceptual decision making.
Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T
2014-04-02
The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
MSuPDA: A memory efficient algorithm for sequence alignment.
Khan, Mohammad Ibrahim; Kamal, Md Sarwar; Chowdhury, Linkon
2015-01-16
Space complexity is a million-dollar question in DNA sequence alignment. In this regard, MSuPDA (Memory Saving under Pushdown Automata) can help to reduce the space occupied in computer memory. Our proposed process is that an Anchor Seed (AS) will be selected from a given data set of nucleotide base pairs for local sequence alignment. A Quick Splitting (QS) technique will separate the Anchor Seed from all the DNA genome segments. The selected Anchor Seed will be placed in the Pushdown Automaton's (PDA) input unit. Whole DNA genome segments will be placed into the PDA's stack. The Anchor Seed from the input unit will be matched with the DNA genome segments from the stack of the PDA. Matches, mismatches, or indels of nucleotides will be popped from the stack under the control of the control unit of the Pushdown Automaton. During the pop operation on the stack, the memory cell occupied by the nucleotide base pair will be freed.
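The stack-based matching step described above can be sketched as follows. This is a minimal Python sketch; the function name `pda_match` and the event-tuple format are our illustrative assumptions, not the authors' implementation:

```python
def pda_match(anchor_seed, genome):
    """Pop genome bases off a stack and classify each against the anchor seed.

    Returns a list of (genome_base, seed_base, label) tuples, where label is
    'match', 'mismatch', or 'indel' (when one sequence is exhausted). Each pop
    releases the stack cell holding that base, mirroring the memory-saving
    behaviour the abstract describes.
    """
    stack = list(reversed(genome))    # top of stack = first genome base
    events = []
    for seed_base in anchor_seed:
        if not stack:                 # genome exhausted: remaining seed is an indel
            events.append((None, seed_base, "indel"))
            continue
        genome_base = stack.pop()     # POP under control-unit supervision
        label = "match" if genome_base == seed_base else "mismatch"
        events.append((genome_base, seed_base, label))
    # any genome bases left on the stack are indels relative to the seed
    while stack:
        events.append((stack.pop(), None, "indel"))
    return events
```

For example, matching seed `"ACGT"` against genome `"ACCT"` yields match, match, mismatch, match, with every popped cell freed as soon as it is classified.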
Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.
Jeong, Doo Seok; Hwang, Cheol Seong
2018-04-18
Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNN is virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material- and device-points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNN in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Undermining belief in false memories leads to less efficient problem-solving behaviour.
Wang, Jianqin; Otgaar, Henry; Howe, Mark L; Smeets, Tom; Merckelbach, Harald; Nahouli, Zacharia
2017-08-01
Memories of events for which the belief in the occurrence of those events is undermined, but recollection is retained, are called nonbelieved memories (NBMs). The present experiments examined the effects of NBMs on subsequent problem-solving behaviour. In Experiment 1, we challenged participants' beliefs in their memories and examined whether NBMs affected subsequent solution rates on insight-based problems. True and false memories were elicited using the Deese/Roediger-McDermott (DRM) paradigm. Then participants' belief in true and false memories was challenged by telling them the item had not been presented. We found that when the challenge led to undermining belief in false memories, fewer problems were solved than when belief was not challenged. In Experiment 2, a similar procedure was used except that some participants solved the problems one week rather than immediately after the feedback. Again, our results showed that undermining belief in false memories resulted in lower problem solution rates. These findings suggest that for false memories, belief is an important agent in whether memories serve as effective primes for immediate and delayed problem-solving.
Efficacy of Code Optimization on Cache-based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system.
It can be argued that although some of the important computational algorithms employed at NASA Ames require different programming styles on vector machines and cache-based machines, respectively, neither architecture class appeared to be favored by particular algorithms in principle. Practice tells us that the situation is more complicated. This report presents observations and some analysis of performance tuning for cache-based systems. We point out several counterintuitive results that serve as a cautionary reminder that memory accesses are not the only factors that determine performance, and that within the class of cache-based systems, significant differences exist.
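The unit-stride point can be illustrated with a small sketch. This is our own NumPy example, not code from the report; Python only mimics the access patterns the report discusses for compiled code, where the timing difference would actually show:

```python
import numpy as np

def sum_rows_unit_stride(a):
    """Traverse along the last axis of a C-ordered array: unit stride, so each
    cache line fetched from main memory feeds several consecutive iterations."""
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i, :].sum()
    return total

def sum_cols_strided(a):
    """Traverse along the first axis: the stride equals the row length, so for
    large rows each element touches a different cache line."""
    total = 0.0
    for j in range(a.shape[1]):
        total += a[:, j].sum()
    return total
```

Both functions compute the same sum; on large arrays the unit-stride traversal is the cache-friendly one, for exactly the locality reasons given above.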
Mood, motivation, and misinformation: aging and affective state influences on memory.
Hess, Thomas M; Popham, Lauren E; Emery, Lisa; Elliott, Tonya
2012-01-01
Normative age differences in memory have typically been attributed to declines in basic cognitive and cortical mechanisms. The present study examined the degree to which dominant everyday affect might also be associated with age-related memory errors using the misinformation paradigm. Younger and older adults viewed a positive and a negative event, and then were exposed to misinformation about each event. Older adults exhibited a higher likelihood than young adults of falsely identifying misinformation as having occurred in the events. Consistent with expectations, strength of the misinformation effect was positively associated with dominant mood, and controlling for mood eliminated any age effects. Also, motivation to engage in complex cognitive activity was negatively associated with susceptibility to misinformation, and susceptibility was stronger for negative than for positive events. We argue that motivational processes underlie all of the observed effects, and that such processes are useful in understanding age differences in memory performance.
Investigating the variability of memory distortion for an analogue trauma.
Strange, Deryn; Takarangi, Melanie K T
2015-01-01
In this paper, we examine whether source monitoring (SM) errors might be one mechanism that accounts for traumatic memory distortion. Participants watched a traumatic film with some critical (crux) and non-critical (non-crux) scenes removed. Twenty-four hours later, they completed a memory test. To increase the likelihood participants would notice the film's gaps, we inserted visual static for the length of each missing scene. We then added manipulations designed to affect people's SM behaviour. To encourage systematic SM, before watching the film, we warned half the participants that we had removed some scenes. To encourage heuristic SM, some participants also saw labels describing the missing scenes. Adding static to highlight the missing scenes did not affect false recognition of those missing scenes. However, a warning decreased, while labels increased, participants' false recognition rates. We conclude that manipulations designed to affect SM behaviour also affect the degree of memory distortion in our paradigm.
The Dark Side of Context: Context Reinstatement Can Distort Memory.
Doss, Manoj K; Picart, Jamila K; Gallo, David A
2018-04-01
It is widely assumed that context reinstatement benefits memory, but our experiments revealed that context reinstatement can systematically distort memory. Participants viewed pictures of objects superimposed over scenes, and we later tested their ability to differentiate these old objects from similar new objects. Context reinstatement was manipulated by presenting objects on the reinstated or switched scene at test. Not only did context reinstatement increase correct recognition of old objects, but it also consistently increased incorrect recognition of similar objects as old ones. This false recognition effect was robust, as it was found in several experiments, occurred after both immediate and delayed testing, and persisted with high confidence even after participants were warned to avoid the distorting effects of context. To explain this memory illusion, we propose that context reinstatement increases the likelihood of confusing conceptual and perceptual information, potentially in medial temporal brain regions that integrate this information.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as neutron eigenvalue calculations, time-consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread-divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on an NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
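The contrast between history-based and event-based transport can be sketched in toy form. This is a minimal Python sketch under our own simplifying assumptions — a fictitious absorb/scatter process, with the names `event_based_transport` and `absorb_prob` invented for illustration; the paper's CUDA implementation is far more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def event_based_transport(n_particles, absorb_prob=0.3, max_steps=50):
    """Toy event-based loop: at each step, all live particles are partitioned
    by their sampled event (scatter vs. absorb) and each group is processed as
    one vectorized batch. On a GPU this regrouping is what keeps threads in a
    warp executing the same instruction, instead of diverging per history."""
    alive = np.ones(n_particles, dtype=bool)
    collisions = np.zeros(n_particles, dtype=int)
    for _ in range(max_steps):
        if not alive.any():
            break
        live_idx = np.flatnonzero(alive)
        absorbed = rng.random(live_idx.size) < absorb_prob
        # batch 1: absorption events terminate those histories together
        alive[live_idx[absorbed]] = False
        # batch 2: scattering events are tallied together
        collisions[live_idx[~absorbed]] += 1
    return collisions
```

A history-based code would instead follow each particle's full branchy life in its own thread; the event-based regrouping trades that divergence for extra gather/scatter memory traffic, which is the latency cost the profiling above points to.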
Owens, Max; Koster, Ernst H W; Derakshan, Nazanin
2013-03-01
Impaired filtering of irrelevant information from working memory is thought to underlie reduced working memory capacity for relevant information in dysphoria. The current study investigated whether training-related gains in working memory performance on the adaptive dual n-back task could result in improved inhibitory function. Efficacy of training was monitored in a change detection paradigm allowing measurement of a sustained event-related potential asymmetry sensitive to working memory capacity and the efficient filtering of irrelevant information. Dysphoric participants in the training group showed training-related gains in working memory that were accompanied by gains in working memory capacity and filtering efficiency compared to an active control group. Results provide important initial evidence that behavioral performance and neural function in dysphoria can be improved by facilitating greater attentional control. Copyright © 2013 Society for Psychophysiological Research.
Molecular implementation of molecular shift register memories
NASA Technical Reports Server (NTRS)
Beratan, David N. (Inventor); Onuchic, Jose N. (Inventor)
1991-01-01
An electronic shift register memory (20) at the molecular level is described. The memory elements are based on a chain of electron transfer molecules (22) and the information is shifted by photoinduced (26) electron transfer reactions. Thus, multi-step sequences of charge transfer reactions are used to move charge with high efficiency down a molecular chain. The device integrates compositions of the invention onto a VLSI substrate (36), providing an example of a molecular electronic device which may be fabricated. Three energy level schemes, molecular implementation of these schemes, optical excitation strategies, charge amplification strategies, and error correction strategies are described.
Lämke, Jörn; Bäurle, Isabel
2017-06-27
Plants frequently have to weather both biotic and abiotic stressors, and have evolved sophisticated adaptation and defense mechanisms. In recent years, chromatin modifications, nucleosome positioning, and DNA methylation have been recognized as important components in these adaptations. Given their potential epigenetic nature, such modifications may provide a mechanistic basis for a stress memory, enabling plants to respond more efficiently to recurring stress or even to prepare their offspring for potential future assaults. In this review, we discuss both the involvement of chromatin in stress responses and the current evidence on somatic, intergenerational, and transgenerational stress memory.
Nash, Robert A; Wade, Kimberley A; Garry, Maryanne; Adelman, James S
2017-08-01
People depend on various sources of information when trying to verify their autobiographical memories. Yet recent research shows that people prefer to use cheap-and-easy verification strategies, even when these strategies are not reliable. We examined the robustness of this cheap strategy bias, with scenarios designed to encourage greater emphasis on source reliability. In three experiments, subjects described real (Experiments 1 and 2) or hypothetical (Experiment 3) autobiographical events, and proposed strategies they might use to verify their memories of those events. Subjects also rated the reliability, cost, and the likelihood that they would use each strategy. In line with previous work, we found that the preference for cheap information held when people described how they would verify childhood or recent memories (Experiment 1), personally important or trivial memories (Experiment 2), and even when the consequences of relying on incorrect information could be significant (Experiment 3). Taken together, our findings fit with an account of source monitoring in which the tendency to trust one's own autobiographical memories can discourage people from systematically testing or accepting strong disconfirmatory evidence.
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which entails a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in computing speed.
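The per-pixel computation this abstract parallelizes is independent for each pixel, which is what makes it a natural fit for one-thread-per-pixel CUDA kernels. A minimal sequential sketch, assuming the common 4-neighbor Laplacian kernel and 8-bit clamping (both are illustrative choices, not details from the paper):

```python
def laplacian_sharpen(img):
    """Sharpen a 2D grayscale image: out = img - laplacian(img).

    Each interior pixel is independent, so on a GPU every (x, y)
    would map to its own thread.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # border pixels left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor discrete Laplacian
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            # subtract the Laplacian and clamp to the 8-bit range
            out[y][x] = min(255, max(0, img[y][x] - lap))
    return out
```

A flat region is left untouched (its Laplacian is zero), while a local maximum is amplified, which is the sharpening effect.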
Weight and See: Loading Working Memory Improves Incidental Identification of Irrelevant Faces
Carmel, David; Fairnie, Jake; Lavie, Nilli
2012-01-01
Are task-irrelevant stimuli processed to a level enabling individual identification? This question is central both for perceptual processing models and for applied settings (e.g., eye-witness testimony). Lavie’s load theory proposes that working memory actively maintains attentional prioritization of relevant over irrelevant information. Loading working memory thus impairs attentional prioritization, leading to increased processing of task-irrelevant stimuli. Previous research has shown that increased working memory load leads to greater interference effects from response-competing distractors. Here we test the novel prediction that increased processing of irrelevant stimuli under high working memory load should lead to a greater likelihood of incidental identification of entirely irrelevant stimuli. To test this, we asked participants to perform a word-categorization task while ignoring task-irrelevant images. The categorization task was performed during the retention interval of a working memory task with either low or high load (defined by memory set size). Following the final experimental trial, a surprise question assessed incidental identification of the irrelevant image. Loading working memory was found to improve identification of task-irrelevant faces, but not of building stimuli (shown in a separate experiment to be less distracting). These findings suggest that working memory plays a critical role in determining whether distracting stimuli will be subsequently identified. PMID:22912623
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
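The core idea behind an updated maximum-likelihood (UML) procedure can be sketched outside MATLAB: after each trial, the likelihood of candidate parameter values is updated, and the running ML estimate guides stimulus placement. The sketch below tracks only the threshold with a fixed slope, guess rate, and lapse rate; the logistic form and all parameter values are assumptions for illustration, not the toolbox's defaults:

```python
import math

def psychometric(x, threshold, slope=1.0, guess=0.5, lapse=0.02):
    """Logistic psychometric function: P(correct) at stimulus level x."""
    return guess + (1 - guess - lapse) / (1 + math.exp(-slope * (x - threshold)))

def update_loglik(loglik, thresholds, x, correct):
    """One trial's update of the log-likelihood over candidate thresholds."""
    for i, t in enumerate(thresholds):
        p = psychometric(x, t)
        loglik[i] += math.log(p if correct else 1 - p)
    return loglik

thresholds = [i * 0.5 for i in range(-10, 11)]   # candidate grid
loglik = [0.0] * len(thresholds)
# simulated trials: (stimulus level, response correct?)
for x, correct in [(0.0, True), (1.0, True), (-2.0, False)]:
    update_loglik(loglik, thresholds, x, correct)
best = thresholds[loglik.index(max(loglik))]      # current ML threshold estimate
```

In an adaptive run, `best` (or a sweep point derived from it) would determine the next stimulus level, which is what makes the procedure efficient.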
NASA Astrophysics Data System (ADS)
Galiatsatos, P. G.; Tennyson, J.
2012-11-01
The most time-consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared memory machines (SMM), distributed memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and to obtain converged results we need only a small part of the matrix spectrum.
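The building block that dominates iterative eigensolvers like ARPACK is the sparse matrix-vector product. A minimal sketch, assuming a COO-style format that stores only the upper triangle of a real symmetric matrix (in the spirit of the variation the abstract mentions), with power iteration standing in for ARPACK's Lanczos machinery:

```python
def spmv_sym(n, rows, cols, vals, x):
    """y = A @ x, where (rows, cols, vals) hold the upper triangle of symmetric A."""
    y = [0.0] * n
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
        if r != c:                       # mirror the off-diagonal entry
            y[c] += v * x[r]
    return y

def dominant_eig(n, rows, cols, vals, iters=200):
    """Estimate the dominant eigenvalue via power iteration on spmv_sym."""
    x = [1.0] * n
    for _ in range(iters):
        y = spmv_sym(n, rows, cols, vals, x)
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
    y = spmv_sym(n, rows, cols, vals, x)
    # Rayleigh quotient as the eigenvalue estimate
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
```

Because only the stored triangle is traversed, the work per iteration scales with the number of nonzeros, which is why high sparsity (>98%) makes this approach pay off.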
An authenticated image encryption scheme based on chaotic maps and memory cellular automata
NASA Astrophysics Data System (ADS)
Bakhshandeh, Atieh; Eslami, Ziba
2013-06-01
This paper introduces a new image encryption scheme based on chaotic maps, cellular automata and permutation-diffusion architecture. In the permutation phase, a piecewise linear chaotic map is utilized to confuse the plain-image and in the diffusion phase, we employ the Logistic map as well as a reversible memory cellular automata to obtain an efficient and secure cryptosystem. The proposed method admits advantages such as highly secure diffusion mechanism, computational efficiency and ease of implementation. A novel property of the proposed scheme is its authentication ability which can detect whether the image is tampered during the transmission or not. This is particularly important in applications where image data or part of it contains highly sensitive information. Results of various analyses manifest high security of this new method and its capability for practical image encryption.
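The diffusion idea described above can be illustrated with the Logistic map alone: iterate the map to produce a keystream and mix it into the pixel bytes. The sketch below uses a plain XOR and omits the permutation phase and the reversible memory cellular automata of the actual scheme; the map parameter, quantization, and key `x0` are illustrative assumptions:

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate n key bytes from Logistic map iterates x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)              # chaotic iteration, x stays in (0, 1)
        out.append(int(x * 256) % 256)   # quantize the state to a key byte
    return out

def diffuse(pixels, x0):
    """XOR each pixel byte with the chaotic keystream (its own inverse)."""
    ks = logistic_keystream(x0, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Since XOR with the same keystream is an involution, applying `diffuse` twice with the same key recovers the plaintext, while tiny changes to `x0` yield a completely different keystream, which is the sensitivity property chaotic ciphers rely on.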
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
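The estimand in the Poisson special case is easy to make concrete: the marginal log rate ratio is the log of the mean outcome in the treated arm over the mean outcome in the control arm, and the paper's result says the main-terms working model's treatment coefficient estimates exactly this, even under misspecification. A direct sketch of the estimand itself (not of the targeted maximum likelihood machinery):

```python
import math

def marginal_log_rate_ratio(y, a):
    """Marginal log rate ratio: log(mean(Y | A=1) / mean(Y | A=0)).

    y: list of count outcomes; a: list of 0/1 treatment indicators.
    """
    m1 = sum(yi for yi, ai in zip(y, a) if ai == 1) / a.count(1)
    m0 = sum(yi for yi, ai in zip(y, a) if ai == 0) / a.count(0)
    return math.log(m1 / m0)
```

In a randomized trial this unadjusted ratio is what the misspecified working model's treatment coefficient converges to; the adjusted model-based estimator targets the same quantity with smaller variance.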
Does constraining memory maintenance reduce visual search efficiency?
Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R
2018-03-01
We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.
In-memory interconnect protocol configuration registers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Kevin Y.; Roberts, David A.
Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.
"Shape function + memory mechanism"-based hysteresis modeling of magnetorheological fluid actuators
NASA Astrophysics Data System (ADS)
Qian, Li-Jun; Chen, Peng; Cai, Fei-Long; Bai, Xian-Xu
2018-03-01
A hysteresis model based on "shape function + memory mechanism" is presented and its feasibility is verified by modeling the hysteresis behavior of a magnetorheological (MR) damper. A hysteresis phenomenon in a resistor-capacitor (RC) circuit is first presented and analyzed. In the hysteresis model, the "memory mechanism" originating from the charging and discharging processes of the RC circuit is constructed by adopting a virtual displacement variable and updating laws for the reference points. The "shape function" is derived and generalized from analytical solutions of the simple semi-linear Duhem model. With this approach, the memory mechanism reveals the essence of the specific Duhem model, and the general shape function provides a direct and clear means to fit the hysteresis loop. Within the structure of a "restructured phenomenological model", the original hysteresis operator, i.e., the Bouc-Wen operator, is replaced with the new hysteresis operator. A comparison with the Bouc-Wen operator-based model demonstrates the new hysteresis operator-based model's higher computational efficiency and comparable accuracy.
Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei
2018-01-01
In this paper, the development of an organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructure (NS) arrays of the wide-bandgap (WBG) small-molecule organic semiconductor [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared by phase separation through spin-coating blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with a smooth WG3 film, the device based on WG3 NS arrays exhibits significant improvements in memory performance, including a larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10⁴ s), and reliable switching properties. A quantitative study of the WG3 NS morphology reveals that the enhanced memory performance is attributed to improved charge trapping/charge-exciton annihilation efficiency induced by the increased contact area between the WG3 NSs and the pentacene layer. This versatile solution-processing approach to preparing WG3 NS arrays as charge trapping sites allows for the fabrication of high-performance nonvolatile OFET memory devices, and could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A compact superconducting nanowire memory element operated by nanowire cryotrons
NASA Astrophysics Data System (ADS)
Zhao, Qing-Yuan; Toomey, Emily A.; Butters, Brenden A.; McCaughan, Adam N.; Dane, Andrew E.; Nam, Sae-Woo; Berggren, Karl K.
2018-07-01
A superconducting loop stores persistent current without any ohmic loss, making it an ideal platform for energy efficient memories. Conventional superconducting memories use an architecture based on Josephson junctions (JJs) and have demonstrated access times less than 10 ps and power dissipation as low as 10⁻¹⁹ J. However, their scalability has been slow to develop due to the challenges in reducing the dimensions of JJs and minimizing the area of the superconducting loops. In addition to the memory itself, complex readout circuits require additional JJs and inductors for coupling signals, increasing the overall area. Here, we have demonstrated a superconducting memory based solely on lithographic nanowires. The small dimensions of the nanowire ensure that the device can be fabricated in a dense area in multiple layers, while the high kinetic inductance makes the loop essentially independent of geometric inductance, allowing it to be scaled down without sacrificing performance. The memory is operated by a group of nanowire cryotrons patterned alongside the storage loop, enabling us to reduce the entire memory cell to 3 μm × 7 μm in our proof-of-concept device. In this work we present the operation principles of a superconducting nanowire memory (nMem) and characterize its bit error rate, speed, and power dissipation.
Benoit, Roland G.; Schacter, Daniel L.
2015-01-01
It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of core network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the lateral temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network's nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions' specialized contributions and interactions. PMID:26142352
Sequential dynamics in visual short-term memory.
Kool, Wouter; Conway, Andrew R A; Turk-Browne, Nicholas B
2014-10-01
Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects.
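The "statistical likelihood estimation" used to separate storage probability from guessing in continuous-report tasks can be sketched as a two-component mixture: a response error comes from memory (a distribution concentrated near zero) with probability `p_mem`, or from a uniform random guess otherwise. The sketch below approximates the usual wrapped (von Mises) memory component with a plain normal, and omits the swapping component the authors add in Experiment 3; all parameter values are illustrative:

```python
import math

def loglik(errors, p_mem, sd):
    """Log-likelihood of response errors under a memory + guessing mixture.

    errors: response errors in radians, in [-pi, pi]
    p_mem:  probability an item was stored; 1 - p_mem is the guess rate
    sd:     precision of the memory component (normal approximation)
    """
    ll = 0.0
    for e in errors:
        mem = math.exp(-e * e / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))
        guess = 1 / (2 * math.pi)        # uniform density on the circle
        ll += math.log(p_mem * mem + (1 - p_mem) * guess)
    return ll
```

Fitting amounts to maximizing `loglik` over `p_mem` and `sd` (e.g., by grid search) separately for each serial position and set size, which is how the primacy and recency effects on storage probability are quantified.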
Regulation of memory accuracy with multiple answers: the plurality option.
Luna, Karlos; Higham, Philip A; Martín-Luengo, Beatriz
2011-06-01
We report two experiments that investigated the regulation of memory accuracy with a new regulatory mechanism: the plurality option. This mechanism is closely related to the grain-size option but involves control over the number of alternatives contained in an answer rather than the quantitative boundaries of a single answer. Participants were presented with a slideshow depicting a robbery (Experiment 1) or a murder (Experiment 2), and their memory was tested with five-alternative multiple-choice questions. For each question, participants were asked to generate two answers: a single answer consisting of one alternative and a plural answer consisting of the single answer and two other alternatives. Each answer was rated for confidence (Experiment 1) or for the likelihood of being correct (Experiment 2), and one of the answers was selected for reporting. Results showed that participants used the plurality option to regulate accuracy, selecting single answers when their accuracy and confidence were high, but opting for plural answers when they were low. Although accuracy was higher for selected plural than for selected single answers, the opposite pattern was evident for confidence or likelihood ratings. This dissociation between confidence and accuracy for selected answers was the result of marked overconfidence in single answers coupled with underconfidence in plural answers. We hypothesize that these results can be attributed to overly dichotomous metacognitive beliefs about personal knowledge states that cause subjective confidence to be extreme.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10⁻⁶. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Light-quark and gluon jet discrimination in pp collisions at √s = 7 TeV with the ATLAS detector
Aad, G.; Abbott, B.; Abdallah, J.; ...
2014-08-21
A likelihood-based discriminant for the identification of quark- and gluon-initiated jets is built and validated using 4.7 fb⁻¹ of proton–proton collision data at √s = 7 TeV collected with the ATLAS detector at the LHC. Data samples with enriched quark or gluon content are used in the construction and validation of templates of jet properties that are the input to the likelihood-based discriminant. The discriminating power of the jet tagger is established in both data and Monte Carlo samples within a systematic uncertainty of ≈ 10–20%. In data, light-quark jets can be tagged with an efficiency of ≈ 50% while achieving a gluon-jet mis-tag rate of ≈ 25% in a pT range between 40 GeV and 360 GeV for jets in the acceptance of the tracker. The rejection of gluon-jets found in the data is significantly below what is attainable using a Pythia 6 Monte Carlo simulation, where gluon-jet mis-tag rates of 10% can be reached for a 50% selection efficiency of light-quark jets using the same jet properties.
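The structure of such a likelihood-based discriminant is simple to sketch: each jet variable is looked up in a quark template and a gluon template, the per-variable likelihoods are multiplied, and the normalized ratio is the tagger output. The toy templates below are illustrative histograms, not the ATLAS ones, and the binning is an assumption:

```python
def tag_score(values, quark_templates, gluon_templates):
    """Quark likelihood fraction q/(q+g) for one jet.

    values[i] is the bin index of jet variable i;
    *_templates[i] is that variable's normalized histogram.
    """
    q = g = 1.0
    for v, qt, gt in zip(values, quark_templates, gluon_templates):
        q *= qt[v]                       # likelihood under the quark hypothesis
        g *= gt[v]                       # likelihood under the gluon hypothesis
    return q / (q + g)
```

Cutting on `tag_score` trades light-quark efficiency against gluon mis-tag rate; sweeping the cut traces out the working points quoted in the abstract.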
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugam, Kamesh
Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step is irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independently of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps.
In this dissertation, we present novel machine learning based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though the techniques should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation and to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution which improve the performance of the application at a future time step based on the observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver a good aggregate performance. We used these optimization techniques and this anticipation strategy to design a cache-aware, memory efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our approach of using an anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
Effects of Computer Cognitive Training on Depression in Cognitively Impaired Seniors
ERIC Educational Resources Information Center
Allen, Nara L.
2016-01-01
The aim of the present study was to investigate the effects of a computer cognitive training program on depression levels in older mildly cognitive impaired individuals. Peterson et al. (1999), defines mild cognitive impairment (MCI) as a transitional stage in which an individual's memory deteriorates and his likelihood of developing Alzheimer's…
Diagnostic reasoning strategies and diagnostic success.
Coderre, S; Mandin, H; Harasym, P H; Fick, G H
2003-08-01
Cognitive psychology research supports the notion that experts use mental frameworks or "schemes", both to organize knowledge in memory and to solve clinical problems. The central purpose of this study was to determine the relationship between problem-solving strategies and the likelihood of diagnostic success. Think-aloud protocols were collected to determine the diagnostic reasoning used by experts and non-experts when attempting to diagnose clinical presentations in gastroenterology. Using logistic regression analysis, the study found that there is a relationship between diagnostic reasoning strategy and the likelihood of diagnostic success. Compared to hypothetico-deductive reasoning, the odds of diagnostic success were significantly greater when subjects used the diagnostic strategies of pattern recognition and scheme-inductive reasoning. Two other factors emerged as independent determinants of diagnostic success: expertise and clinical presentation. Not surprisingly, experts outperformed novices, while the content areas of the clinical cases in each of the four clinical presentations demonstrated varying degrees of difficulty and thus diagnostic success. These findings have significant implications for medical educators. They support the introduction of "schemes" as a means of enhancing memory organization and improving diagnostic success.
Memory-based attention capture when multiple items are maintained in visual working memory.
Hollingworth, Andrew; Beck, Valerie M
2016-07-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search, an index of VWM guidance, is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when 2 colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become an increasingly widespread storage medium for modern wireless devices because of effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is needed, with consideration of sensor node constraints. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
Fukushima, Kikuro; Ito, Norie; Barnes, Graham R; Onishi, Sachiyo; Kobayashi, Nobuyoshi; Takei, Hidetoshi; Olley, Peter M; Chiba, Susumu; Inoue, Kiyoharu; Warabi, Tateo
2015-03-01
While retinal image motion is the primary input for smooth-pursuit, its efficiency depends on cognitive processes including prediction. Reports are conflicting on impaired prediction during pursuit in Parkinson's disease. By separating two major components of prediction (image motion direction memory and movement preparation) using a memory-based pursuit task, and by comparing tracking eye movements with those during a simple ramp-pursuit task that did not require visual memory, we examined smooth-pursuit in 25 patients with Parkinson's disease and compared the results with 14 age-matched controls. In the memory-based pursuit task, cue 1 indicated visual motion direction, whereas cue 2 instructed the subjects to prepare to pursue or not to pursue. Based on the cue-information memory, subjects were asked to pursue the correct spot from two oppositely moving spots or not to pursue. In 24/25 patients, the cue-information memory was normal, but movement preparation and execution were impaired. Specifically, unlike controls, most of the patients (18/24 = 75%) lacked initial pursuit during the memory task and started tracking the correct spot by saccades. Conversely, during simple ramp-pursuit, most patients (83%) exhibited initial pursuit. Popping-out of the correct spot motion during memory-based pursuit was ineffective for enhancing initial pursuit. The results were similar irrespective of levodopa/dopamine agonist medication. Our results indicate that the extra-retinal mechanisms of most patients are dysfunctional in initiating memory-based (not simple ramp) pursuit. A dysfunctional pursuit loop between frontal eye fields (FEF) and basal ganglia may contribute to the impairment of extra-retinal mechanisms, resulting in deficient pursuit commands from the FEF to brainstem. © 2015 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. 
The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators; compared to the MLE, they are robust to data contamination. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of three-component normal mixture models with large overlaps and heavy contamination. A real data example is also provided.
A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems
NASA Astrophysics Data System (ADS)
Pawlicki, Ted
1988-03-01
Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components.
It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.
Savin, Cristina; Dayan, Peter; Lengyel, Máté
2014-01-01
A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments.
Nanophotonic rare-earth quantum memory with optically controlled retrieval.
Zhong, Tian; Kindem, Jonathan M; Bartholomew, John G; Rochman, Jake; Craiciu, Ioana; Miyazono, Evan; Bettinelli, Marco; Cavalli, Enrico; Verma, Varun; Nam, Sae Woo; Marsili, Francesco; Shaw, Matthew D; Beyer, Andrew D; Faraon, Andrei
2017-09-29
Optical quantum memories are essential elements in quantum networks for long-distance distribution of quantum entanglement. Scalable development of quantum network nodes requires on-chip qubit storage functionality with control of the readout time. We demonstrate a high-fidelity nanophotonic quantum memory based on a mesoscopic neodymium ensemble coupled to a photonic crystal cavity. The nanocavity enables >95% spin polarization for efficient initialization of the atomic frequency comb memory and time bin-selective readout through an enhanced optical Stark shift of the comb frequencies. Our solid-state memory is integrable with other chip-scale photon source and detector devices for multiplexed quantum and classical information processing at the network nodes.
Jin, Miaomiao; Cheng, Long; Li, Yi; Hu, Siyu; Lu, Ke; Chen, Jia; Duan, Nian; Wang, Zhuorui; Zhou, Yaxiong; Chang, Ting-Chang; Miao, Xiangshui
2018-06-27
Owing to the capability of integrating information storage and computing in the same physical location, in-memory computing with memristors has become a research hotspot as a promising route to non-von Neumann architectures. However, it is still a challenge to develop high-performance devices as well as optimized logic methodologies to realize energy-efficient computing. Herein, a filamentary Cu/GeTe/TiN memristor is reported that shows satisfactory properties with nanosecond switching speed (<60 ns), low-voltage operation (<2 V), high endurance (>10^4 cycles) and good retention (>10^4 s at 85 °C). It is revealed that the charge-carrier conduction mechanisms in the high-resistance and low-resistance states are Schottky emission and hopping transport between adjacent Cu clusters, respectively, based on analysis of current-voltage behaviors and resistance-temperature characteristics. An intuitive picture is given to describe the dynamic processes of resistive switching. Moreover, based on the basic material implication (IMP) logic circuit, we propose a reconfigurable logic method and experimentally implement IMP, NOT, OR, and COPY logic functions. Design of a one-bit full adder with a reduction in computational sequences and its validation in simulation further demonstrate the potential practical application. The results provide important progress towards understanding the resistive switching mechanism and realizing an energy-efficient in-memory computing architecture.
Brébion, Gildas; Bressan, Rodrigo A; Ohlsen, Ruth I; David, Anthony S
2013-12-01
Memory impairments in patients with schizophrenia have been associated with various cognitive and clinical factors. Hallucinations have been more specifically associated with errors stemming from source monitoring failure. We conducted a broad investigation of verbal memory and visual memory as well as source memory functioning in a sample of patients with schizophrenia. Various memory measures were tallied, and we studied their associations with processing speed, working memory span, and positive, negative, and depressive symptoms. Superficial and deep memory processes were differentially associated with processing speed, working memory span, avolition, depression, and attention disorders. Auditory/verbal and visual hallucinations were differentially associated with specific types of source memory error. We integrated all the results into a revised version of a previously published model of memory functioning in schizophrenia. The model describes the factors that affect memory efficiency, as well as the cognitive underpinnings of hallucinations within the source monitoring framework.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online.
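For the first of the three settings above (estimating the nonparametric distribution function), the uncensored special case actually has a closed-form NPMLE that needs no EM iteration, which makes the length-bias correction easy to see. A minimal sketch, with an illustrative function name; the paper's EM algorithms for the censored and Cox-model settings are substantially more involved:

```python
def lb_npmle(x):
    # Under length-biased sampling, an observation X has density
    # g(x) proportional to x*f(x), so large values are over-represented.
    # With complete (uncensored) data, the NPMLE of F puts mass
    # w_i proportional to 1/x_i on each observed point; this toy
    # version assumes the observations are distinct.
    inv = [1.0 / xi for xi in x]
    total = sum(inv)
    return {xi: wi / total for xi, wi in zip(x, inv)}

weights = lb_npmle([1.0, 2.0, 4.0])
# larger (over-sampled) observations receive proportionally less mass
```

Right censoring breaks this closed form, which is where EM algorithms of the kind the paper proposes come in.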
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in R, and also examined how the subjective choices of likelihood formulation and acceptability threshold in GLUE influence model outputs. The results indicated the following. (1) Parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) seemed to accentuate high-likelihood simulations better than the exponential function (L2) did. (3) The combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths and assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a reasonable threshold to balance high modeling efficiency against high bracketing efficiency. Results of this study could provide (1) an option to conduct NPS modeling on a single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for the application of the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
CscoreTool: fast Hi-C compartment analysis at high resolution.
Zheng, Xiaobin; Zheng, Yixian
2018-05-01
The genome-wide chromosome conformation capture (Hi-C) has revealed that the eukaryotic genome can be partitioned into A and B compartments that have distinctive chromatin and transcription features. The current Principal Component Analysis (PCA)-based method for A/B compartment prediction from Hi-C data requires substantial CPU time and memory. We report the development of a method, CscoreTool, which enables fast and memory-efficient determination of A/B compartments at high resolution even in datasets with low sequencing depth. https://github.com/scoutzxb/CscoreTool. xzheng@carnegiescience.edu. Supplementary data are available at Bioinformatics online.
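For context, the PCA-based compartment call that the abstract describes as expensive can be sketched in a few lines; the toy matrix, function name, and checkerboard example are illustrative, and production tools operate on normalized observed/expected matrices at far larger scale:

```python
import numpy as np

def ab_compartments(oe):
    # Classic PCA compartment call: correlate the contact profiles of
    # all genome bins, then split bins by the sign of the eigenvector
    # belonging to the largest eigenvalue (PC1). Which sign means "A"
    # is arbitrary without external information such as gene density.
    corr = np.corrcoef(oe)
    vals, vecs = np.linalg.eigh(corr)
    pc1 = vecs[:, np.argmax(vals)]
    return np.where(pc1 >= 0, "A", "B")

# toy observed/expected matrix: two 2-bin blocks in a checkerboard
oe = np.array([[2.0, 2.0, 0.5, 0.5],
               [2.0, 2.0, 0.5, 0.5],
               [0.5, 0.5, 2.0, 2.0],
               [0.5, 0.5, 2.0, 2.0]])
labels = ab_compartments(oe)   # splits the first two bins from the last two
```

The `np.corrcoef` and `np.linalg.eigh` calls on a bins-by-bins matrix are exactly the memory- and CPU-heavy steps that a dedicated method like CscoreTool avoids.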
Low-Density Parity-Check Code Design Techniques to Simplify Encoding
NASA Astrophysics Data System (ADS)
Perez, J. M.; Andrews, K.
2007-11-01
This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. The use of the H matrix to encode allows a significant reduction in memory consumption and provides the encoder design a great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area, allowing at the same time a reduction in the encoding delay.
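The H-based encoding idea can be illustrated over GF(2): if the parity-check matrix can be arranged as H = [A | T] with T lower-triangular, the parity bits follow by forward substitution and the dense generator matrix G is never formed. A toy sketch under that assumption (the AR4JA matrices are much larger and highly structured; names and shapes here are illustrative):

```python
def ldpc_encode(A, T, u):
    # Parity-check encoding over GF(2): H = [A | T] with T square,
    # lower-triangular and unit-diagonal. The codeword c = u + p must
    # satisfy H*c = 0, i.e. T*p = A*u (mod 2), which forward
    # substitution solves row by row; no dense G matrix is needed.
    m = len(T)
    s = [sum(A[i][j] & u[j] for j in range(len(u))) % 2 for i in range(m)]
    p = [0] * m
    for i in range(m):
        p[i] = (s[i] + sum(T[i][j] & p[j] for j in range(i))) % 2
    return u + p

# toy example: two information bits, two parity checks
A = [[1, 0],
     [1, 1]]
T = [[1, 0],
     [1, 1]]
codeword = ldpc_encode(A, T, [1, 1])   # -> [1, 1, 1, 1]
```

Because H is sparse, storing A and T row by row is what yields the memory savings over a dense G, at the cost of the sequential substitution loop.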
Qin, Zhongyuan; Zhang, Xinshuai; Feng, Kerong; Zhang, Qunfang; Huang, Jie
2014-01-01
With the rapid development and widespread adoption of wireless sensor networks (WSNs), security has become an increasingly prominent problem. How to establish a session key in node communication is a challenging task for WSNs. Considering the limitations in WSNs, such as low computing capacity, small memory, power supply limitations and price, we propose an efficient identity-based key management (IBKM) scheme, which exploits the Bloom filter to authenticate the communication sensor node with storage efficiency. The security analysis shows that IBKM can prevent several attacks effectively with acceptable computation and communication overhead.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
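The bundling idea can be caricatured in a few lines: group parametrizations into bundles of similar members, run the expensive forward model once per bundle, and let every member share that likelihood value. This sketch groups by sorting a scalar parameter, which is only a stand-in for the paper's similarity enforcement; all names are illustrative:

```python
def bundled_likelihood(params, simulate, matches, n_bundles):
    # Partition scalar parametrizations into bundles of neighbors (a
    # stand-in for a real similarity criterion), run the expensive
    # forward model (FM) once per bundle on a representative member,
    # and let every bundle member share that single likelihood value.
    ranked = sorted(params)
    size = max(1, len(ranked) // n_bundles)
    like = {}
    for start in range(0, len(ranked), size):
        bundle = ranked[start:start + size]
        rep = bundle[len(bundle) // 2]          # representative member
        value = float(matches(simulate(rep)))   # one FM run per bundle
        for theta in bundle:
            like[theta] = value
    return like
```

The cost saving is the ratio of parametrizations to bundles; the approximation error grows with within-bundle spread, which is why the paper emphasizes enforcing similarity among bundle members.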
The Optimization of In-Memory Space Partitioning Trees for Cache Utilization
NASA Astrophysics Data System (ADS)
Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo
In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Many researchers have recently investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information under frequent data updates. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.
Primary Care-Based Memory Clinics: Expanding Capacity for Dementia Care.
Lee, Linda; Hillier, Loretta M; Heckman, George; Gagnon, Micheline; Borrie, Michael J; Stolee, Paul; Harvey, David
2014-09-01
The implementation in Ontario of 15 primary-care-based interprofessional memory clinics represented a unique model of team-based case management aimed at increasing capacity for dementia care at the primary-care level. Each clinic tracked referrals; in a subset of clinics, charts were audited by geriatricians, clinic members were interviewed, and patients, caregivers, and referring physicians completed satisfaction surveys. Across all clinics, 582 patients were assessed, and 8.9 per cent were referred to a specialist. Patients and caregivers were very satisfied with the care received, as were referring family physicians, who reported increased capacity to manage dementia. Geriatricians' chart audits revealed a high level of agreement with diagnosis and management. This study demonstrated acceptability, feasibility, and preliminary effectiveness of the primary-care memory clinic model. Led by specially trained family physicians, it provided timely access to high-quality collaborative dementia care, impacting health service utilization by more-efficient use of scarce geriatric specialist resources.
Defize, Thomas; Riva, Raphaël; Thomassin, Jean-Michel; Alexandre, Michaël; Herck, Niels Van; Prez, Filip Du; Jérôme, Christine
2017-01-01
A chemically cross-linked but remarkably (re)processable shape-memory polymer (SMP) is designed by cross-linking poly(ε-caprolactone) (PCL) stars via the efficient triazolinedione click chemistry, based on the very fast and reversible Alder-ene reaction of 1,2,4-triazoline-3,5-dione (TAD) with indole compounds. Typically, a six-arm star-shaped PCL functionalized by indole moieties at the chain ends is melt-blended with a bisfunctional TAD, directly resulting in a cross-linked PCL-based SMP without the need of post-curing treatment. As demonstrated by the stress relaxation measurement, the labile character of the TAD-indole adducts under stress allows for the solid-state plasticity reprocessing of the permanent shape at will by compression molding of the raw cross-linked material, while keeping excellent shape-memory properties.
Genotype Imputation with Millions of Reference Samples.
Browning, Brian L; Browning, Sharon R
2016-01-07
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs.
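The linear-interpolation step credited above for the memory savings can be sketched directly; the positions, doses, and function name below are illustrative, not Beagle's actual data structures:

```python
import bisect

def interpolate_doses(marker_pos, marker_dose, query_pos):
    # The probability model is run only at markers genotyped in the
    # target samples (marker_pos, sorted, with estimated allele doses
    # marker_dose). A dose at any other position is linearly
    # interpolated between the two flanking markers, clamped at the
    # ends of the genotyped region.
    out = []
    for q in query_pos:
        i = bisect.bisect_left(marker_pos, q)
        if i == 0:
            out.append(marker_dose[0])
        elif i == len(marker_pos):
            out.append(marker_dose[-1])
        else:
            lo, hi = marker_pos[i - 1], marker_pos[i]
            w = (q - lo) / (hi - lo)
            out.append((1 - w) * marker_dose[i - 1] + w * marker_dose[i])
    return out
```

Restricting the expensive model to the genotyped markers and filling the rest by interpolation is what decouples memory use from the full density of the reference panel.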
76 FR 29137 - Peace Officers Memorial Day and Police Week, 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
... courage shown by police officers, fire fighters, and first responders in New York City, Pennsylvania, and... continue to seek more efficient ways to share information and invest in evidence-based, smart-on-crime...
An energy and cost efficient majority-based RAM cell in quantum-dot cellular automata
NASA Astrophysics Data System (ADS)
Khosroshahy, Milad Bagherian; Moaiyeri, Mohammad Hossein; Navi, Keivan; Bagherzadeh, Nader
Nanotechnologies, notably quantum-dot cellular automata (QCA), have attracted considerable attention for their prominent features compared to conventional CMOS circuitry. Quantum-dot cellular automata, particularly owing to its considerable reduction in size, high switching speed and ultra-low energy consumption, is considered a potential alternative to CMOS technology. As the memory unit is one of the most essential components in a digital system, designing a well-optimized QCA random access memory (RAM) cell is an important area of research. In this paper, a new five-input majority gate is presented which is suitable for implementing efficient single-layer QCA circuits. In addition, a new RAM cell with set and reset capabilities is designed based on the proposed majority gate, which has an efficient and low-energy structure. The functionality, performance and energy consumption of the proposed designs are evaluated with the QCADesigner and QCAPro tools. According to the simulation results, the proposed RAM design leads to on average 38% lower total energy dissipation, 25% smaller area, 20% lower cell count, 28% lower delay and 60% lower QCA cost as compared to its previous counterparts.
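Majority logic is the primitive that makes such QCA designs composable, and the gate behavior itself is easy to state. A minimal truth-function sketch (plain Python, not a QCA cell-level simulation):

```python
def maj3(a, b, c):
    # Three-input majority, the basic QCA logic primitive. Fixing one
    # input to 0 yields AND; fixing it to 1 yields OR.
    return int(a + b + c >= 2)

def maj5(a, b, c, d, e):
    # Five-input majority: output 1 iff at least three inputs are 1.
    # A single-gate five-input majority lets structures like RAM cells
    # be built with fewer gate levels than cascaded maj3 gates.
    return int(a + b + c + d + e >= 3)
```

Reducing gate levels this way is broadly how majority-based designs cut cell count and delay in QCA layouts.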
All-photonic quantum repeaters
Azuma, Koji; Tamaki, Kiyoshi; Lo, Hoi-Kwong
2015-01-01
Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all-photonic quantum repeaters based on flying qubits. In particular, we present a protocol based on photonic cluster-state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such all-photonic quantum repeaters, the communication efficiency scales polynomially with the channel distance. Our result paves a new route towards quantum repeaters with efficient single-photon sources rather than matter quantum memories.
NASA Technical Reports Server (NTRS)
Ramaswamy, Shankar; Banerjee, Prithviraj
1994-01-01
Appropriate data distribution has been found to be critical for obtaining good performance on Distributed Memory Multicomputers like the CM-5, Intel Paragon and IBM SP-1. It has also been found that some programs need to change their distributions during execution for better performance (redistribution). This work focuses on automatically generating efficient routines for redistribution. We present a new mathematical representation for regular distributions called PITFALLS and then discuss algorithms for redistribution based on this representation. One of the significant contributions of this work is being able to handle arbitrary source and target processor sets while performing redistribution. Another important contribution is the ability to handle an arbitrary number of dimensions for the array involved in the redistribution in a scalable manner. Our implementation of these techniques is based on an MPI-like communication library. The results presented show the low overheads for our redistribution algorithm as compared to naive runtime methods.
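The communication pattern a redistribution routine must realize can be sketched by enumeration; a PITFALLS-style representation derives these index sets in closed form rather than by scanning, so the code below (with made-up names) is only a brute-force illustration of what must be computed:

```python
def owners(n, nprocs, block):
    # Block-cyclic owner of each of n global array elements:
    # element i lives on processor (i // block) mod nprocs.
    return [(i // block) % nprocs for i in range(n)]

def redistribution_sets(n, p_src, b_src, p_dst, b_dst):
    # Map (source proc, destination proc) -> global indices that must
    # move when changing from cyclic(b_src) on p_src processors to
    # cyclic(b_dst) on p_dst processors. Note the source and target
    # processor counts need not match.
    src = owners(n, p_src, b_src)
    dst = owners(n, p_dst, b_dst)
    msgs = {}
    for i in range(n):
        msgs.setdefault((src[i], dst[i]), []).append(i)
    return msgs
```

Computing these sets analytically, for arbitrary source and target processor sets and any number of array dimensions, is the contribution the abstract describes; the naive enumeration above is the O(n) runtime method it outperforms.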
Storage of multiple single-photon pulses emitted from a quantum dot in a solid-state quantum memory.
Tang, Jian-Shun; Zhou, Zong-Quan; Wang, Yi-Tao; Li, Yu-Long; Liu, Xiao; Hua, Yi-Lin; Zou, Yang; Wang, Shuang; He, De-Yong; Chen, Geng; Sun, Yong-Nan; Yu, Ying; Li, Mi-Feng; Zha, Guo-Wei; Ni, Hai-Qiao; Niu, Zhi-Chuan; Li, Chuan-Feng; Guo, Guang-Can
2015-10-15
Quantum repeaters are critical components for distributing entanglement over long distances in presence of unavoidable optical losses during transmission. Stimulated by the Duan-Lukin-Cirac-Zoller protocol, many improved quantum repeater protocols based on quantum memories have been proposed, which commonly focus on the entanglement-distribution rate. Among these protocols, the elimination of multiple photons (or multiple photon-pairs) and the use of multimode quantum memory are demonstrated to have the ability to greatly improve the entanglement-distribution rate. Here, we demonstrate the storage of deterministic single photons emitted from a quantum dot in a polarization-maintaining solid-state quantum memory; in addition, multi-temporal-mode memory with 1, 20 and 100 narrow single-photon pulses is also demonstrated. Multi-photons are eliminated, and only one photon at most is contained in each pulse. Moreover, the solid-state properties of both sub-systems make this configuration more stable and easier to be scalable. Our work will be helpful in the construction of efficient quantum repeaters based on all-solid-state devices.
Parallel performance investigations of an unstructured mesh Navier-Stokes solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
2000-01-01
A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal-safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital-sign monitoring capabilities, but none remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked frame by frame using the well-known Lucas-Kanade algorithm. A generalized likelihood ratio test is then applied to each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a maximum likelihood fusion algorithm optimally estimates the breathing rate from all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
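The spectral-tracking step can be sketched with a minimal single-sinusoid Pisarenko estimator (Python/NumPy; the function name and interface are ours, and the paper's full pipeline of optical flow, GLRT point selection and ML fusion is not reproduced):

```python
import numpy as np

def pisarenko_frequency(x, fs):
    """Estimate the dominant frequency (Hz) of a real signal assumed to be
    a single sinusoid in white noise, via Pisarenko harmonic decomposition."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Sample autocorrelations r(0), r(1), r(2)
    r = [np.dot(x[:n - k], x[k:]) / n for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    # The noise subspace is the eigenvector of the smallest eigenvalue;
    # np.linalg.eigh returns eigenvalues in ascending order.
    _, V = np.linalg.eigh(R)
    a = V[:, 0]
    # For one sinusoid the annihilating filter is 1 - 2cos(w) z^-1 + z^-2,
    # so the eigenvector is proportional to (1, -2cos(w), 1).
    cos_w = -a[1] / (2.0 * a[0])
    omega = np.arccos(np.clip(cos_w, -1.0, 1.0))  # rad/sample
    return omega * fs / (2.0 * np.pi)
```

Applied to the breathing signal of a single interest point, the estimate can be updated over a sliding window to track the rate in real time.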
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran
2017-03-01
In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented on a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demands on compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, in which a 3D Kirchhoff depth migration application for a multicore CPU-based parallel system had been developed. Recently, we have improved the parallel performance of this application by redesigning the parallelization approach. The new algorithm can efficiently migrate both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670-based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied through different numerical experiments, and the scalability results show striking improvement over the previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data, and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
NASA Astrophysics Data System (ADS)
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of data mining is to uncover hidden information that is useful for decision making in the large databases collected by an organization. Data mining involves many tasks to be performed during the process, and mining frequent itemsets is one of the most important tasks for transactional databases. These databases hold data at very large scale, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, in this thesis we propose a system that mines frequent itemsets in a way optimized for both memory and time, using cloud computing to parallelize the process and deliver the application as a service. The complete framework uses a proven, efficient algorithm called the FIN algorithm, which works on Nodesets and a POC (pre-order coding) tree. To evaluate the system, we conduct experiments comparing the efficiency of the same algorithm applied standalone and in a cloud computing environment, on a real-world traffic-accidents data set. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
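The FIN algorithm itself relies on Nodesets built from a POC tree; as a much simpler reference point, a level-wise (Apriori-style) frequent itemset miner can be sketched as follows (Python; our own baseline, not the paper's algorithm):

```python
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Level-wise frequent itemset mining: grow candidate itemsets one item
    at a time, keeping only those whose support meets min_support.
    Returns a dict mapping each frequent itemset to its support count."""
    transactions = [frozenset(t) for t in transactions]
    # Frequent 1-itemsets
    counts = Counter(item for t in transactions for item in t)
    current = {frozenset([i]) for i, c in counts.items() if c >= min_support}
    result = {}
    k = 1
    while current:
        for s in current:
            result[s] = sum(1 for t in transactions if s <= t)
        k += 1
        # Candidate generation: unions of frequent (k-1)-itemsets of size k
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = {c for c in candidates
                   if sum(1 for t in transactions if c <= t) >= min_support}
    return result
```

FIN avoids the repeated database scans of this baseline by intersecting precomputed Nodesets, which is where its memory and time advantage comes from.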
Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.
Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B
2013-01-01
A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.
Spatial working memory load affects counting but not subitizing in enumeration.
Shimomura, Tomonari; Kumada, Takatsune
2011-08-01
The present study investigated whether subitizing reflects capacity limitations associated with two types of working memory tasks. Under a dual-task situation, participants performed an enumeration task in conjunction with either a spatial (Experiment 1) or a nonspatial visual (Experiment 2) working memory task. Experiment 1 showed that spatial working memory load affected the slope of a counting function but did not affect subitizing performance or subitizing range. Experiment 2 showed that nonspatial visual working memory load affected neither enumeration efficiency nor subitizing range. Furthermore, in both spatial and nonspatial memory tasks, neither subitizing efficiency nor subitizing range was affected by amount of imposed memory load. In all the experiments, working memory load failed to influence slope, subitizing range, or overall reaction time. These findings suggest that subitizing is performed without either spatial or nonspatial working memory. A possible mechanism of subitizing with independent capacity of working memory is discussed.
The effect of social cues on marketing decisions
NASA Astrophysics Data System (ADS)
Hentschel, H. G. E.; Pan, Jiening; Family, Fereydoon; Zhang, Zhenyu; Song, Yiping
2012-02-01
We address the question as to what extent individuals, when given information in marketing polls on the decisions made by the previous Nr individuals questioned, are likely to change their original choices. The processes can be formulated in terms of a Cost function equivalent to a Hamiltonian, which depends on the original likelihood of an individual making a positive decision in the absence of social cues p0; the strength of the social cue J; and memory size Nr. We find both positive and negative herding effects are significant. Specifically, if p0>1/2 social cues enhance positive decisions, while for p0<1/2 social cues reduce the likelihood of a positive decision.
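One way to read the model is as a sequential poll in which each respondent's decision probability is shifted by the recent history; the Monte Carlo sketch below (Python) makes that concrete, with the caveat that the logistic coupling is our own assumption rather than the paper's exact Hamiltonian:

```python
import math
import random

def simulate_poll(p0, J, Nr, n_agents=10000, seed=1):
    """Sequential poll: each respondent sees the previous Nr answers and
    shifts their base log-odds of answering positively by J times the
    centred fraction of recent positive answers (our assumed coupling).
    Returns the overall fraction of positive answers."""
    rng = random.Random(seed)
    history = []
    for _ in range(n_agents):
        recent = history[-Nr:]
        m = (sum(recent) / len(recent) - 0.5) if recent else 0.0
        logit = math.log(p0 / (1.0 - p0)) + J * m
        p = 1.0 / (1.0 + math.exp(-logit))
        history.append(1 if rng.random() < p else 0)
    return sum(history) / len(history)
```

In this toy version, p0 > 1/2 with J > 0 tends to push the observed positive-answer fraction above p0, mirroring the positive-herding effect described in the abstract.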
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
Human memory reconsolidation: A guiding framework and critical review of the evidence.
Elsey, James W B; Van Ast, Vanessa A; Kindt, Merel
2018-05-24
Research in nonhuman animals suggests that reactivation can induce a transient, unstable state in a previously consolidated memory, during which the memory can be disrupted or modified, necessitating a process of restabilization in order to persist. Such findings have sparked a wave of interest into whether this phenomenon, known as reconsolidation, occurs in humans. Translating research from animal models to human experiments and even to clinical interventions is an exciting prospect, but amid this excitement, relatively little work has critically evaluated and synthesized existing research regarding human memory reconsolidation. In this review, we formalize a framework for evaluating and designing studies aiming to demonstrate human memory reconsolidation. We use this framework to shed light on reconsolidation-based research in human procedural memory, aversive and appetitive memory, and declarative memory, covering a diverse selection of the most prominent examples of this research, including studies of memory updating, retrieval-extinction procedures, and pharmacological interventions such as propranolol. Across different types of memory and procedure, there is a wealth of observations consistent with reconsolidation. Moreover, some experimental findings are already being translated into clinically relevant interventions. However, there are a number of inconsistent findings, and the presence of alternative explanations means that we cannot conclusively infer the presence of reconsolidation at the neurobiological level from current evidence. Reconsolidation remains a viable but hotly contested explanation for some observed changes in memory expression in both humans and animals. Developing effective and efficient new reconsolidation-based treatments can be a goal that unites researchers and guides future experiments. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Working memory capacity and redundant information processing efficiency.
Endres, Michael J; Houpt, Joseph W; Donkin, Chris; Finn, Peter R
2015-01-01
Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.
Dynamic Forest: An Efficient Index Structure for NAND Flash Memory
NASA Astrophysics Data System (ADS)
Yang, Chul-Woong; Yong Lee, Ki; Ho Kim, Myoung; Lee, Yoon-Joon
In this paper, we present an efficient index structure for NAND flash memory, called the Dynamic Forest (D-Forest). Since write operations incur high overhead on NAND flash memory, D-Forest is designed to minimize write operations for index updates. The experimental results show that D-Forest significantly reduces write operations compared to the conventional B+-tree.
A learnable parallel processing architecture towards unity of memory and computing
NASA Astrophysics Data System (ADS)
Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.
2015-08-01
Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
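Because the semi-nonparametric model nests the MNL/MDCEV model being tested, the procedure reduces to a standard likelihood ratio test between the two fitted models. A generic helper (Python/SciPy; illustrative only, with made-up log-likelihood values in the example):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_restricted, loglik_general, df_diff):
    """Likelihood ratio test for nested models: under H0 the statistic
    2*(LL_general - LL_restricted) is chi-square distributed with df_diff
    degrees of freedom (the number of extra parameters in the general model)."""
    lr = 2.0 * (loglik_general - loglik_restricted)
    p_value = chi2.sf(lr, df_diff)
    return lr, p_value

# Hypothetical log-likelihoods: a restricted (Gumbel) model vs a nesting
# semi-nonparametric model with 3 extra parameters.
lr, p = likelihood_ratio_test(-1200.5, -1195.2, 3)
# A small p-value rejects the restricted (Gumbel) specification.
```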
NASA Astrophysics Data System (ADS)
Casati, R.; Saghafi, F.; Biffi, C. A.; Vedani, M.; Tuissi, A.
2017-10-01
Martensitic Ti-rich NiTi intermetallics are broadly used as actuators in various cyclic applications, which exploit the shape memory effect (SME). Recently, a new approach for exploiting austenitic Ni-rich NiTi shape memory alloys as actuators was proposed and named the high-performance shape memory effect (HP-SME). HP-SME is based on thermal recovery of de-twinned martensite produced by mechanical loading of the parent phase. The aim of this manuscript is to evaluate and compare the fatigue and actuation properties of austenitic HP-SME wires and conventional martensitic SME wires. The effect of thermomechanical cycling on the actuation response and the changes in electrical resistivity of both shape memory materials were studied by performing actuation tests at different stages of the fatigue life. Finally, the changes in the transition temperatures before and after cycling were also investigated by differential calorimetric tests.
A ground-based memory state tracker for satellite on-board computer memory
NASA Technical Reports Server (NTRS)
Quan, Alan; Angelino, Robert; Hill, Michael; Schwuttke, Ursula; Hervias, Felipe
1993-01-01
The TOPEX/POSEIDON satellite, currently in Earth orbit, will use radar altimetry to measure sea surface height over 90 percent of the world's ice-free oceans. In combination with a precise determination of the spacecraft orbit, the altimetry data will provide maps of ocean topography, which will be used to calculate the speed and direction of ocean currents worldwide. NASA's Jet Propulsion Laboratory (JPL) has primary responsibility for mission operations for TOPEX/POSEIDON. Software applications have been developed to automate mission operations tasks. This paper describes one of these applications, the Memory State Tracker, which allows the ground analyst to examine and track the contents of satellite on-board computer memory quickly and efficiently, in a human-readable format, without having to receive the data directly from the spacecraft. This is accomplished by maintaining a ground-based mirror image of spacecraft on-board computer memory.
The long memory and the transaction cost in financial markets
NASA Astrophysics Data System (ADS)
Li, Daye; Nishimura, Yusaku; Men, Ming
2016-01-01
In the present work, we investigate the fractal dimensions of 30 important stock markets from 2006 to 2013; the analysis indicates that the Hurst exponent of emerging markets shifts significantly away from the standard Brownian motion. We propose a model based on the Hurst exponent to explore the considerable profits from the predictable long-term memory. We take the transaction cost into account to justify why the market inefficiency has not been arbitraged away in the majority of cases. The empirical evidence indicates that the majority of the markets are efficient with a certain transaction cost under the no-arbitrage assumption. Furthermore, we use the Monte Carlo simulation to display "the efficient frontier" of the Hurst exponent with different transaction costs.
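The Hurst exponent analysis mentioned above is commonly performed by rescaled-range (R/S) analysis; a compact sketch (Python/NumPy; a generic textbook estimator, not the authors' exact procedure) is:

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    the slope of log(mean R/S) versus log(window size) over dyadic windows.
    H ~ 0.5 indicates standard Brownian motion; H > 0.5, long-term memory."""
    x = np.asarray(series, dtype=float)
    N = len(x)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= N // 2:
        rs = []
        for start in range(0, N - n + 1, n):
            chunk = x[start:start + n]
            z = np.cumsum(chunk - chunk.mean())  # cumulative deviations
            R = z.max() - z.min()                # range
            S = chunk.std(ddof=0)                # standard deviation
            if S > 0:
                rs.append(R / S)
        if rs:
            sizes.append(n)
            rs_vals.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope
```

In a trading context, an estimated H significantly above 0.5 signals predictable long-term memory, which the paper weighs against the transaction cost needed to exploit it.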
Not all repetition is alike: Different benefits of repetition in amnesia and normal memory
Verfaellie, Mieke; Rajaram, Suparna; Fossum, Karen; Williams, Lisa
2008-01-01
While it is well known that repetition can enhance memory in amnesia, little is known about which forms of repetition are most beneficial. This study compared the effect on recognition memory of repetition of words in the same semantic context and in varied semantic contexts. To gain insight into the mechanisms by which these forms of repetition affect performance, participants were asked to make Remember/Know judgments during recognition. These judgments were used to make inferences about the contribution of recollection and familiarity to performance. For individuals with intact memory, the two forms of repetition were equally beneficial to overall recognition, and were associated with both enhanced Remember and Know responses. However, varied repetition was associated with a higher likelihood of Remember responses than was fixed repetition. The two forms of repetition also conferred equivalent benefits on overall recognition in amnesia, but in both cases, this enhancement was manifest exclusively in enhanced Know responses. We conclude that the repetition of information, and especially repetition in varied contexts, enhances recollection in individuals with intact memory, but exclusively affects familiarity in patients with severe amnesia. PMID:18419835
Declarative memory deficits and schizophrenia: problems and prospects.
Stone, William S; Hsi, Xiaolu
2011-11-01
Cognitive deficits are among the most important factors leading to poor functional outcomes in schizophrenia, with deficits in declarative memory among the largest and most robust of these. Thus far, attempts to enhance cognition in schizophrenia have shown only modest success, which underlies increasing efforts to develop effective treatment strategies. This review is divided into three main parts. The first section delineates the nature and extent of the deficits in both patients with schizophrenia and in their adult, non-psychotic relatives. The second part focuses on structural and functional abnormalities in the hippocampus, both in people with schizophrenia and in animal studies that model relevant features of the illness. The third section views problems in declarative memory and hippocampal function from the perspective of elevated rates of common medical disorders in schizophrenia, with a focus on insulin insensitivity/diabetes. The likelihood that poor glucose regulation/availability contribute to declarative memory deficits and hippocampal abnormalities is considered, along with the possibility that schizophrenia and poor glucose regulation share common etiologic elements, and with clinical implications of this perspective for enhancing declarative memory. Copyright © 2011 Elsevier Inc. All rights reserved.
Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Chun-Yi
By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems.
I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrent throttling and a novel thread mapping algorithm to manage thread resources and improve energy efficient execution in multi-core, NUMA systems.« less
Cache and energy efficient algorithms for Nussinov's RNA Folding.
Zhao, Chunchun; Sahni, Sartaj
2017-12-06
An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms-Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2-using two programming languages-C and Java-show that our cache efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox give best run time and energy performance. The C version of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
Metamemory in children with autism: exploring "feeling-of-knowing" in episodic and semantic memory.
Wojcik, Dominika Z; Moulin, Chris J A; Souchay, Celine
2013-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder primarily affecting social function and communication. Recently, there has been an interest in whether people with ASD also show memory deficits. Studies in ASD have revealed subtle impairments on tasks requiring participants to learn new information (episodic memory), but intact performance on general knowledge tasks (semantic memory). The novelty of this study was to explore metamemory (i.e., awareness of memory performance) and to examine whether children with ASD suffer from a generalized metamemory deficit common to all forms of memory, or would only present deficits on episodic metamemory tasks. To assess metamemory functioning we administered 2 feeling-of-knowing (FOK) tasks, 1 for episodic and 1 for semantic materials. In these tasks, participants are asked to predict the likelihood of subsequently recognizing currently unrecalled information. It was found that children with autism made inaccurate FOK predictions, but only for episodic materials. A specific deficit in meta-cognition emerges for only one set of materials. We argue that this deficit can be conceived of as reflecting a deficit in recollection, stemming from an inability to cast the self in the past and retrieve information about the study episode.
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Kho Chia; Kane, Ibrahim Lawal; Rahman, Haliza Abd
In recent years, modeling in long memory properties or fractionally integrated processes in stochastic volatility has been applied in the financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers the structural break of data in order to determine true long memory time series data. Unlike usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into a Markovian process. This makes the likelihood evaluation and parametermore » estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Unlenbeck model are estimated separately using the least square estimator (lse) and quadratic generalized variations (qgv) method respectively. Finally, the empirical distribution of unobserved volatility is estimated using the particle filtering with sequential important sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model towards the index prices of FTSE Bursa Malaysia KLCI is fairly well.« less
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd
2015-02-01
In recent years, modeling in long memory properties or fractionally integrated processes in stochastic volatility has been applied in the financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers the structural break of data in order to determine true long memory time series data. Unlike usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into a Markovian process. This makes the likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Unlenbeck model are estimated separately using the least square estimator (lse) and quadratic generalized variations (qgv) method respectively. Finally, the empirical distribution of unobserved volatility is estimated using the particle filtering with sequential important sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model towards the index prices of FTSE Bursa Malaysia KLCI is fairly well.
Monds, Lauren A; Paterson, Helen M; Kemp, Richard I
2017-09-01
Many eyewitness memory situations involve negative and distressing events; however, many studies investigating "false memory" phenomena use neutral stimuli only. The aim of the present study was to determine how both the Deese-Roediger-McDermott (DRM) procedure and the Misinformation Effect Paradigm tasks were related to each other using distressing and neutral stimuli. Participants completed the DRM (with negative and neutral word lists) and viewed a distressing or neutral film. Misinformation for the film was introduced and memory was assessed. Film accuracy and misinformation susceptibility were found to be greater for those who viewed the distressing film relative to the neutral film. Accuracy responses on both tasks were related, however, susceptibility to the DRM illusion and Misinformation Effect were not. The misinformation findings support the Paradoxical Negative Emotion (PNE) hypothesis that negative stimuli will lead to remembering more accurate details but also greater likelihood of memory distortion. However, the PNE hypothesis was not supported for the DRM results. The findings also suggest that the DRM and Misinformation tasks are not equivalent and may have differences in underlying mechanisms. Future research should focus on more ecologically valid methods of assessing false memory.
Introducing memory and association mechanism into a biologically inspired visual model.
Qiao, Hong; Li, Yinlin; Tang, Tang; Wang, Peng
2014-09-01
A famous biologically inspired hierarchical model (HMAX model), which was proposed recently and corresponds to V1 to V4 of the ventral pathway in primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model is able to achieve a set of position- and scale-tolerant recognition, which is a central problem in pattern recognition. In this paper, based on some other biological experimental evidence, we introduce the memory and association mechanism into the HMAX model. The main contributions of the work are: 1) mimicking the active memory and association mechanism and adding the top down adjustment to the HMAX model, which is the first try to add the active adjustment to this famous model and 2) from the perspective of information, algorithms based on the new model can reduce the computation storage and have a good recognition performance. The new model is also applied to object recognition processes. The primary experimental results show that our method is efficient with a much lower memory requirement.
NASA Astrophysics Data System (ADS)
Wang, Chenjie; Huo, Zongliang; Liu, Ziyu; Liu, Yu; Cui, Yanxiang; Wang, Yumei; Li, Fanghua; Liu, Ming
2013-07-01
The effects of interfacial fluorination on the metal/Al2O3/HfO2/SiO2/Si (MAHOS) memory structure have been investigated. By comparing MAHOS memories with and without interfacial fluorination, it was identified that the deterioration of the performance and reliability of MAHOS memories is mainly due to the formation of an interfacial layer that generates excess oxygen vacancies at the interface. Interfacial fluorination suppresses the growth of the interfacial layer, which is confirmed by X-ray photoelectron spectroscopy depth profile analysis, increases enhanced program/erase efficiency, and improves data retention characteristics. Moreover, it was observed that fluorination at the SiO-HfO interface achieves a more effective performance enhancement than that at the HfO-AlO interface.
PREMIX: PRivacy-preserving EstiMation of Individual admiXture.
Chen, Feng; Dow, Michelle; Ding, Sijie; Lu, Yao; Jiang, Xiaoqian; Tang, Hua; Wang, Shuang
2016-01-01
In this paper we proposed a framework: PRivacy-preserving EstiMation of Individual admiXture (PREMIX) using Intel software guard extensions (SGX). SGX is a suite of software and hardware architectures to enable efficient and secure computation over confidential data. PREMIX enables multiple sites to securely collaborate on estimating individual admixture within a secure enclave inside Intel SGX. We implemented a feature selection module to identify most discriminative Single Nucleotide Polymorphism (SNP) based on informativeness and an Expectation Maximization (EM)-based Maximum Likelihood estimator to identify the individual admixture. Experimental results based on both simulation and 1000 genome data demonstrated the efficiency and accuracy of the proposed framework. PREMIX ensures a high level of security as all operations on sensitive genomic data are conducted within a secure enclave using SGX.
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al, and local cross ratios of Oakes. We know that bivariate distributions with same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. the first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. 
We study the performance of these two methods using actual and computer generated data.
Dieu-Hang, To; Grafton, R Quentin; Martínez-Espiñeira, Roberto; Garcia-Valiñas, Maria
2017-07-15
Using a household-based data set of more than 12,000 households from 11 OECD countries, we analyse the factors underlying the decision by households to adopt energy-efficient and water-efficient equipment. We evaluate the roles of both attitudes and labelling schemes on the adoption of energy and water-efficient equipment, and also the interaction and complementarity between energy and water conservation behaviours. Our findings show: one, 'green' social norms and favourable attitudes towards the environment are associated with an increased likelihood of households' adoption of energy and water-efficient appliances; two, households' purchase decisions are positively affected by their awareness, understanding, and trust of labelling schemes; and three, there is evidence of complementarity between energy conservation and water conservation behaviours. Copyright © 2017 Elsevier Ltd. All rights reserved.
Memory is Not Enough: The Neurobiological Substrates of Dynamic Cognitive Reserve.
Serra, Laura; Bruschini, Michela; Di Domenico, Carlotta; Gabrielli, Giulia Bechi; Marra, Camillo; Caltagirone, Carlo; Cercignani, Mara; Bozzali, Marco
2017-01-01
Changes in the residual memory variance are considered as a dynamic aspect of cognitive reserve (d-CR). We aimed to investigate for the first time the neural substrate associated with changes in the residual memory variance overtime in patients with amnestic mild cognitive impairment (aMCI). Thirty-four aMCI patients followed-up for 36 months and 48 healthy elderly individuals (HE) were recruited. All participants underwent 3T MRI, collecting T1-weighted images for voxel-based morphometry (VBM). They underwent an extensive neuropsychological battery, including six episodic memory tests. In patients and controls, factor analyses were used on the episodic memory scores to obtain a composite memory score (C-MS). Partial Least Square analyses were used to decompose the variance of C-MS in latent variables (LT scores), accounting for demographic variables and for the general cognitive efficiency level; linear regressions were applied on LT scores, striping off any contribution of general cognitive abilities, to obtain the residual value of memory variance, considered as an index of d-CR. LT scores and d-CR were used in discriminant analysis, in patients only. Finally, LT scores and d-CR were used as variable of interest in VBM analysis. The d-CR score was not able to correctly classify patients. In both aMCI patients and HE, LT1st and d-CR scores showed correlations with grey matter volumes in common and in specific brain areas. Using CR measures limited to assess memory function is likely less sensitive to detect the cognitive decline and predict the evolution of Alzheimer's disease. In conclusion, d-CR needs a measure of general cognition to identify conversion to Alzheimer's disease efficiently.
A case of hyperthymesia: Rethinking the role of the amygdala in autobiographical memory
Ally, Brandon A.; Hussey, Erin P.; Donahue, Manus J.
2012-01-01
Much controversy has been focused on the extent to which the amygdala belongs to the autobiographical memory core network. Early evidence suggested the amygdala played a vital role in emotional processing, likely helping to encode emotionally charged stimuli. However, recent work has highlighted the amygdala’s role in social and self-referential processing, leading to speculation that the amygdala likely supports the encoding and retrieval of autobiographical memory. Here, cognitive as well as structural and functional magnetic resonance imaging data was collected from an extremely rare individual with near-perfect autobiographical memory, or hyperthymesia. Right amygdala hypertrophy (approximately 20%) and enhanced amygdala-to-hippocampus connectivity (> 10 standard deviations) was observed in this volunteer relative to controls. Based on these findings and previous literature, we speculate that the amygdala likely charges autobiographical memories with emotional, social, and self-relevance. In heightened memory, this system may be hyperactive, allowing for many types of autobiographical information, including emotionally benign, to be more efficiently processed as self-relevant for encoding and storage. PMID:22519463
Syntactic Recursion Facilitates and Working Memory Predicts Recursive Theory of Mind
Arslan, Burcu; Hohenberger, Annette; Verbrugge, Rineke
2017-01-01
In this study, we focus on the possible roles of second-order syntactic recursion and working memory in terms of simple and complex span tasks in the development of second-order false belief reasoning. We tested 89 Turkish children in two age groups, one younger (4;6–6;5 years) and one older (6;7–8;10 years). Although second-order syntactic recursion is significantly correlated with the second-order false belief task, results of ordinal logistic regressions revealed that the main predictor of second-order false belief reasoning is complex working memory span. Unlike simple working memory and second-order syntactic recursion tasks, the complex working memory task required processing information serially with additional reasoning demands that require complex working memory strategies. Based on our results, we propose that children’s second-order theory of mind develops when they have efficient reasoning rules to process embedded beliefs serially, thus overcoming a possible serial processing bottleneck. PMID:28072823
Song, Wei; Zhang, Kai; Sun, Jinhua; Ma, Lina; Jesse, Forrest Fabian; Teng, Xiaochun; Zhou, Ying; Bao, Hechen; Chen, Shiqing; Wang, Shuai; Yang, Beimeng; Chu, Xixia; Ding, Wenhua; Du, Yasong; Cheng, Zaohuo; Wu, Bin; Chen, Shanguang; He, Guang; He, Lin; Chen, Xiaoping; Li, Weidong
2013-01-01
People with neuropsychiatric disorders such as schizophrenia often display deficits in spatial working memory and attention. Evaluating working memory and attention in schizophrenia patients is usually based on traditional tasks and the interviewer's judgment. We developed a simple Spatial Working Memory and Attention Test on Paired Symbols (SWAPS). It takes only several minutes to complete, comprising 101 trials for each subject. In this study, we tested 72 schizophrenia patients and 188 healthy volunteers in China. In a healthy control group with ages ranging from 12 to 60, the efficiency score (accuracy divided by reaction time) reached a peak in the 20-27 age range and then declined with increasing age. Importantly, schizophrenia patients failed to display this developmental trend in the same age range and adults had significant deficits compared to the control group. Our data suggests that this simple Spatial Working Memory and Attention Test on Paired Symbols can be a useful tool for studies of spatial working memory and attention in neuropsychiatric disorders.
Killing of targets by effector CD8 T cells in the mouse spleen follows the law of mass action
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganusov, Vitaly V
2009-01-01
In contrast with antibody-based vaccines, it has been difficult to measure the efficacy of T cell-based vaccines and to correlate the efficacy of CD8 T cell responses with protection again viral infections. In part, this difficulty is due to poor understanding of the in vivo efficacy of CD8 T cells produced by vaccination. Using a: recently developed experimental method of in vivo cytotoxicity we have investigated quantitative aspects of killing of peptide-pulsed targets by effector and memory CD8 T cells, specific to three epitopes of lymphocytic choriomeningitis virus (LCMV), in the mouse spleen. By analyzing data on killing of targetsmore » with varying number of epitope-specific effector and memory CD8 T cells, we find that killing of targets by effectors follows the law of mass-action, that is the death rate of peptide-pulsed targets is proportional to the frequency of CTLs in the spleen. In contrast, killing of targets by memory CD8 T cells does not follow the mass action law because the death rate of targets saturates at high frequencies of memory CD8 T cells. For both effector and memory cells, we also find little support for the killing term that includes the decrease of the death rate of targets with target cell density. Interestingly, our analysis suggests that at low CD8 T cell frequencies, memory CD8 T cells on the per capita basis are more efficient at killing peptide-pulsed targets than effectors, but at high frequencies, effectors are more efficient killers than memory T cells. Comparison of the estimated killing efficacy of effector T cells with the value that is predicted from theoretical physics and based on motility of T cells in lymphoid tissues, suggests that limiting step in the killing of peptide-pulsed targets is delivering the lethal hit and not finding the target. 
Our results thus form a basis for quantitative understanding of the process of killing of virus-infected cells by T cell responses in tissues and can be used to correlate the phenotype of vaccine-induced memory CD8 T cells with their killing efficacy in vivo.« less
Benoit, Roland G; Schacter, Daniel L
2015-08-01
It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of expected core-network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network's nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions' specialized contributions and interactions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Experience-Driven Formation of Parts-Based Representations in a Model of Layered Visual Memory
Jitsev, Jenia; von der Malsburg, Christoph
2009-01-01
Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running at the top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during the recall and the synaptic patterns comprising the memory traces. PMID:19862345
High-performance Raman memory with spatio-temporal reversal
NASA Astrophysics Data System (ADS)
Vernaz-Gris, Pierre; Tranter, Aaron D.; Everett, Jesse L.; Leung, Anthony C.; Paul, Karun V.; Campbell, Geoff T.; Lam, Ping Koy; Buchler, Ben C.
2018-05-01
A number of techniques exist to use an ensemble of atoms as a quantum memory for light. Many of these propose to use backward retrieval as a way to improve the storage and recall efficiency. We report on a demonstration of an off-resonant Raman memory that uses backward retrieval to achieve an efficiency of $65\\pm6\\%$ at a storage time of one pulse duration. The memory has a characteristic decay time of 60 $\\mu$s, corresponding to a delay-bandwidth product of $160$.
Galloway, Claire R; Lebois, Evan P; Shagarabi, Shezza L; Hernandez, Norma A; Manns, Joseph R
2014-01-01
Acetylcholine signaling through muscarinic receptors has been shown to benefit memory performance in some conditions, but pan-muscarinic activation also frequently leads to peripheral side effects. Drug therapies that selectively target M1 or M4 muscarinic receptors could potentially improve memory while minimizing side effects mediated by the other muscarinic receptor subtypes. The ability of three recently developed drugs that selectively activate M1 or M4 receptors to improve recognition memory was tested by giving Long-Evans rats subcutaneous injections of three different doses of the M1 agonist VU0364572, the M1 positive allosteric modulator BQCA or the M4 positive allosteric modulator VU0152100 before performing an object recognition memory task. VU0364572 at 0.1 mg/kg, BQCA at 1.0 mg/kg and VU0152100 at 3.0 and 30.0 mg/kg improved the memory performance of rats that performed poorly at baseline, yet the improvements in memory performance were the most statistically robust for VU0152100 at 3.0 mg/kg. The results suggested that selective M1 and M4 receptor activation each improved memory but that the likelihood of obtaining behavioral efficacy at a given dose might vary between subjects even in healthy groups depending on baseline performance. These results also highlighted the potential of drug therapies that selectively target M1 or M4 receptors to improve memory performance in individuals with impaired memory.
Crane, Catherine; Barnhofer, Thorsten; Williams, J. Mark G.
2007-01-01
Previously depressed and never-depressed individuals identified personal characteristics (self-guides) defining their ideal, ought, and feared selves. One week later they completed the autobiographical memory test (AMT). For each participant the number of AMT cues that reflected self-guide content was determined to produce an index of AMT cue self-relevance. Individuals who had never been depressed showed no significant relationship between cue self-relevance and specificity. In contrast, in previously depressed participants there was a highly significant negative correlation between cue self-relevance and specificity—the greater the number of AMT cues that reflected self-guide content, the fewer specific memories participants recalled. It is suggested that in individuals with a history of depression, cues reflecting self-guide content are more likely to prompt a shift to processing of information within the long-term self (Conway, Singer, & Tagini, 2004), increasing the likelihood that self-related semantic information will be provided in response to cues on the autobiographical memory test. PMID:17454667
Crane, Catherine; Barnhofer, Thorsten; Mark, J; Williams, G
2007-04-01
Previously depressed and never-depressed individuals identified personal characteristics (self-guides) defining their ideal, ought, and feared selves. One week later they completed the autobiographical memory test (AMT). For each participant the number of AMT cues that reflected self-guide content was determined to produce an index of AMT cue self-relevance. Individuals who had never been depressed showed no significant relationship between cue self-relevance and specificity. In contrast, in previously depressed participants there was a highly significant negative correlation between cue self-relevance and specificity--the greater the number of AMT cues that reflected self-guide content, the fewer specific memories participants recalled. It is suggested that in individuals with a history of depression, cues reflecting self-guide content are more likely to prompt a shift to processing of information within the long-term self (Conway, Singer, & Tagini, 2004), increasing the likelihood that self-related semantic information will be provided in response to cues on the autobiographical memory test.
Gender differences in working memory networks: A BrainMap meta-analysis
Hill, Ashley C.; Laird, Angela R.; Robinson, Jennifer L.
2014-01-01
Gender differences in psychological processes have been of great interest in a variety of fields. While the majority of research in this area has focused on specific differences in relation to test performance, this study sought to determine the underlying neurofunctional differences observed during working memory, a pivotal cognitive process shown to be predictive of academic achievement and intelligence. Using the BrainMap database, we performed a meta-analysis and applied activation likelihood estimation to our search set. Our results demonstrate consistent working memory networks across genders, but also provide evidence for gender-specific networks whereby females consistently activate more limbic (e.g., amygdala and hippocampus) and prefrontal structures (e.g., right inferior frontal gyrus), and males activate a distributed network inclusive of more parietal regions. These data provide a framework for future investigation using functional or effective connectivity methods to elucidate the underpinnings of gender differences in neural network recruitment during working memory tasks. PMID:25042764
Kredlow, M. Alexandra; Unger, Leslie D.; Otto, Michael W.
2015-01-01
A new understanding of the mechanisms of memory retrieval and reconsolidation holds the potential for improving exposure-based treatments. Basic research indicates that following fear extinction, safety and fear memories may compete, raising the possibility of return of fear. One possible solution is to modify original fear memories through reconsolidation interference, reducing the likelihood of return of fear. Post-retrieval extinction is a behavioral method of reconsolidation interference that has been explored in the context of conditioned fear and appetitive memory paradigms. This meta-analysis examines the magnitude of post-retrieval extinction effects and potential moderators of these effects. A PubMed and PsycINFO search was conducted through June 2014. Sixty-three comparisons examining post-retrieval extinction for preventing the return of fear or appetitive responses in animals or humans met inclusion criteria. Post-retrieval extinction demonstrated a significant, small-to-moderate effect (g = 0.40) for further reducing the return of fear in humans and a significant, large effect (g = 0.89) for preventing the return of appetitive responses in animals relative to standard extinction. For fear outcomes in animals, effects were small (g = 0.21) and non-significant, but moderated by the number of animals housed together and the duration of time between post-retrieval extinction/extinction and test. Across paradigms, these findings support the efficacy of this pre-clinical strategy for preventing the return of conditioned fear and appetitive responses. Overall, findings to date support the continued translation of post-retrieval extinction research to human and clinical applications, with particular application to the treatment of anxiety, traumatic stress, and substance use disorders. PMID:26689086
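The effect sizes reported in this abstract are Hedges' g, the small-sample-corrected standardized mean difference used in meta-analysis. As a minimal sketch of the standard computation (assuming two independent groups with known means, standard deviations, and sizes; this is illustrative, not code from the meta-analysis itself):

```python
import math

def hedges_g(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Bias-corrected standardized mean difference (Hedges' g).

    Cohen's d divides the mean difference by the pooled standard
    deviation; the factor J corrects d's small-sample bias.
    """
    df = n_treat + n_ctrl - 2
    sd_pooled = math.sqrt(((n_treat - 1) * sd_treat ** 2 +
                           (n_ctrl - 1) * sd_ctrl ** 2) / df)
    d = (mean_treat - mean_ctrl) / sd_pooled
    j = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample correction factor
    return j * d
```

With 20 subjects per arm, equal standard deviations of 2.0, and a 2.0-point mean difference, `hedges_g(10, 8, 2, 2, 20, 20)` returns just under 1.0, since J shrinks d = 1.0 slightly.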
Hypercorrection of high confidence errors in lexical representations.
Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa
2013-08-01
Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When participants' ratings of the practical value of the words were controlled, the partial correlation between confidence and the likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly.
The Relationship between Feelings-of-Knowing and Partial Knowledge for General Knowledge Questions
Norman, Elisabeth; Blakstad, Oskar; Johnsen, Øivind; Martinsen, Stig K.; Price, Mark C.
2016-01-01
Feelings of knowing (FoK) are introspective self-report ratings of the felt likelihood that one will be able to recognize a currently unrecallable memory target. Previous studies have shown that FoKs are influenced by retrieved fragment knowledge related to the target, which is compatible with the accessibility hypothesis that FoK is partly based on currently activated partial knowledge about the memory target. However, previous results have been inconsistent as to whether or not FoKs are influenced by the accuracy of such information. In our study (N = 26), we used a recall-judge-recognize procedure where stimuli were general knowledge questions. The measure of partial knowledge was wider than those applied previously, and FoK was measured before rather than after partial knowledge. The accuracy of reported partial knowledge was positively related to subsequent recognition accuracy, and FoK only predicted recognition on trials where there was correct partial knowledge. Importantly, FoK was positively related to the amount of correct partial knowledge, but did not show a similar incremental relation with incorrect knowledge. PMID:27445950
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sancho Pitarch, Jose Carlos; Kerbyson, Darren; Lang, Mike
Increasing the core-count on current and future processors is posing critical challenges to the memory subsystem to efficiently handle concurrent memory requests. The current trend to cope with this challenge is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the effectiveness of this approach on the performance of parallel scientific applications. Specifically, we explore the trade-off between employing multiple memory channels per memory controller and the use of multiple memory controllers. Experiments conducted on two current state-of-the-art multicore processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, for a wide range of production applications show that there is a diminishing return when increasing the number of memory channels per memory controller. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements, up to 28%, can be achieved in this scheme when using two memory controllers, each with one channel, compared with one controller with two memory channels.
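The quantity at stake in these channel/controller experiments is the sustained bandwidth the memory subsystem delivers under load, which can be probed with a simple copy microbenchmark. This is an illustrative sketch, not the instrumentation used in the study; it assumes a large slice-assignment copy is memory-bound rather than cache-bound:

```python
import time

def copy_bandwidth_gbs(n_bytes=256 * 1024 * 1024, repeats=5):
    """Estimate sustained memory bandwidth (GB/s) from a large copy.

    Each slice assignment reads n_bytes and writes n_bytes, so the
    traffic per repeat is 2 * n_bytes; with buffers sized far beyond
    cache, the best observed time reflects DRAM bandwidth.
    """
    src = bytearray(n_bytes)
    dst = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        dst[:] = src  # memcpy-speed bulk copy
        best = min(best, time.perf_counter() - start)
    return 2 * n_bytes / best / 1e9
```

Running one such copy per core concurrently and summing the per-process rates exposes the diminishing return the paper reports: aggregate bandwidth flattens once concurrent requests saturate a memory controller.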
A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking
NASA Technical Reports Server (NTRS)
Schmickley, Ronald D.; Bailey, David H.
1987-01-01
A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating point operation rates were measured for each under a variety of system load configurations. These were compared with similar flop measurements taken on the same system before installation of the pseudo-banking. A useful memory access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V), and as either memory-intensive or register-intensive, giving four categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier for these four categories, a scatter plot of efficiency gain vs. Mflops roughly illustrates the improvement in floating point processing speed due to pseudo-banking. On the Cray-2 system tested, this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were made for V-register codes, which was to be expected.
Wang, Longfei; Lee, Sungyoung; Gim, Jungsoo; Qiao, Dandi; Cho, Michael; Elston, Robert C; Silverman, Edwin K; Won, Sungho
2016-09-01
Family-based designs have been repeatedly shown to be powerful in detecting the significant rare variants associated with human diseases. Furthermore, human diseases are often defined by the outcomes of multiple phenotypes, and thus we expect multivariate family-based analyses may be very efficient in detecting associations with rare variants. However, few statistical methods implementing this strategy have been developed for family-based designs. In this report, we describe one such implementation: the multivariate family-based rare variant association tool (mFARVAT). mFARVAT is a quasi-likelihood-based score test for rare variant association analysis with multiple phenotypes, and tests both homogeneous and heterogeneous effects of each variant on multiple phenotypes. Simulation results show that the proposed method is generally robust and efficient for various disease models, and we identify some promising candidate genes associated with chronic obstructive pulmonary disease. The software of mFARVAT is freely available at http://healthstat.snu.ac.kr/software/mfarvat/, implemented in C++ and supported on Linux and MS Windows. © 2016 WILEY PERIODICALS, INC.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance of the codes is analyzed for a memoryless additive channel under the various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. If the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. The difference in performance between suboptimum multi-stage soft decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is found to be very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
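The single-stage optimum decoding used as the baseline in this abstract can be sketched as brute-force soft-decision maximum-likelihood decoding: correlate the received samples against every BPSK-modulated codeword and pick the best. The (7,4) Hamming generator matrix below is an illustrative stand-in, not one of the modulation codes analyzed in the paper:

```python
from itertools import product

def codewords(G):
    """All codewords of the binary linear code with generator matrix G."""
    n = len(G[0])
    return [tuple(sum(m * row[j] for m, row in zip(msg, G)) % 2
                  for j in range(n))
            for msg in product((0, 1), repeat=len(G))]

def ml_soft_decode(received, code):
    """Soft-decision ML decoding over an AWGN channel.

    With BPSK mapping bit 0 -> +1.0 and bit 1 -> -1.0, maximizing the
    likelihood is equivalent to maximizing the correlation between the
    received samples and the modulated codeword.
    """
    def corr(cw):
        return sum(r * (1.0 - 2.0 * b) for r, b in zip(received, cw))
    return max(code, key=corr)

# Systematic generator matrix of a (7,4) Hamming code (illustrative).
G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 1),
     (0, 0, 0, 1, 1, 0, 1)]
```

For received samples (0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 1.2), the single soft negative is outvoted and the all-zero codeword is returned. The cost is exponential in the code dimension (all 2^k codewords are scored), which is exactly the complexity that multi-stage decoding of the component codes reduces.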
Viscoelastic property identification from waveform reconstruction
NASA Astrophysics Data System (ADS)
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-05-01
An inverse method is proposed for determining the viscoelastic properties of material plates from the plane-wave transmitted acoustic field. The innovation lies in a two-step inversion scheme based on the well-known maximum-likelihood principle with an analytic signal formulation. In addition, by establishing analytical formulations of the plate transmission coefficient, we implement an efficient process with low sensitivity to noise, suited to both very thin plates and strongly dispersive media.
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1982-01-01
A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth-efficient modulations suitable for use on the linear satellite channel. The underlying principle is the development of receiver structures based on the maximum-likelihood decision rule. Performance prediction tools, e.g., the channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Moeller, V; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Moraes, A; Morange, N; Morel, J; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Mueller, T; Muenstermann, D; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Narayan, R; Nattermann, T; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; 
Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petteni, M; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; 
Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Przysiezniak, H; Ptacek, E; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Quilty, D; Qureshi, A; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Randle-Conde, A S; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reinsch, A; Reisin, H; Relich, M; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Ridel, M; Rieck, P; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodrigues, L; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, M; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sacerdoti, S; Saddique, A; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J 
G; Sarrazin, B; Sartisohn, G; Sasaki, O; Sasaki, Y; Satsounkevitch, I; Sauvage, G; Sauvan, E; Savard, P; Savu, D O; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherwood, P; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; 
Smirnov, Y; Smirnova, L N; Smirnova, O; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snow, J; Snyder, S; Sobie, R; Socher, F; Sodomka, J; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, V; Sopko, B; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spighi, R; Spigo, G; Spousta, M; Spreitzer, T; Spurlock, B; Denis, R D St; Staerz, S; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steele, G; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tamsett, M C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; 
Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vankov, P; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; 
Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, W; Wagner, P; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, J A; Wilson, A; Wingerter-Seez, I; Winklmeier, F; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, 
S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L
A likelihood-based discriminant for the identification of quark- and gluon-initiated jets is built and validated using 4.7 fb[Formula: see text] of proton-proton collision data at [Formula: see text] [Formula: see text] collected with the ATLAS detector at the LHC. Data samples with enriched quark or gluon content are used in the construction and validation of templates of jet properties that are the input to the likelihood-based discriminant. The discriminating power of the jet tagger is established in both data and Monte Carlo samples within a systematic uncertainty of [Formula: see text] 10-20 %. In data, light-quark jets can be tagged with an efficiency of [Formula: see text] while achieving a gluon-jet mis-tag rate of [Formula: see text] in a [Formula: see text] range between [Formula: see text] and [Formula: see text] for jets in the acceptance of the tracker. The rejection of gluon-jets found in the data is significantly below what is attainable using a Pythia 6 Monte Carlo simulation, where gluon-jet mis-tag rates of 10 % can be reached for a 50 % selection efficiency of light-quark jets using the same jet properties.
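The tagger described above is, at its core, a product of per-observable likelihood ratios built from quark and gluon templates. The following is a hedged sketch of that idea only: the observable names, the exponential toy templates, and the `qg_likelihood` helper are invented for illustration and are not the ATLAS implementation.

```python
import math

def qg_likelihood(jet_vars, quark_pdfs, gluon_pdfs):
    """Combine independent 1D templates into a quark-vs-gluon discriminant.

    jet_vars:   dict of observable name -> measured value for one jet
    quark_pdfs: dict of observable name -> pdf under the quark hypothesis
    gluon_pdfs: same under the gluon hypothesis
    Returns L_q / (L_q + L_g); values near 1 are quark-like.
    """
    log_lq = sum(math.log(quark_pdfs[v](x)) for v, x in jet_vars.items())
    log_lg = sum(math.log(gluon_pdfs[v](x)) for v, x in jet_vars.items())
    return 1.0 / (1.0 + math.exp(log_lg - log_lq))

# Toy templates: gluon jets tend to have more constituents and a wider width.
quark_pdfs = {"ntrk":  lambda n: math.exp(-n / 5.0) / 5.0,
              "width": lambda w: math.exp(-w / 0.05) / 0.05}
gluon_pdfs = {"ntrk":  lambda n: math.exp(-n / 9.0) / 9.0,
              "width": lambda w: math.exp(-w / 0.12) / 0.12}

# A narrow, low-multiplicity jet should come out quark-like (d > 0.5)
d = qg_likelihood({"ntrk": 4, "width": 0.03}, quark_pdfs, gluon_pdfs)
```

In the real tagger the templates are histograms of jet properties measured in quark- and gluon-enriched data samples rather than analytic densities.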
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
Cognitive Control Network Contributions to Memory-Guided Visual Attention.
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2016-05-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN (posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus) exhibited the greatest specificity for memory-guided attention. These three regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Anderson, Eric C
2012-11-08
Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
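The Mendelian-incompatibility count at the heart of this approach is simple to state: at each SNP, an offspring genotype is incompatible with a putative parent pair if no combination of alleles the two parents can transmit produces it. A minimal sketch follows; the 0/1/2 allele-count coding and the helper names are assumptions of this illustration, not SNPPIT's API.

```python
def transmissible(g):
    # Alleles (0 or 1) that a parent with genotype g (count of allele "1")
    # can pass on: homozygotes transmit one allele, heterozygotes either.
    return {0: {0}, 1: {0, 1}, 2: {1}}[g]

def mendel_incompatibilities(off, pa, pb):
    """Count loci where the offspring genotype cannot arise from the pair.

    Genotypes are allele counts (0/1/2); None marks missing data (skipped).
    """
    count = 0
    for o, a, b in zip(off, pa, pb):
        if None in (o, a, b):
            continue
        if not any(o == x + y
                   for x in transmissible(a)
                   for y in transmissible(b)):
            count += 1
    return count
```

The paper's contribution is a Markov chain over this cumulative count along the SNP panel, which makes simulation-based confidence estimates fast; the counting step itself is as simple as above.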
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance is assessed in terms of total mean square error and total relative deviation. The modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic is found to provide more efficient and precise parameter estimates than the other estimators considered. The simulation results were further confirmed using two real-life examples in which maximum likelihood and moment estimators were also considered.
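For context, the traditional two-percentile estimator that these modifications build on can be sketched as follows. For a Pareto distribution with shape a and scale xm, the quantile function is x_p = xm (1-p)^(-1/a), so two sample quantiles pin down both parameters. The quantile convention, percentile choices, and function name below are illustrative assumptions; the paper's modified estimators differ in how the anchor points are chosen.

```python
import math
import random

def pareto_percentile_fit(sample, p1=0.25, p2=0.75):
    """Traditional two-percentile estimator for Pareto(shape a, scale xm).

    Inverting x_p = xm * (1 - p)**(-1/a) at two percentiles gives
    a  = ln((1-p1)/(1-p2)) / ln(x_p2 / x_p1)
    xm = x_p1 * (1 - p1)**(1/a)
    """
    s = sorted(sample)
    q = lambda p: s[int(p * (len(s) - 1))]   # crude empirical quantile
    x1, x2 = q(p1), q(p2)
    a = math.log((1 - p1) / (1 - p2)) / math.log(x2 / x1)
    xm = x1 * (1 - p1) ** (1 / a)
    return a, xm

# Simulate Pareto(a=3, xm=2) by inverse-CDF sampling and recover the parameters
random.seed(0)
sample = [2.0 * (1.0 - random.random()) ** (-1.0 / 3.0) for _ in range(20000)]
a_hat, xm_hat = pareto_percentile_fit(sample)
```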
ERIC Educational Resources Information Center
Kensinger, Elizabeth A.; Schacter, Daniel L.
2007-01-01
Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…
ERIC Educational Resources Information Center
Spaniol, Julia; Davidson, Patrick S. R.; Kim, Alice S. N.; Han, Hua; Moscovitch, Morris; Grady, Cheryl L.
2009-01-01
The recent surge in event-related fMRI studies of episodic memory has generated a wealth of information about the neural correlates of encoding and retrieval processes. However, interpretation of individual studies is hampered by methodological differences, and by the fact that sample sizes are typically small. We submitted results from studies of…
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125, 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
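The cited algorithm is the affine-invariant ensemble sampler of the emcee package. The core idea of replacing a grid scan with Monte Carlo exploration of the χ2 surface can be illustrated even with a plain random-walk Metropolis sampler, which is the deliberate simplification below; the two-parameter toy χ2 stands in for a real oscillation fit and is an assumption of this sketch.

```python
import math
import random

def metropolis(chi2, theta0, step, n_steps, rng):
    """Random-walk Metropolis sampling of the posterior ∝ exp(-chi2/2)."""
    chain, theta, c = [], list(theta0), chi2(theta0)
    for _ in range(n_steps):
        prop = [t + rng.gauss(0.0, step) for t in theta]
        cp = chi2(prop)
        # Accept with probability min(1, exp((c - cp)/2))
        if cp < c or rng.random() < math.exp((c - cp) / 2.0):
            theta, c = prop, cp
        chain.append(list(theta))
    return chain

# Toy two-parameter chi^2 surface with its minimum at (0.5, -1.0)
chi2 = lambda th: (th[0] - 0.5) ** 2 + (th[1] + 1.0) ** 2
rng = random.Random(1)
chain = metropolis(chi2, [0.0, 0.0], 0.5, 20000, rng)
post = chain[2000:]                      # discard burn-in
mean0 = sum(t[0] for t in post) / len(post)
```

Marginalizing over a parameter then amounts to histogramming the corresponding chain component, with no grid needed, which is why the cost scales gently as parameters are added.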
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
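The likelihood ratio (score-function) idea behind these estimators, including the variance-reducing effect of centering, can be shown on a toy static example: d/dθ E_θ[f(X)] = E_θ[f(X) · ∂ log p_θ(X)/∂θ]. The exponential random variable below replaces the stochastic dynamics of the paper, and the covariance/Fisher-matrix formulation is not reproduced; this is only an illustrative sketch.

```python
import random

def lr_sensitivity(f, n, theta, rng):
    """Likelihood-ratio (score-function) estimate of d/dtheta E_theta[f(X)].

    X ~ Exponential(rate=theta), so log p = log(theta) - theta*x and the
    score is 1/theta - x.  Centering f by its sample mean is a simple
    zero-mean baseline that reduces the estimator's variance.
    """
    xs = [rng.expovariate(theta) for _ in range(n)]
    f_bar = sum(f(x) for x in xs) / n
    return sum((f(x) - f_bar) * (1.0 / theta - x) for x in xs) / n

rng = random.Random(0)
# For f(x) = x we have E[X] = 1/theta, so the exact sensitivity at
# theta = 2 is -1/theta**2 = -0.25.
est = lr_sensitivity(lambda x: x, 200_000, 2.0, rng)
```

Because the score has mean zero, the centered estimator is unbiased up to O(1/n) while its variance stays bounded, which is the static analogue of the constant-in-time variance emphasized in the abstract.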
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Execution time supports for adaptive scientific algorithms on distributed memory machines
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey
1990-01-01
Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
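The gather/scatter pattern these primitives support can be sketched in a few lines; the function names and the block distribution below are illustrative, not the actual PARTI API:

```python
# A distributed array is modeled as per-process local arrays plus a
# translation table mapping global indices to (owner, local offset); the
# "communication schedule" derived at runtime is, in essence, the grouping
# of requested global indices by owning process.

def build_translation(block_sizes):
    """Block mapping: global index -> (owner process, local offset)."""
    table, g = {}, 0
    for owner, size in enumerate(block_sizes):
        for off in range(size):
            table[g] = (owner, off)
            g += 1
    return table

def gather(local_arrays, table, global_indices):
    """Fetch the values for a global index set from their owners."""
    return [local_arrays[o][off] for o, off in (table[g] for g in global_indices)]

def scatter(local_arrays, table, global_indices, values):
    """Write values back to the owning processes."""
    for g, v in zip(global_indices, values):
        o, off = table[g]
        local_arrays[o][off] = v

# Two "processes", each owning 4 elements of a global array of length 8.
locals_ = [[0, 1, 2, 3], [4, 5, 6, 7]]
table = build_translation([4, 4])
vals = gather(locals_, table, [6, 1, 3])
print(vals)  # [6, 1, 3]
scatter(locals_, table, [0], [99])
print(locals_[0][0])  # 99
```

In a real distributed run the per-owner index groups would be turned into send/receive messages; here the "appearance of shared memory" is just indexing through the translation table.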
Development of Low Parasitic Light Sensitivity and Low Dark Current 2.8 μm Global Shutter Pixel †
Yokoyama, Toshifumi; Tsutsui, Masafumi; Suzuki, Masakatsu; Nishi, Yoshiaki; Mizuno, Ikuo; Lahav, Assaf
2018-01-01
We developed a low parasitic light sensitivity (PLS) and low dark current 2.8 μm global shutter pixel. We propose a new inner lens design concept to realize both low PLS and high quantum efficiency (QE). 1/PLS is 7700 and QE is 62% at a wavelength of 530 nm. We also propose a new storage-gate based memory node for low dark current. P-type implants and negative gate biasing are introduced to suppress dark current at the surface of the memory node. This memory node structure shows the world's smallest dark current of 9.5 e−/s at 60 °C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Peong-Hwa; Lee, Seo-Won; Song, Kyungmi
2015-11-16
Interfacial Dzyaloshinskii-Moriya interaction in ferromagnet/heavy metal bilayers has recently attracted considerable interest as it offers an efficient control of domain walls and the stabilization of magnetic skyrmions. However, its effect on the performance of perpendicular spin transfer torque memory has not been explored yet. We show, based on numerical studies, that the interfacial Dzyaloshinskii-Moriya interaction decreases the thermal energy barrier while increasing the switching current. As a high thermal energy barrier as well as a low switching current is required for the commercialization of spin torque memory, our results suggest that the interfacial Dzyaloshinskii-Moriya interaction should be minimized for spin torque memory applications.
Face recognition by applying wavelet subband representation and kernel associative memory.
Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam
2004-01-01
In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided and our proposed scheme offers better recognition accuracy on all of the face datasets.
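The classify-by-reconstruction-error idea can be sketched with a plain linear autoassociative memory, used here as a simplified stand-in for the paper's kernel AM models (the data, dimensions and seed below are synthetic):

```python
import numpy as np

# Each "person" gets a memory: the orthogonal projector onto the span of
# that person's training vectors. A probe is assigned to the person whose
# memory reconstructs it with the smallest error.
rng = np.random.default_rng(0)

def make_memory(train):
    """train: (n_samples, dim) -> projector onto the span of the samples."""
    X = np.asarray(train, float).T          # columns = prototypes
    return X @ np.linalg.pinv(X)

def classify(memories, probe):
    errs = [np.linalg.norm(probe - M @ probe) for M in memories]
    return int(np.argmin(errs))

dim = 20
basis0 = rng.normal(size=(3, dim))          # person 0's "face" prototypes
basis1 = rng.normal(size=(3, dim))          # person 1's "face" prototypes
memories = [make_memory(basis0), make_memory(basis1)]

# A probe close to person 0's subspace, with a little noise.
probe = 0.7 * basis0[0] + 0.3 * basis0[1] + 0.01 * rng.normal(size=dim)
print(classify(memories, probe))  # 0
```

The kernel step of the paper replaces this linear projection with a nonlinear mapping, but the decision rule (smallest reconstruction error wins) is the same.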
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.
El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher
2016-11-01
The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves comparable assembly accuracy and contiguity to other competing tools. Our method reduces the memory usage by [Formula: see text] compared to the resource-efficient assemblers using benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered as a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. Availability: https://github.com/SaraEl-Metwally/LightAssembler. Contact: sarah_almetwally4@mans.edu.eg. Supplementary information: Supplementary data are available at Bioinformatics online.
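A toy version of the membership structure this approach builds on, a Bloom filter over k-mers (the sizes and hash choices are illustrative, not LightAssembler's cache-oblivious design):

```python
import hashlib

# A Bloom filter stores set membership in a fixed bit array: k hash
# positions are set per item. Queries can yield false positives (rarely,
# for a suitably sized array) but never false negatives.
class BloomFilter:
    def __init__(self, m_bits=8192, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def kmers(read, k):
    return (read[i:i + k] for i in range(len(read) - k + 1))

bf = BloomFilter()
for km in kmers("ACGTACGTGGA", 4):
    bf.add(km)
print("ACGT" in bf)  # True (it was inserted; no false negatives possible)
print("TTTT" in bf)  # almost surely False (false positives are possible but rare)
```

The memory saving is the point: membership for millions of k-mers costs a fixed bit array rather than storing the k-mers themselves, which is why a pair of such filters can stand in for the full assembly graph's node set.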
Hennig-Fast, Kristina; Meister, Franziska; Frodl, Thomas; Beraldi, Anna; Padberg, Frank; Engel, Rolf R; Reiser, Maximilian; Möller, Hans-Jürgen; Meindl, Thomas
2008-10-01
Autobiographical memory relies on complex interactions between episodic memory contents, associated emotions and a sense of self-continuity over the course of one's life. This paper reports a study based upon the case of the patient NN who suffered from a complete loss of autobiographical memory and awareness of identity subsequent to a dissociative fugue. Neuropsychological, behavioral, and functional neuroimaging tests converged on the conclusion that NN suffered from a selective retrograde amnesia following an episode of dissociative fugue, during which he had lost explicit knowledge and vivid memory of his personal past. NN's loss of self-related memories was mirrored in neurobiological changes after the fugue whereas his semantic memory remained intact. Although NN still claimed to suffer from a stable loss of autobiographical, self-relevant memories 1 year after the fugue state, a proportionate improvement in underlying fronto-temporal neuronal networks was evident at this point in time. In spite of this improvement in neuronal activation, his anterograde visual memory had declined. It is posited that our data provide evidence for the important role of visual processing in autobiographical memory as well as for the efficiency of protective control mechanisms that constitute functional retrograde amnesia.
Efficient entanglement distillation without quantum memory.
Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman
2016-05-31
Entanglement distribution between distant parties is an essential component of most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
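A classic coding scheme that exploits exactly this kind of asymmetry is the Berger code, which detects all unidirectional (e.g. all 1-to-0) error patterns; the sketch below illustrates the principle and is not the patented circuit:

```python
# Berger code: append the count of zero bits in the data word as the check.
# Unidirectional 1 -> 0 flips can only raise the data's zero count (and can
# only lower a stored binary check value), so any such error pattern breaks
# the equality test and is detected.

def berger_encode(bits):
    """Return (data word, check symbol = number of zero bits)."""
    return list(bits), bits.count(0)

def berger_check(data, check):
    """Valid iff the observed zero count matches the stored check."""
    return data.count(0) == check

data, chk = berger_encode([1, 0, 1, 1, 0, 1])
print(berger_check(data, chk))        # True

# A single 1 -> 0 flip in the data raises its zero count above the check:
corrupted = data[:]
corrupted[0] = 0
print(berger_check(corrupted, chk))   # False
```

The check symbol needs only about log2(n) bits for an n-bit word, which is the sense in which asymmetric codes can be cheaper than symmetric ones for the same class of faults.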
Motivation and short-term memory in visual search: Attention's accelerator revisited.
Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton
2018-05-01
A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates.
Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications
NASA Astrophysics Data System (ADS)
Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei
2007-04-01
In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.
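The core computation such a PE array parallelizes is full-search block matching; a minimal sketch (the frame contents, block size and search range are illustrative):

```python
# For each candidate displacement within a search window, score the current
# block against the reference frame by sum of absolute differences (SAD)
# and keep the displacement with the smallest score as the motion vector.

def sad(cur, ref, bx, by, dx, dy, n):
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def best_vector(cur, ref, bx, by, n, r):
    best = None
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if (0 <= bx + dx and bx + dx + n <= len(ref[0]) and
                    0 <= by + dy and by + dy + n <= len(ref)):
                score = sad(cur, ref, bx, by, dx, dy, n)
                if best is None or score < best[0]:
                    best = (score, dx, dy)
    return best

# Reference frame has a bright 2x2 patch at (2, 2); the current frame has
# the same patch shifted to (3, 3).
ref = [[0] * 8 for _ in range(8)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 9
cur = [[0] * 8 for _ in range(8)]
cur[3][3] = cur[3][4] = cur[4][3] = cur[4][4] = 9
print(best_vector(cur, ref, 3, 3, 2, 2))  # (0, -1, -1): zero SAD at offset (-1, -1)
```

Every (dx, dy) candidate is independent, which is why the SAD loop maps so naturally onto scalable PE rings with interleaved on-chip memories.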
Ihne, Jessica L; Gallagher, Natalie M; Sullivan, Marie; Callicott, Joseph H; Green, Adam E
2016-01-01
Perhaps the most widely studied effect to emerge from the combination of neuroimaging and human genetics is the association of the COMT-Val(108/158)Met polymorphism with prefrontal activity during working memory. COMT-Val is a putative risk factor in schizophrenia, which is characterized by disordered prefrontal function. Work in healthy populations has sought to characterize mechanisms by which the valine (Val) allele may lead to disadvantaged prefrontal cognition. Lower activity in methionine (Met) carriers has been interpreted as advantageous neural efficiency. Notably, however, studies reporting COMT effects on neural efficiency have generally not reported working memory performance effects. Those studies have employed relatively low/easy working memory loads. Higher loads are known to elicit individual differences in working memory performance that are not visible at lower loads. If COMT-Met confers greater neural efficiency when working memory is easy, a reasonable prediction is that Met carriers will be better able to cope with increasing demand for neural resources when working memory becomes difficult. To our knowledge, this prediction has thus far gone untested. Here, we tested performance on three working memory tasks. Performance on each task was measured at multiple levels of load/difficulty, including loads more demanding than those used in prior studies. We found no genotype-by-load interactions or main effects of COMT genotype on accuracy or reaction time. Indeed, even testing for performance differences at each load of each task failed to find a single significant effect of COMT genotype. Thus, even if COMT genotype has the effects on prefrontal efficiency that prior work has suggested, such effects may not directly impact high-load working memory ability. 
The present findings accord with previous evidence that behavioral effects of COMT are small or nonexistent and, more broadly, with a growing consensus that substantial effects on phenotype will not emerge from candidate gene studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.
DSP code optimization based on cache
NASA Astrophysics Data System (ADS)
Xu, Chengfa; Li, Chengcheng; Tang, Bin
2013-03-01
A DSP program often runs less efficiently on the board than in software simulation during program development, mainly because of the user's improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.
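One representative cache-oriented optimization of the kind summarized here is loop tiling (blocking); the sketch below shows the structure in Python, while the actual cache benefit only appears in compiled DSP/C code:

```python
# Blocked (tiled) matrix transpose: restructure the loops so each B x B
# tile is read and written while it is still resident in cache, instead of
# striding through a whole row or column per iteration.

def transpose_tiled(a, block=4):
    n, m = len(a), len(a[0])
    out = [[0] * n for _ in range(m)]
    for ii in range(0, n, block):              # tile row start
        for jj in range(0, m, block):          # tile column start
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, m)):
                    out[j][i] = a[i][j]
    return out

a = [[i * 10 + j for j in range(6)] for i in range(5)]
t = transpose_tiled(a)
print(t[2][4] == a[4][2])  # True: same result as a naive transpose
```

On a cached DSP such as the C6455, choosing the tile size so a tile of each array fits in L1D is what turns this purely structural rewrite into a throughput win.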
Mathy, Fabien; Fartoukh, Michael; Gauvrit, Nicolas; Guida, Alessandro
2016-01-01
Both adults and children (by the time they are 2-3 years old) have a general ability to recode information to increase memory efficiency. This paper aims to evaluate the ability of untrained children aged 6-10 years old to deploy such a recoding process in immediate memory. A large sample of 374 children were given a task of immediate serial report based on SIMON®, a classic memory game made of four colored buttons (red, green, yellow, blue) requiring players to reproduce a sequence of colors within which repetitions eventually occur. It was hypothesized that a primitive ability across all ages (since theoretically already available in toddlers) to detect redundancies allows the span to increase whenever information can be recoded on the fly. The chunkable condition prompted the formation of chunks based on the perceived structure of color repetition within to-be-recalled sequences of colors. Our results show a similar linear improvement of memory span with age for both chunkable and non-chunkable conditions. The amount of information retained in immediate memory systematically increased for the groupable sequences across all age groups, independently of the average age-group span that was measured on sequences that contained fewer repetitions. This result shows that chunking gives young children an equal benefit as older children. We discuss the role of recoding in the expansion of capacity in immediate memory and the potential role of data compression in the formation of chunks in long-term memory.
Decreasing Cognitive Load for Learners: Strategy of Web-Based Foreign Language Learning
ERIC Educational Resources Information Center
Zhang, Jianfeng
2013-01-01
Cognitive load is one of the important factors that influence the effectiveness and efficiency of web-based foreign language learning. Cognitive load theory assumes that human's cognitive capacity in working memory is limited and if it overloads, learning will be hampered, so that high level of cognitive load can affect the performance of learning…
Effect of virtual memory on efficient solution of two model problems
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.
1977-01-01
Computers with virtual memory architecture allow programs to be written as if they were small enough to be contained in memory. Two types of problems are investigated to show that this luxury can lead to quite inefficient performance if the programmer does not interact strongly with the characteristics of the operating system when developing the program. The two problems considered are the simultaneous solution of a large linear system of equations by Gaussian elimination and a model three-dimensional finite-difference problem. Runs on the Control Data STAR-100 computer are made to demonstrate the inefficiencies of programming the problems in the manner one naturally would if the problems were indeed small enough to be contained in memory. Program redesigns are presented which achieve large improvements in performance through changes in the computational procedure and the data base arrangement.
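The page-thrashing effect described here can be modeled with a toy fault counter (the page size and the single-resident-page policy are illustrative simplifications of a real paging system):

```python
# A matrix stored row-major across fixed-size pages, with a fault counter
# under a one-resident-page policy. Traversing in storage order touches
# each page once; traversing column-wise with a stride larger than a page
# refetches a page on every access.
PAGE = 8  # elements per page (illustrative)

def count_faults(n_rows, n_cols, order):
    resident, faults = None, 0
    for i, j in order(n_rows, n_cols):
        page = (i * n_cols + j) // PAGE    # row-major storage
        if page != resident:
            faults += 1
            resident = page
    return faults

row_major = lambda r, c: ((i, j) for i in range(r) for j in range(c))
col_major = lambda r, c: ((i, j) for j in range(c) for i in range(r))

print(count_faults(16, 16, row_major))  # 32: each of the 32 pages loaded once
print(count_faults(16, 16, col_major))  # 256: every access lands on a new page
```

Reordering the computation to match the data base arrangement, as the paper's program redesigns do, is exactly the move from the second traversal to the first.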
A multilevel nonvolatile magnetoelectric memory
NASA Astrophysics Data System (ADS)
Shen, Jianxin; Cong, Junzhuang; Shang, Dashan; Chai, Yisheng; Shen, Shipeng; Zhai, Kun; Sun, Young
2016-09-01
The coexistence and coupling between magnetization and electric polarization in multiferroic materials provide extra degrees of freedom for creating next-generation memory devices. A variety of concepts of multiferroic or magnetoelectric memories have been proposed and explored in the past decade. Here we propose a new principle to realize a multilevel nonvolatile memory based on the multiple states of the magnetoelectric coefficient (α) of multiferroics. Because the states of α depend on the relative orientation between magnetization and polarization, one can reach different levels of α by controlling the ratio of up and down ferroelectric domains with external electric fields. Our experiments in a device made of the PMN-PT/Terfenol-D multiferroic heterostructure confirm that the states of α can be well controlled between positive and negative by applying selective electric fields. Consequently, two-level, four-level, and eight-level nonvolatile memory devices are demonstrated at room temperature. This kind of multilevel magnetoelectric memory retains all the advantages of ferroelectric random access memory but overcomes the drawback of destructive reading of polarization. In contrast, the reading of α is nondestructive and highly efficient in a parallel way, with an independent reading coil shared by all the memory cells.
Individual differences in imagination inflation.
Heaps, C; Nash, M
1999-06-01
Garry, Manning, Loftus, and Sherman (1996) found that when adult subjects imagined childhood events, these events were subsequently judged as more likely to have occurred than were not-imagined events. The authors termed this effect imagination inflation. We replicated the effect, using a novel set of Life Events Inventory events. Further, we tested whether the effect is related to four subject characteristics possibly associated with false memory creation. The extent to which subjects inflated judged likelihood following imagined events was associated with indices of hypnotic suggestibility and dissociativity, but not with vividness of imagery or interrogative suggestibility. Results suggest that imagination plays a role in subsequent likelihood judgments regarding childhood events, and that some individuals are more likely than others to experience imagination inflation.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources.
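The estimation step can be illustrated with a generic Metropolis sampler on a toy linear observation model (a stand-in for the paper's transport-equation likelihood and sparse-grid surrogate; all names and constants below are illustrative):

```python
import math
import random

# Recover a toy "release strength" s from noisy observations y = s*g + e,
# where the constant g plays the role of the transport model's response at
# the chosen sampling well.
random.seed(1)
g, sigma, s_true = 2.0, 0.5, 3.0
ys = [s_true * g + random.gauss(0, sigma) for _ in range(50)]

def log_post(s):
    # Flat prior on s; Gaussian likelihood, up to an additive constant.
    return -sum((y - s * g) ** 2 for y in ys) / (2 * sigma ** 2)

s, chain = 0.0, []
for _ in range(5000):
    prop = s + random.gauss(0, 0.3)                  # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(s):
        s = prop                                     # accept
    chain.append(s)

est = sum(chain[1000:]) / len(chain[1000:])          # posterior mean after burn-in
print(est)  # close to s_true = 3.0
```

In the paper's setting, each `log_post` evaluation would require a transport simulation, which is why replacing it with a sparse-grid surrogate pays off so heavily inside a loop like this.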
Parametric State Space Structuring
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Tilgner, Marco
1997-01-01
Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of a continuous-time Markov chain are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
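The memory saving that motivates Kronecker-structured methods can be shown directly: a matrix-vector product with kron(A, B) never needs the full product matrix. The identity used below is standard; the matrices are random stand-ins for a generator's factors:

```python
import numpy as np

# With A of shape (p, q), B of shape (r, s), and v reshaped row-major into
# X of shape (q, s), the product kron(A, B) @ v equals (A @ X @ B.T)
# flattened row-major, so only the small factors are ever stored.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(4, 4))
v = rng.normal(size=12)

def kron_mv(A, B, v):
    X = v.reshape(A.shape[1], B.shape[1])
    return (A @ X @ B.T).reshape(-1)

dense = np.kron(A, B) @ v                    # the memory-hungry reference
print(np.allclose(kron_mv(A, B, v), dense))  # True
```

For a model with many submodels the factored form is the difference between storing a handful of small matrices and one matrix whose dimension is the product of all submodel state space sizes.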
Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye
2014-02-01
The fast Fourier transform (FFT) is a basic tool in remote sensing image processing. As the capacity to capture remote sensing imagery grows, with hyperspectral, high-spatial-resolution and high-temporal-resolution data, efficiently applying FFT technology to huge remote sensing images has become a critical step and a research hotspot in image processing. The FFT, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. The CUFFT library is a GPU-based FFT implementation, while FFTW is an FFT library developed for CPUs on the PC platform and is currently the fastest CPU-based FFT function library. However, both share a common problem: once the available memory is smaller than the image, the FFT computation fails with an out-of-memory error. To address this problem, a GPU- and partitioning-based Huge Remote sensing Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory problem is solved. Moreover, the method is validated by experiments on CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the results and speeds up the processing, saving computation time while achieving sound results.
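The row-column decomposition that underlies such partitioned FFTs can be sketched in a few lines: a 2-D FFT equals 1-D FFTs along rows followed by 1-D FFTs along columns, and each pass can be processed in fixed-size chunks so only a bounded slice must be resident at once. The NumPy sketch below keeps the output in memory for simplicity; a true out-of-core or GPU version would stream each chunk to and from storage:

```python
import numpy as np

def chunked_fft2(img, chunk=64):
    # 2-D FFT via row-column decomposition, processed chunk by chunk;
    # only `chunk` rows (then `chunk` columns) are transformed at a time
    out = np.empty(img.shape, dtype=complex)
    for r in range(0, img.shape[0], chunk):
        out[r:r + chunk] = np.fft.fft(img[r:r + chunk], axis=1)
    for c in range(0, img.shape[1], chunk):
        out[:, c:c + chunk] = np.fft.fft(out[:, c:c + chunk], axis=0)
    return out

img = np.random.default_rng(1).random((128, 96))
result = chunked_fft2(img)
```

Because the two 1-D passes are independent per line, the chunk size can be tuned to whatever fits in device memory without changing the result.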
Efficiency of Energy Harvesting in Ni-Mn-Ga Shape Memory Alloys
NASA Astrophysics Data System (ADS)
Lindquist, Paul; Hobza, Tony; Patrick, Charles; Müllner, Peter
2018-03-01
Many researchers have reported on the voltage and power generated while energy harvesting using Ni-Mn-Ga shape memory alloys; few researchers report on the power conversion efficiency of energy harvesting. We measured the magneto-mechanical behavior and energy harvesting of Ni-Mn-Ga shape memory alloys to quantify the efficiency of energy harvesting using the inverse magneto-plastic effect. At low frequencies, less than 150 Hz, the power conversion efficiency is less than 0.1%. Power conversion efficiency increases with (i) increasing actuation frequency, (ii) increasing actuation stroke, and (iii) decreasing twinning stress. Extrapolating the results of low-frequency experiments to the kHz actuation regime yields a power conversion factor of about 20% for 3 kHz actuation frequency, 7% actuation strain, and 0.05 MPa twinning stress.
Forming free and ultralow-power erase operation in atomically crystal TiO2 resistive switching
NASA Astrophysics Data System (ADS)
Dai, Yawei; Bao, Wenzhong; Hu, Linfeng; Liu, Chunsen; Yan, Xiao; Chen, Lin; Sun, Qingqing; Ding, Shijin; Zhou, Peng; Zhang, David Wei
2017-06-01
Two-dimensional layered materials (2DLMs) have attracted broad interest, from fundamental science to industrial applications. Their use in memory devices has been demonstrated, yet much remains to be explored regarding optimal materials and device structures for practical application. In this work, forming-free, bipolar resistive switching behavior is demonstrated in a 2D TiO2-based resistive random access memory (RRAM). A physical adsorption method is adopted to efficiently achieve a high-quality, continuous 2D TiO2 network. The 2D TiO2 RRAM devices exhibit superior properties such as fast switching capability (20 ns erase operation) and extremely low erase energy consumption (0.16 fJ). Furthermore, the resistive switching mechanism is attributed to the formation and rupture of an oxygen-vacancy-based percolation path in the 2D TiO2 crystals. Our results pave the way for the implementation of high-performance 2DLM-based RRAM in next-generation non-volatile memory (NVM) applications.
NASA Astrophysics Data System (ADS)
Nji, Jones; Li, Guoqiang
2012-02-01
The purpose of this study is to investigate the potential of a shape-memory-polymer (SMP)-based particulate composite to heal structural-length-scale damage with small thermoplastic additive contents through a close-then-heal (CTH) self-healing scheme that was introduced in a previous study (Li and Uppu 2010 Compos. Sci. Technol. 70 1419-27). The idea is to achieve reasonable healing efficiencies with minimal sacrifice in structural load capacity. By first closing cracks, the gap between the two crack surfaces is narrowed and a smaller amount of thermoplastic particles is required to achieve healing. The particulate composite was fabricated by dispersing copolyester thermoplastic particles in a shape memory polymer matrix. It is found that, for small thermoplastic contents of less than 10%, the CTH scheme followed in this study heals structural-length-scale damage in the SMP particulate composite to a meaningful extent and with less sacrifice of structural capacity.
Filamentary model in resistive switching materials
NASA Astrophysics Data System (ADS)
Jasmin, Alladin C.
2017-12-01
The need for next generation computer devices is increasing as the demand for efficient data processing increases. The amount of data generated every second also increases which requires large data storage devices. Oxide-based memory devices are being studied to explore new research frontiers thanks to modern advances in nanofabrication. Various oxide materials are studied as active layers for non-volatile memory. This technology has potential application in resistive random-access-memory (ReRAM) and can be easily integrated in CMOS technologies. The long term perspective of this research field is to develop devices which mimic how the brain processes information. To realize such application, a thorough understanding of the charge transport and switching mechanism is important. A new perspective in the multistate resistive switching based on current-induced filament dynamics will be discussed. A simple equivalent circuit of the device gives quantitative information about the nature of the conducting filament at different resistance states.
Das, Ravi K.; Gale, Grace; Hennessy, Vanessa; Kamboj, Sunjeev K.
2018-01-01
Maladaptive reward memories (MRMs) can become unstable following retrieval under certain conditions, allowing their modification by subsequent new learning. However, robust (well-rehearsed) and chronologically old MRMs, such as those underlying substance use disorders, do not destabilize easily when retrieved. A key determinant of memory destabilization during retrieval is prediction error (PE). We describe a retrieval procedure for alcohol MRMs in hazardous drinkers that specifically aims to maximize the generation of PE and therefore the likelihood of MRM destabilization. The procedure requires explicitly generating the expectancy of alcohol consumption and then violating this expectancy (withholding alcohol) following the presentation of a brief set of prototypical alcohol cue images (retrieval + PE). Control procedures involve presenting the same cue images, but allow alcohol to be consumed, generating minimal PE (retrieval-no PE), or generate PE without retrieval of alcohol MRMs, by presenting orange juice cues (no retrieval + PE). Subsequently, we describe a multisensory disgust-based counterconditioning procedure to probe MRM destabilization by re-writing alcohol cue-reward associations prior to reconsolidation. This procedure pairs alcohol cues with images invoking pathogen disgust and an extremely bitter-tasting solution (denatonium benzoate), generating gustatory disgust. Following retrieval + PE, but not no retrieval + PE or retrieval-no PE, counterconditioning produces evidence of MRM rewriting as indexed by lasting reductions in alcohol cue valuation, attentional capture, and alcohol craving. PMID:29364255
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
FP-Growth is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
The role of cue detection for prospective memory development across the lifespan.
Hering, Alexandra; Wild-Wall, Nele; Gajewski, Patrick D; Falkenstein, Michael; Kliegel, Matthias; Zinke, Katharina
2016-12-01
Behavioral findings suggest an inverted U-shaped pattern of prospective memory development across the lifespan. A key mechanism underlying this development is the ability to detect cues. We examined the influence of cue detection on prospective memory, combining behavioral and electrophysiological measures, in three age groups: adolescents (12-14 years), young (19-28 years), and old adults (66-77 years). Cue detection was manipulated by varying the distinctiveness (i.e., how easy it was to detect the cue based on color) of the prospective memory cue in a semantic judgment ongoing task. Behavioral results supported the pattern of an inverted U-shape with a pronounced prospective memory decrease in old adults. Adolescents and young adults showed a prospective memory specific modulation (larger amplitudes for the cues compared to other trials) already for the N1 component. No such specific modulation was evident in old adults for the early N1 component but only at the later P3b component. Adolescents showed differential modulations of the amplitude also for irrelevant information at the P3b, suggesting less efficient processing. In terms of conceptual implications, present findings underline the importance of cue detection for prospective remembering and reveal different developmental trajectories for cue detection. Our findings suggest that cue detection is not a unitary process but consists of multiple stages corresponding to several ERP components that differentially contribute to prospective memory performance across the lifespan. In adolescents resource allocation for detecting cues seemed successful initially but less efficient at later stages; whereas we found the opposite pattern for old adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
Memory Applications Using Resonant Tunneling Diodes
NASA Astrophysics Data System (ADS)
Shieh, Ming-Huei
Resonant tunneling diodes (RTDs) producing unique folding current-voltage (I-V) characteristics have attracted considerable research attention due to their promising application in signal processing and multi-valued logic. The negative differential resistance of RTDs renders the operating points self-latching and stable. We have proposed a multiple-dimensional multiple-state RTD-based static random-access memory (SRAM) cell in which the number of stable states can be significantly increased to (N + 1)^m or more for m N-peak RTDs connected in series. The proposed cells take advantage of the hysteresis and folding I-V characteristics of RTDs. Several cell designs are presented and evaluated. A two-dimensional nine-state memory cell has been implemented and demonstrated by a breadboard circuit using two 2-peak RTDs. The hysteresis phenomenon in a series of RTDs is also further analyzed. The switch model provided in SPICE 3 can be utilized to simulate the hysteretic I-V characteristics of RTDs. A simple macro-circuit is described to model the hysteretic I-V characteristic of an RTD for circuit simulation. A new scheme for storing word-wide multiple-bit information very efficiently in a single memory cell using RTDs is proposed. An efficient and inexpensive periphery circuit to read from and write into the cell is also described. Simulation results on the design of a 3-bit memory cell scheme using one-peak RTDs are also presented. Finally, a binary transistor-less memory cell composed only of a pair of RTDs and an ordinary rectifier diode is presented and investigated. A simple means of reading and writing information from or into the memory cell is also discussed.
Fiacconi, Chris M; Milliken, Bruce
2012-08-01
The purpose of the present study was to highlight the role of location-identity binding mismatches in obscuring explicit awareness of a strong contingency. In a spatial-priming procedure, we introduced a high likelihood of location-repeat trials. Experiments 1, 2a, and 2b demonstrated that participants' explicit awareness of this contingency was heavily influenced by the local match in location-identity bindings. In Experiment 3, we sought to determine why location-identity binding mismatches produce such low levels of contingency awareness. Our results suggest that binding mismatches can interfere substantially with visual-memory performance. We attribute the low levels of contingency awareness to participants' inability to remember the critical location-identity binding in the prime on a trial-to-trial basis. These results imply a close interplay between object files and visual working memory.
False memory and the associative network of happiness.
Koo, Minkyung; Oishi, Shigehiro
2009-02-01
This research examines the relationship between individuals' levels of life satisfaction and their associative networks of happiness. Study 1 measured European Americans' degree of false memory of happiness using the Deese-Roediger-McDermott paradigm. Scores on the Satisfaction With Life Scale predicted the likelihood of false memory of happiness but not of other lure words such as sleep. In Study 2, European American participants completed an association-judgment task in which they judged the extent to which happiness and each of 15 positive emotion terms were associated with each other. Consistent with Study 1's findings, chronically satisfied individuals exhibited stronger associations between happiness and other positive emotion terms than did unsatisfied individuals. However, Koreans and Asian Americans did not exhibit such a pattern regarding their chronic level of life satisfaction (Study 3). In combination, results suggest that there are important individual and cultural differences in the cognitive structure and associative network of happiness.
A versatile design for resonant guided-wave parametric down-conversion sources for quantum repeaters
NASA Astrophysics Data System (ADS)
Brecht, Benjamin; Luo, Kai-Hong; Herrmann, Harald; Silberhorn, Christine
2016-05-01
Quantum repeaters—fundamental building blocks for long-distance quantum communication—are based on the interaction between photons and quantum memories. The photons must fulfil stringent requirements on central frequency, spectral bandwidth and purity in order for this interaction to be efficient. We present a design scheme for monolithically integrated resonant photon-pair sources based on parametric down-conversion in nonlinear waveguides, which facilitate the generation of such photons. We investigate the impact of different design parameters on the performance of our source. The generated photon spectral bandwidths can be varied between several tens of MHz up to around 1 GHz, facilitating an efficient coupling to different memories. The central frequency of the generated photons can be coarsely tuned by adjusting the pump frequency, poling period and sample temperature, and we identify stability requirements on the pump laser and sample temperature that can be readily fulfilled with off-the-shelf components. We find that our source is capable of generating high-purity photons over a wide range of photon bandwidths. Finally, the PDC emission can be frequency fine-tuned over several GHz by simultaneously adjusting the sample temperature and pump frequency. We conclude our study with demonstrating the adaptability of our source to different quantum memories.
Network-based model of the growth of termite nests
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Perna, Andrea; Fortunato, Santo; Darrouzet, Eric; Theraulaz, Guy; Jost, Christian
2015-12-01
We present a model for the growth of the transportation network inside nests of the social insect subfamily Termitinae (Isoptera, Termitidae). These nests consist of large chambers (nodes) connected by tunnels (edges). The model, based on the empirical analysis of the real nest networks combined with pruning (edge removal, either random or weighted by betweenness centrality) and a memory effect (preferential growth from the latest added chambers), successfully predicts emergent nest properties (degree distribution, size of the largest connected component, average path lengths, backbone link ratios, and local graph redundancy). The two pruning alternatives can be associated with different genera in the subfamily. A sensitivity analysis on the pruning and memory parameters indicates that Termitinae networks favor fast internal transportation over efficient defense strategies against ant predators. Our results provide an example of how complex network organization and efficient network properties can be generated from simple building rules based on local interactions, and contribute to our understanding of the mechanisms that come into play in the formation of termite networks and of biological transportation networks in general.
Romero-Martínez, A; González-Bono, E; Salvador, A; Moya-Albiol, L
2016-01-01
Caring for offspring diagnosed with a chronic psychological disorder such as autism spectrum disorder (ASD) is used in research as a model of chronic stress. This chronic stress has been reported to have deleterious effects on caregivers' cognition, particularly in verbal declarative memory. Moreover, such cognitive decline may be mediated by testosterone (T) levels and negative affect, understood as depressive mood together with high anxiety and anger. This study aimed to compare declarative memory function in middle-aged women who were caregivers for individuals with ASD (n = 24; mean age = 45) and female controls (n = 22; mean age = 45), using a standardised memory test (Rey's Auditory Verbal Learning Test). It also sought to examine the role of care recipient characteristics, negative mood and T levels in memory impairments. ASD caregivers were highly sensitive to proactive interference and verbal forgetting. In addition, they had higher negative affect and T levels, both of which have been associated with poorer verbal memory performance. Moreover, the number of years of caregiving affected memory performance and negative affect, especially, in terms of anger feelings. On the other hand, T levels in caregivers had a curvilinear relationship with verbal memory performance; that is, increases in T were associated with improvements in verbal memory performance up to a certain point, but subsequently, memory performance decreased with increasing T. Chronic stress may produce disturbances in mood and hormonal levels, which in turn might increase the likelihood of developing declarative memory impairments although caregivers do not show a generalised decline in memory. These findings should be taken into account for understanding the impact of cognitive impairments on the ability to provide optimal caregiving.
A 3D approximate maximum likelihood localization solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-09-23
A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and vocalizing marine mammals to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives and support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
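A toy version of such an approximate maximum likelihood solver, assuming Gaussian time-difference-of-arrival (TDOA) errors (so the MLE reduces to least squares over candidate locations) and an entirely hypothetical hydrophone geometry:

```python
import numpy as np

c = 1500.0  # nominal speed of sound in water, m/s
# hypothetical receiving array and true source position (metres)
hyd = np.array([[0., 0., 0.], [100., 0., 0.],
                [0., 100., 0.], [100., 100., 0.], [50., 50., 50.]])
src = np.array([30., 40., 10.])

# noiseless arrival times; TDOAs are measured relative to hydrophone 0
t = np.linalg.norm(hyd - src, axis=1) / c
tdoa = t[1:] - t[0]

# approximate ML: minimize squared TDOA residuals over a candidate grid
xs, ys, zs = np.arange(0., 101., 2.), np.arange(0., 101., 2.), np.arange(0., 51., 2.)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

d = np.linalg.norm(pts[:, None, :] - hyd[None, :, :], axis=2) / c
res = (d[:, 1:] - d[:, :1]) - tdoa          # predicted minus measured TDOA
est = pts[np.argmin(np.sum(res ** 2, axis=1))]
```

A production solver would replace the grid with a proper nonlinear optimizer and weight residuals by measurement uncertainty, but the objective is the same.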
Simpson, Jared
2018-01-24
Wellcome Trust Sanger Institute's Jared Simpson on Memory efficient sequence analysis using compressed data structures at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
Creative Classroom Assignment Through Database Management.
ERIC Educational Resources Information Center
Shah, Vivek; Bryant, Milton
1987-01-01
The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner is described. The FSS, developed using dBASE III, requires an IBM compatible microcomputer with a minimum of 256K memory. (MLW)
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication are implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and a discussion of some of the critical implementation details are given.
ClimateSpark: An in-memory distributed computing framework for big climate data analytics
NASA Astrophysics Data System (ADS)
Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei
2018-06-01
The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built over the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.
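The chunking-plus-index idea can be illustrated with a minimal in-memory sketch. The class name, min/max summaries, and query below are illustrative, not ClimateSpark's actual ClimateRDD API:

```python
import numpy as np

class ChunkedArray:
    # minimal sketch: fixed-size chunks along the time axis, each with a
    # (min, max) summary that lets queries skip irrelevant chunks
    def __init__(self, data, chunk=4):
        self.index = []
        for t in range(0, data.shape[0], chunk):
            block = data[t:t + chunk]
            self.index.append(((t, t + block.shape[0]),
                               block.min(), block.max(), block))

    def query_max(self, t0, t1):
        # visit only chunks that overlap [t0, t1) and whose summary
        # says they could still raise the running maximum
        best = -np.inf
        for (a, b), lo, hi, block in self.index:
            if b <= t0 or a >= t1 or hi <= best:
                continue
            best = max(best, block[max(t0 - a, 0):t1 - a].max())
        return best

data = np.arange(20.0).reshape(20, 1)   # toy time series, 20 steps
arr = ChunkedArray(data)
```

In a distributed setting the summaries live on the driver while the chunk payloads stay partitioned across workers, which is what gives the data locality reported in the abstract.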
NASA Astrophysics Data System (ADS)
Rodionov, A. A.; Turchin, V. I.
2017-06-01
We propose a new method of signal processing in antenna arrays, which we call Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramer-Rao lower bound, and is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be efficiently used for estimating the time dependence of the useful signal.
Low lifetime stress exposure is associated with reduced stimulus–response memory
Goldfarb, Elizabeth V.; Shields, Grant S.; Daw, Nathaniel D.; Slavich, George M.; Phelps, Elizabeth A.
2017-01-01
Exposure to stress throughout life can cumulatively influence later health, even among young adults. The negative effects of high cumulative stress exposure are well-known, and a shift from episodic to stimulus–response memory has been proposed to underlie forms of psychopathology that are related to high lifetime stress. At the other extreme, effects of very low stress exposure are mixed, with some studies reporting that low stress leads to better outcomes, while others demonstrate that low stress is associated with diminished resilience and negative outcomes. However, the influence of very low lifetime stress exposure on episodic and stimulus–response memory is unknown. Here we use a lifetime stress assessment system (STRAIN) to assess cumulative lifetime stress exposure and measure memory performance in young adults reporting very low and moderate levels of lifetime stress exposure. Relative to moderate levels of stress, very low levels of lifetime stress were associated with reduced use and retention (24 h later) of stimulus–response (SR) associations, and a higher likelihood of using context memory. Further, computational modeling revealed that participants with low levels of stress exhibited worse expression of memory for SR associations than those with moderate stress. These results demonstrate that very low levels of stress exposure can have negative effects on cognition. PMID:28298555
Source memory that encoding was self-referential: the influence of stimulus characteristics.
Durbin, Kelly A; Mitchell, Karen J; Johnson, Marcia K
2017-10-01
Decades of research suggest that encoding information with respect to the self improves memory (self-reference effect, SRE) for items (item SRE). The current study focused on how processing information in reference to the self affects source memory for whether an item was self-referentially processed (a source SRE). Participants self-referentially or non-self-referentially encoded words (Experiment 1) or pictures (Experiment 2) that varied in valence (positive, negative, neutral). Relative to non-self-referential processing, self-referential processing enhanced item recognition for all stimulus types (an item SRE), but it only enhanced source memory for positive words (a source SRE). In fact, source memory for negative and neutral pictures was worse for items processed self-referentially than non-self-referentially. Together, the results suggest that item SRE and source SRE (e.g., remembering an item was encoded self-referentially) are not necessarily the same across stimulus types (e.g., words, pictures; positive, negative). While an item SRE may depend on the overall likelihood the item generates any association, the enhancing effects of self-referential processing on source memory for self-referential encoding may depend on how embedded a stimulus becomes in one's self-schema, and that depends, in part, on the stimulus' valence and format. Self-relevance ratings during encoding provide converging evidence for this interpretation.
Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing
2014-01-01
To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding.
The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 103 photons. When the intensity levels are well-separated and 104 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
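A minimal sketch of the low-excitation model used above: the interphoton times of a Markov-modulated Poisson process (MMPP) have likelihood L = π (∏ᵢ exp((Q − Λ)τᵢ) Λ) 1, evaluated with matrix exponentials. The parameters are illustrative, and the uniform initial vector is a simplification (the exact choice is the stationary post-arrival distribution):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

# hypothetical two-state MMPP
Q = np.array([[-0.5, 0.5], [0.5, -0.5]])   # state-switching generator
lam = np.array([1.0, 10.0])                # photon emission rate per state

def simulate(n_photons):
    # competing exponentials: the next event is a photon or a state switch
    t, state, arrivals = 0.0, 0, []
    while len(arrivals) < n_photons:
        total = lam[state] - Q[state, state]
        t += rng.exponential(1.0 / total)
        if rng.random() < lam[state] / total:
            arrivals.append(t)
        else:
            state = 1 - state
    return np.diff(arrivals)              # interphoton times

def log_lik(taus, Q, lam):
    # L = pi (prod_i expm((Q - Lam) tau_i) Lam) 1, accumulated in log space
    Lam = np.diag(lam)
    v, ll = np.array([0.5, 0.5]), 0.0
    for tau in taus:
        v = v @ expm((Q - Lam) * tau) @ Lam
        s = v.sum()
        ll += np.log(s)
        v /= s                            # renormalize to avoid underflow
    return ll

taus = simulate(500)
ll_true = log_lik(taus, Q, lam)
ll_poisson = log_lik(taus, Q, np.array([5.0, 5.0]))  # degenerate 1-rate model
```

Maximizing `log_lik` over the parameters, and penalizing it per model via AIC/BIC, is the binning-free model-selection procedure the abstract studies.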
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization (EM) algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
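A building block of such an M-step is maximizing the right-censored Weibull likelihood, where observed events contribute the density and censored observations the survival function. The sketch below is not the paper's cure-rate EM; it is a plain censored-Weibull fit on simulated data with assumed true parameters:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, observed):
    """Negative right-censored Weibull log-likelihood (log-parametrized)."""
    k, lam = np.exp(params)            # shape, scale (positivity enforced)
    z = (t / lam) ** k
    # events contribute log f(t); censored cases contribute log S(t) = -z
    ll = np.sum(observed * (np.log(k) - k * np.log(lam) + (k - 1) * np.log(t)) - z)
    return -ll

rng = np.random.default_rng(1)
x = 3.0 * rng.weibull(2.0, 2000)       # true shape 2, scale 3
c = 4.0                                # fixed right-censoring time
t, observed = np.minimum(x, c), (x <= c).astype(float)

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(t, observed),
               method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(k_hat, lam_hat)                  # close to the true (2, 3)
```

In the actual EM for cure rate models, this kind of maximization runs inside the M-step with weights supplied by the E-step.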
Coherent spin control of a nanocavity-enhanced qubit in diamond
Li, Luozhou; Lu, Ming; Schroder, Tim; ...
2015-01-28
A central aim of quantum information processing is the efficient entanglement of multiple stationary quantum memories via photons. Among solid-state systems, the nitrogen-vacancy centre in diamond has emerged as an excellent optically addressable memory with second-scale electron spin coherence times. Recently, quantum entanglement and teleportation have been shown between two nitrogen-vacancy memories, but scaling to larger networks requires more efficient spin-photon interfaces such as optical resonators. Here we report such nitrogen-vacancy nanocavity systems in the strong Purcell regime with optical quality factors approaching 10,000 and electron spin coherence times exceeding 200 µs using a silicon hard-mask fabrication process. This spin-photon interface is integrated with on-chip microwave striplines for coherent spin control, providing an efficient quantum memory for quantum networks.
Booster vaccinations: can immunologic memory outpace disease pathogenesis?
Pichichero, Michael E
2009-12-01
Almost all current vaccines work by the induction of antibodies in serum or on the mucosa to block adherence of pathogens to epithelial cells or interfere with microbial invasion of the bloodstream. However, antibody levels usually decline after vaccination to undetectable amounts if further vaccination does not occur. Persistence of vaccine-induced antibodies usually goes well beyond the time when they should have decayed to undetectable levels because of ongoing "natural" boosting or other immunologic mechanisms. The production of memory B and T cells is of clear importance, but the likelihood that a memory response will be fast enough in the absence of a protective circulating antibody level likely depends on the pace of pathogenesis of a specific organism. This concept is discussed with regard to Haemophilus influenzae type b, Streptococcus pneumoniae, and Neisseria meningitidis; hepatitis A and B; diphtheria, tetanus, and pertussis; polio, measles, mumps, rubella, and varicella; rotavirus; and human papilloma virus. With infectious diseases for which the pace of pathogenesis is less rapid, some individuals will contract infection before the memory response is fully activated and implemented. With infectious diseases for which the pace of pathogenesis is slow, immune memory should be sufficient to prevent disease.
The time course of ventrolateral prefrontal cortex involvement in memory formation.
Machizawa, Maro G; Kalla, Roger; Walsh, Vincent; Otten, Leun J
2010-03-01
Human neuroimaging studies have implicated a number of brain regions in long-term memory formation. Foremost among these is ventrolateral prefrontal cortex. Here, we used double-pulse transcranial magnetic stimulation (TMS) to assess whether the contribution of this part of cortex is crucial for laying down new memories and, if so, to examine the time course of this process. Healthy adult volunteers performed an incidental encoding task (living/nonliving judgments) on sequences of words. In separate series, the task was performed either on its own or while TMS was applied to one of two sites of experimental interest (left/right anterior inferior frontal gyrus) or a control site (vertex). TMS pulses were delivered at 350, 750, or 1,150 ms following word onset. After a delay of 15 min, memory for the items was probed with a recognition memory test including confidence judgments. TMS to all three sites nonspecifically affected the speed and accuracy with which judgments were made during the encoding task. However, only TMS to prefrontal cortex affected later memory performance. Stimulation of left or right inferior frontal gyrus at all three time points reduced the likelihood that a word would later be recognized by a small, but significant, amount (approximately 4%). These findings indicate that bilateral ventrolateral prefrontal cortex plays an essential role in memory formation, exerting its influence between ≥350 and 1,150 ms after an event is encountered.
Hippocampal interictal epileptiform activity disrupts cognition in humans
Kleen, Jonathan K.; Scott, Rod C.; Holmes, Gregory L.; Roberts, David W.; Rundle, Melissa M.; Testorf, Markus; Lenck-Santini, Pierre-Pascal
2013-01-01
Objective: We investigated whether interictal epileptiform discharges (IED) in the human hippocampus are related to impairment of specific memory processes, and which characteristics of hippocampal IED are most associated with memory dysfunction. Methods: Ten patients had depth electrodes implanted into their hippocampi for preoperative seizure localization. EEG was recorded during 2,070 total trials of a short-term memory task, with memory processing categorized into encoding, maintenance, and retrieval. The influence of hippocampal IED on these processes was analyzed and adjusted to account for individual differences between patients. Results: Hippocampal IED occurring in the memory retrieval period decreased the likelihood of a correct response when they were contralateral to the seizure focus (p < 0.05) or bilateral (p < 0.001). Bilateral IED during the memory maintenance period had a similar effect (p < 0.01), particularly with spike-wave complexes of longer duration (p < 0.01). IED during encoding had no effect, and reaction time was also unaffected by IED. Conclusions: Hippocampal IED in humans may disrupt memory maintenance and retrieval, but not encoding. The particular effects of bilateral IED and those contralateral to the seizure focus may relate to neural compensation in the more functional hemisphere. This study provides biological validity to animal models in the study of IED-related transient cognitive impairment. Moreover, it strengthens the argument that IED may contribute to cognitive impairment in epilepsy depending upon when and where they occur. PMID:23685931
Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian
2018-05-12
Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
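The label-propagation update underlying LPLNS can be sketched in a few lines. The linear-neighborhood similarity construction is simplified here to a row-normalized similarity matrix, and all names and numbers are illustrative, not the paper's data:

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, tol=1e-10):
    """Iterate F <- alpha*W@F + (1-alpha)*Y on a row-normalized similarity W."""
    F = Y.copy()
    while True:
        F_new = alpha * W @ F + (1 - alpha) * Y
        if np.abs(F_new - F).max() < tol:
            return F_new
        F = F_new

# Toy similarity graph over 4 nodes (e.g. miRNAs), 2 label columns (e.g. diseases)
S = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
W = S / S.sum(axis=1, keepdims=True)                    # row-normalize
Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], float)   # known associations

F = label_propagation(W, Y)
# The fixed point matches the closed form (1-alpha)*(I - alpha*W)^-1 @ Y
F_closed = 0.1 * np.linalg.solve(np.eye(4) - 0.9 * W, Y)
print(np.allclose(F, F_closed, atol=1e-8))
```

Unlabeled nodes (rows of zeros in `Y`) receive scores through their neighbors, which is how new candidate miRNA-disease associations are ranked.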
Gold, currencies and market efficiency
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav; Vosvrda, Miloslav
2016-05-01
Gold and currency markets form a unique pair with specific interactions and dynamics. We focus on the efficiency ranking of gold markets with respect to the currency of purchase. By utilizing the Efficiency Index (EI) based on fractal dimension, approximate entropy and long-term memory on a wide portfolio of 142 gold price series for different currencies, we construct the efficiency ranking based on the extended EI methodology we provide. Rather unexpected results are uncovered, as the gold prices in major currencies lie among the least efficient ones whereas very minor currencies are among the most efficient ones. We argue that such counterintuitive results can be partly attributed to a unique period of examination (2011-2014) characterized by quantitative easing and rather unorthodox monetary policies, together with the investigated illegal collusion of major foreign exchange market participants, as well as some other factors discussed in some detail.
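An Efficiency Index of this kind can be sketched as the distance of estimated efficiency measures from their efficient-market values, each normalized by its range. This is a simplification of the extended EI in the abstract, and the measure estimates below are placeholder numbers, not results for any real gold series:

```python
import numpy as np

def efficiency_index(measures):
    """measures: list of (estimate, efficient_value, range_of_measure)."""
    return np.sqrt(sum(((m - m_star) / r) ** 2 for m, m_star, r in measures))

# Hypothetical estimates for one price series:
# Hurst exponent (efficient value 0.5), fractal dimension (efficient value 1.5),
# and a normalized entropy measure (efficient value 1), each with unit range
ei = efficiency_index([(0.58, 0.5, 1.0), (1.44, 1.5, 1.0), (0.90, 1.0, 1.0)])
print(ei)   # 0 would indicate a perfectly efficient market

assert efficiency_index([(0.5, 0.5, 1.0)]) == 0.0   # efficient series scores 0
```

Ranking a portfolio of series by this scalar is then straightforward: the smaller the EI, the closer the series is to the efficient-market benchmark.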
NASA Astrophysics Data System (ADS)
Shi, K. X.; Xu, H. Y.; Wang, Z. Q.; Zhao, X. N.; Liu, W. Z.; Ma, J. G.; Liu, Y. C.
2017-11-01
Resistive-switching memory with ultralow-power consumption is a very promising technology for next-generation data storage and high-energy-efficiency neurosynaptic chips. Herein, Ta2O5-x-based multilevel memories with ultralow-power consumption and good data retention were achieved by simple Gd-doping. The introduced Gd ions, acting as oxygen trappers, not only suppress the generation of oxygen vacancy defects and greatly increase the Ta2O5-x resistance but also raise the oxygen-ion migration barrier. As a result, the memory cells can operate at an ultralow current of 1 μA with an extrapolated retention time of >10 years at 85 °C and high switching speeds of 10 ns/40 ns for the SET/RESET processes. The energy consumption of the device is as low as 60 fJ/bit, which is comparable to emerging ultralow-energy-consumption (<100 fJ/bit) memory devices.
NASA Astrophysics Data System (ADS)
Bashash, Saeid; Jalili, Nader
2007-02-01
Piezoelectrically-driven nanostagers have limited performance in a variety of feedforward and feedback positioning applications because of their nonlinear hysteretic response to input voltage. The hysteresis phenomenon is well known for its complex and multi-path behavior. To realize the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intelligent properties of hysteresis with the effects of non-local memories are discussed here. Through performing a set of experiments on a piezoelectrically-driven nanostager with a high resolution capacitive position sensor, it is shown that for the precise prediction of the hysteresis path, certain memory units are required to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for the precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the ever-present nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect, if memory units are sufficiently chosen for the inverse model.
Feedforward hysteresis compensation in trajectory control of piezoelectrically-driven nanostagers
NASA Astrophysics Data System (ADS)
Bashash, Saeid; Jalili, Nader
2006-03-01
Complex structural nonlinearities of piezoelectric materials drastically degrade their performance in a variety of micro- and nano-positioning applications. From the precision positioning and control perspective, the multi-path, time-history-dependent hysteresis phenomenon is the nonlinearity of greatest concern in piezoelectric actuators. To realize the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intelligent properties of hysteresis with the effects of non-local memories are discussed. Through performing a set of experiments on a piezoelectrically-driven nanostager with a high resolution capacitive position sensor, it is shown that for the precise prediction of the hysteresis path, certain memory units are required to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for the precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the system's ever-present nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect if memory units are sufficiently chosen for the inverse model.
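One simple way the non-local memory units described above can be represented is a stack of past turning points with the classical wiping-out rule: a new input excursion that passes beyond an earlier reversal erases that reversal (and its partner) from memory. This is an illustrative Preisach-style sketch, not the authors' constitutive model:

```python
def memory_state(reversals):
    """Track hysteresis memory as a stack of input turning points.

    `reversals` lists the input's turning points, alternating
    rise/fall with the first move rising.
    """
    stack = [reversals[0]]
    rising = True
    for x in reversals[1:]:
        # Wiping-out rule: discard reversal pairs dominated by the new input
        while len(stack) >= 2 and (
            (rising and stack[-2] <= x) or (not rising and stack[-2] >= x)
        ):
            stack.pop()
            stack.pop()
        stack.append(x)
        rising = not rising
    return stack

# Inner loops stay in memory until a larger excursion wipes them out
print(memory_state([0, 5, 2, 4, 3]))      # all five turning points retained
print(memory_state([0, 5, 2, 4, 3, 6]))   # the rise to 6 erases the inner loops
```

The stored turning points are exactly the data an inverse model needs to predict which branch of the hysteresis curve the actuator will follow next.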
Spin-photon interface and spin-controlled photon switching in a nanobeam waveguide
NASA Astrophysics Data System (ADS)
Javadi, Alisa; Ding, Dapeng; Appel, Martin Hayhurst; Mahmoodian, Sahand; Löbl, Matthias Christian; Söllner, Immo; Schott, Rüdiger; Papon, Camille; Pregnolato, Tommaso; Stobbe, Søren; Midolo, Leonardo; Schröder, Tim; Wieck, Andreas Dirk; Ludwig, Arne; Warburton, Richard John; Lodahl, Peter
2018-05-01
The spin of an electron is a promising memory state and qubit. Connecting spin states that are spatially far apart will enable quantum nodes and quantum networks based on the electron spin. Towards this goal, an integrated spin-photon interface would be a major leap forward as it combines the memory capability of a single spin with the efficient transfer of information by photons. Here, we demonstrate such an efficient and optically programmable interface between the spin of an electron in a quantum dot and photons in a nanophotonic waveguide. The spin can be deterministically prepared in the ground state with a fidelity of up to 96%. Subsequently, the system is used to implement a single-spin photonic switch, in which the spin state of the electron directs the flow of photons through the waveguide. The spin-photon interface may enable on-chip photon-photon gates, single-photon transistors and the efficient generation of a photonic cluster state.
Xiong, Lilin; Huang, Xiao; Li, Jie; Mao, Peng; Wang, Xiang; Wang, Rubing; Tang, Meng
2018-06-13
Indoor physical environments appear to influence learning efficiency. To improve learning efficiency, environmental scenarios need to be designed for occupants engaged in different learning tasks. However, how learning efficiency is affected by the indoor physical environment across task types is still not well understood. The present study aims to explore the impacts of three physical environmental factors (i.e., temperature, noise, and illuminance) on learning efficiency according to different types of tasks, including perception-, memory-, problem-solving-, and attention-oriented tasks. A 3 × 4 × 3 full factorial design experiment was employed in a university classroom with 10 subjects recruited. Environmental scenarios were generated based on different levels of temperature (17 °C, 22 °C, and 27 °C), noise (40 dB(A), 50 dB(A), 60 dB(A), and 70 dB(A)), and illuminance (60 lx, 300 lx, and 2200 lx). Accuracy rate (AC), reaction time (RT), and the final performance indicator (PI) were used to quantify learning efficiency. The results showed that ambient temperature, noise, and illuminance exerted significant main effects on learning efficiency across the four task types. Significant concurrent effects of the three factors on final learning efficiency were found in all tasks except the problem-solving-oriented task. The optimal environmental scenarios for top learning efficiency were further identified under different environmental interactions. The highest learning efficiency came in thermoneutral, relatively quiet, and bright conditions in the perception-oriented task. Subjects performed best under warm, relatively quiet, and moderately lit conditions when recalling images in the memory-oriented task. Learning efficiency peaked in a thermoneutral, fairly quiet, and moderately lit environment during problem-solving, and in a cool, fairly quiet, and bright environment for the attention-oriented task.
The study provides guidance for building users to conduct effective environmental interventions with simultaneous control of ambient temperature, noise, and illuminance. It contributes to creating the most suitable indoor physical environment for improving occupants' learning efficiency according to different task types. The findings could further supplement present indoor-environment-related standards or norms by providing an empirical reference on environmental interactions.
An efficient photogrammetric stereo matching method for high-resolution images
NASA Astrophysics Data System (ADS)
Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao
2016-12-01
Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data (SIMD) instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement. Furthermore, precise airborne laser scanner data for one data set is used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
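The core SGM recurrence aggregates matching costs along scanline paths. Below is a sketch of the standard one-path recurrence in plain NumPy (not the paper's SIMD implementation; `P1`/`P2` are the usual small/large smoothness penalties):

```python
import numpy as np

def aggregate_path(C, P1=10, P2=120):
    """SGM cost aggregation along one 1-D path.

    C: (num_pixels, num_disparities) matching-cost slice.
    L[p,d] = C[p,d] + min(L[p-1,d], L[p-1,d-1]+P1, L[p-1,d+1]+P1,
                          min_d' L[p-1,d'] + P2) - min_d' L[p-1,d']
    """
    n, D = C.shape
    L = np.empty_like(C, dtype=float)
    L[0] = C[0]
    big = np.inf
    for p in range(1, n):
        prev = L[p - 1]
        m = prev.min()
        shifted_dn = np.r_[big, prev[:-1]]   # L[p-1, d-1]
        shifted_up = np.r_[prev[1:], big]    # L[p-1, d+1]
        L[p] = C[p] + np.minimum.reduce(
            [prev, shifted_dn + P1, shifted_up + P1, np.full(D, m + P2)]
        ) - m
    return L

rng = np.random.default_rng(0)
C = rng.integers(0, 50, size=(8, 16)).astype(float)
L = aggregate_path(C)
# With zero penalties the smoothness term cancels and L reduces to C
print(np.allclose(aggregate_path(C, 0, 0), C))
```

The subtraction of `min_d' L[p-1,d']` keeps the aggregated costs bounded, which is what makes fixed-width integer storage (and hence SIMD processing) practical in real implementations.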
Improving estimates of genetic maps: a meta-analysis-based approach.
Stewart, William C L
2007-07-01
Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
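The variance-weighted combination described above can be sketched as a standard inverse-variance meta-analytic average of independent map estimates, per marker position. The numbers below are made up for illustration:

```python
import numpy as np

def combine(estimates, variances):
    """Inverse-variance weighted combination of independent estimates."""
    w = 1.0 / np.asarray(variances, float)
    est = np.sum(w * estimates) / np.sum(w)
    var = 1.0 / np.sum(w)
    return est, var

# Two independent estimates of one inter-marker distance (cM), equal variance
est, var = combine([10.0, 14.0], [1.0, 1.0])
print(est, var)        # 12.0 0.5

# A more precise study pulls the combined estimate toward itself
est2, _ = combine([10.0, 14.0], [0.25, 1.0])
print(est2)            # 10.8
```

Note the combined variance is always smaller than any individual one, which is the source of the efficiency gain over using a single map estimate.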
pyCTQW: A continuous-time quantum walk simulator on distributed memory computers
NASA Astrophysics Data System (ADS)
Izaac, Josh A.; Wang, Jingbo B.
2015-01-01
In the general field of quantum information and computation, quantum walks are playing an increasingly important role in constructing physical models and quantum algorithms. We have recently developed a distributed memory software package pyCTQW, with an object-oriented Python interface, that allows efficient simulation of large multi-particle CTQW (continuous-time quantum walk)-based systems. In this paper, we present an introduction to the Python and Fortran interfaces of pyCTQW, discuss various numerical methods of calculating the matrix exponential, and demonstrate the performance behavior of pyCTQW on a distributed memory cluster. In particular, the Chebyshev and Krylov-subspace methods for calculating the quantum walk propagation are provided, as well as methods for visualization and data analysis.
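A CTQW propagates an initial state as ψ(t) = e^(−iHt) ψ(0), with H the graph's adjacency (or Laplacian) matrix. The sketch below uses exact eigendecomposition for a small path graph; pyCTQW's Chebyshev and Krylov-subspace methods play the same role for large sparse H (this is not pyCTQW's API):

```python
import numpy as np

def ctqw(H, psi0, t):
    """Evolve psi0 under exp(-i*H*t) via eigendecomposition of Hermitian H."""
    E, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

# Adjacency matrix of a path graph with 21 vertices
n = 21
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
psi0 = np.zeros(n, complex)
psi0[n // 2] = 1.0                  # walker starts at the center vertex

p = np.abs(ctqw(H, psi0, 3.0)) ** 2
print(p.sum())                      # unitary evolution conserves probability
print(np.allclose(p, p[::-1]))      # symmetric spreading from the center
```

Dense eigendecomposition is O(n³), which is why distributed sparse propagators (Chebyshev, Krylov) are needed for the large multi-particle systems the package targets.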
Nanophotonic photon echo memory based on rare-earth-doped crystals
NASA Astrophysics Data System (ADS)
Zhong, Tian; Kindem, Jonathan; Miyazono, Evan; Faraon, Andrei; Caltech nano quantum optics Team
2015-03-01
Rare earth ions (REIs) are promising candidates for implementing solid-state quantum memories and quantum repeater devices. Their high spectral stability and long coherence times make REIs a good choice for integration in an on-chip quantum nano-photonic platform. We report the coupling of the 883 nm transition of Neodymium (Nd) to a Yttrium orthosilicate (YSO) photonic crystal nano-beam resonator, achieving Purcell enhanced spontaneous emission by 21 times and increased optical absorption. Photon echoes were observed in nano-beams of different doping concentrations, yielding optical coherence times T2 up to 80 μs that are comparable to unprocessed bulk samples. This indicates the remarkable coherence properties of Nd are preserved during nanofabrication, therefore opening the possibility of efficient on-chip optical quantum memories. The nano-resonator, with a mode volume of 1.6(λ/n)³, was fabricated using a focused ion beam, and a quality factor of 3200 was measured. Purcell enhanced absorption of 80% by an ensemble of ~1 × 10⁶ ions in the resonator was measured, which fulfills the cavity impedance matching condition that is necessary to achieve quantum storage of photons with unity efficiency.
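For context, the ideal Purcell factor implied by the reported cavity figures can be checked against the textbook formula F = (3/4π²)(λ/n)³ Q/V. With Q = 3200 and V = 1.6(λ/n)³ it gives an upper bound of roughly 150, well above the measured 21-fold enhancement, as expected for ensemble emitters that are not all optimally positioned and aligned in the cavity mode:

```python
import math

def purcell_factor(Q, V_norm):
    """Ideal Purcell factor; V_norm is the mode volume in units of (lambda/n)^3."""
    return 3.0 / (4.0 * math.pi ** 2) * Q / V_norm

F = purcell_factor(Q=3200, V_norm=1.6)
print(F)   # ~152: the ideal bound for a perfectly placed, aligned emitter
```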
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
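The cache-blocking idea behind the optimized transpose can be sketched in miniature: process the array in small tiles so that each tile's reads and writes stay cache-resident. This is illustrative Python for the 2-D case; the actual library handles N-dimensional tensors in C/CUDA:

```python
import numpy as np

def blocked_transpose(A, tile=64):
    """Out-of-place 2-D transpose processed tile by tile."""
    m, n = A.shape
    B = np.empty((n, m), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            # Each tile touches a small, contiguous region of A and of B,
            # so both passes stay within cache for a well-chosen tile size
            B[j:j + tile, i:i + tile] = A[i:i + tile, j:j + tile].T
    return B

A = np.arange(300 * 500).reshape(300, 500)
assert np.array_equal(blocked_transpose(A), A.T)
```

A naive scattering transpose strides through one of the two arrays with a large stride on every element, which is exactly the memory-access pattern the paper's algorithms are designed to avoid.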
Nonparametric spirometry reference values for Hispanic Americans.
Glenn, Nancy L; Brown, Vanessa M
2011-02-01
Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage: they are model free, but share the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to normal theory intervals. Power and efficiency studies agree with previously published theoretical results.
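The empirical likelihood machinery for a mean can be sketched as follows: profile out the observation weights via a Lagrange multiplier, then invert the −2 log likelihood ratio against a χ²(1) quantile to get a model-free interval. This is Owen's textbook construction, not the paper's spirometry-specific code, and the data are placeholders:

```python
import numpy as np
from scipy.optimize import brentq

def el_stat(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (Owen's construction)."""
    x = np.asarray(x, float)
    n, d = len(x), x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                   # mu outside the convex hull of the data
    # Bracket for the Lagrange multiplier keeping all weights in (0, 1]
    lo = (1.0 / n - 1.0) / d.max() + 1e-12
    hi = (1.0 / n - 1.0) / d.min() - 1e-12
    g = lambda lam: np.sum(d / (1.0 + lam * d))   # profiled estimating equation
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(el_stat(x, x.mean()))    # 0 at the sample mean
print(el_stat(x, 2.5) > 0)     # grows away from it; compare to chi2(1) for a CI
```

A confidence interval is then the set of `mu` values with `el_stat(x, mu)` below the χ²(1) critical value (3.84 at 95%), typically found by scanning or root-finding on `mu`.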
NASA Astrophysics Data System (ADS)
Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun
2014-12-01
The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained due to data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time. This method occupies very little on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure. This method loads the context into the on-chip context memory at the boot phase. Broadcasting the pointer of a set of contexts changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which results in major restrictions on application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address the above context issues by buffering the context inside the CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory, and the complexity of the applications running on the CGRA is no longer restricted by the size of the on-chip context memory. For the data bandwidth issue, data preloading is the most frequently used approach to hide input data latency and speed up the data transmission process. Rather than fundamentally reducing the amount of input data, it overlaps data transfer with computation. However, data preloading cannot work efficiently as the reconfigurable array scales up, because data transmission becomes the critical path. This paper also presents a Hierarchical Data Memory (HDM) architecture as a solution to the efficiency problem. In this architecture, high internal bandwidth is provided to buffer both reused input data and intermediate data.
The HDM architecture relieves the external memory of the data transfer burden, so the performance is significantly improved. As a result of using PCC and HDM, experiments running mainstream video decoding programs achieved performance improvements of 13.57%-19.48% with a reasonable memory size. Therefore, 1080p@35.7fps H.264 high profile video decoding can be achieved on the PCC and HDM architecture at a 200 MHz working frequency. Furthermore, the size of the on-chip context memory no longer restricts complex applications, which are efficiently executed on the PCC and HDM architecture.
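The payoff of caching context inside the array can be illustrated with a toy direct-mapped cache for context words. This is purely illustrative; PCC's prediction and replacement policies are more involved than this:

```python
class DirectMappedCache:
    """Toy direct-mapped cache tracking hits and misses for context fetches."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.tags = [None] * num_lines
        self.hits = self.misses = 0

    def access(self, addr):
        idx, tag = addr % self.num_lines, addr // self.num_lines
        if self.tags[idx] == tag:
            self.hits += 1
            return True
        self.tags[idx] = tag            # miss: fetch from off-chip context memory
        self.misses += 1
        return False

cache = DirectMappedCache(num_lines=8)
# A loop kernel re-requests the same few context words every iteration
for _ in range(100):
    for ctx_addr in (0, 1, 2, 3):
        cache.access(ctx_addr)
print(cache.hits, cache.misses)   # 396 hits, 4 misses: reuse makes caching pay
```

Loop-heavy kernels reuse a small working set of contexts, so a small on-array cache captures nearly all fetches, which is the effect the PCC exploits without storing the whole context on chip.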
Some comments on Hurst exponent and the long memory processes on capital markets
NASA Astrophysics Data System (ADS)
Sánchez Granero, M. A.; Trinidad Segovia, J. E.; García Pérez, J.
2008-09-01
The analysis of long memory processes in capital markets has been an important topic in finance, since the existence of market memory would imply rejection of the efficient market hypothesis. The study of these processes in finance is realized through the Hurst exponent, and the most classical method applied is R/S analysis. In this paper we discuss the efficiency of this methodology as well as some of its more important modifications for detecting long memory. We also propose the application of a classical geometrical method with minor modifications, and we compare both approaches.
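A compact version of the classical R/S estimate of the Hurst exponent is sketched below (the textbook algorithm, without the small-sample bias corrections that motivate the modifications discussed in the paper):

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent by classical rescaled-range (R/S) analysis."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    w = min_window
    while w <= n // 2:
        vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())          # cumulative deviations
            s = seg.std()
            if s > 0:
                vals.append((dev.max() - dev.min()) / s)   # rescaled range R/S
        sizes.append(w)
        rs.append(np.mean(vals))
        w *= 2
    # The Hurst exponent is the slope of log(R/S) against log(window size)
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
H = hurst_rs(rng.standard_normal(4096))
print(H)   # near 0.5 for an uncorrelated series (upward-biased at small windows)
```

H ≈ 0.5 indicates no long memory, H > 0.5 persistence, H < 0.5 anti-persistence; the finite-sample upward bias of plain R/S is precisely why corrected variants are preferred for market data.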
A ripple-spreading genetic algorithm for the aircraft sequencing problem.
Hu, Xiao-Bing; Di Paolo, Ezequiel A
2011-01-01
When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.
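The ripple-spreading decoding step can be sketched as: project the aircraft to points in an artificial space, then read the landing order off the ripple's arrival times from an evolved epicenter. This is a bare-bones decode; the actual model uses more ripple parameters than an epicenter alone:

```python
import numpy as np

def decode_sequence(points, epicenter):
    """Order points by ripple arrival time (distance from the epicenter)."""
    d = np.linalg.norm(points - epicenter, axis=1)
    return np.argsort(d)

# Aircraft projected as points into an artificial 2-D space
points = np.array([[0.0, 0.0], [3.0, 0.0], [1.0, 0.0], [0.0, 2.0]])

# The GA evolves real-valued ripple parameters; any epicenter decodes to a
# feasible permutation, so no repair step or permutation encoding is needed
print(decode_sequence(points, np.array([0.0, 0.0])))   # order [0 2 3 1]
print(decode_sequence(points, np.array([3.0, 0.0])))   # order [1 2 0 3]
```

Because every parameter vector maps to a valid landing sequence, a plain binary GA over the ripple parameters avoids the feasibility and memory-efficiency problems of permutation representations, which is the point of the RSGA.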
Crane, Catherine; Heron, Jon; Gunnell, David; Lewis, Glyn; Evans, Jonathan; Williams, J. Mark G.
2014-01-01
Background Overgeneral autobiographical memory has repeatedly been identified as a risk factor for adolescent and adult psychopathology but the factors that cause such over-generality remain unclear. This study examined the association between childhood exposure to traumatic events and early adolescent overgeneral autobiographical memory in a large population sample. Methods Thirteen-year-olds, n = 5,792, participating in an ongoing longitudinal cohort study (ALSPAC) completed a written version of the Autobiographical Memory Test. Performance on this task was examined in relation to experience of traumatic events, using data recorded by caregivers close to the time of exposure. Results Results indicated that experiencing a severe event in middle childhood increased the likelihood of an adolescent falling into the lowest quartile for autobiographical memory specificity (retrieving 0 or 1 specific memory) at age 13 by approximately 60%. The association persisted after controlling for a range of potential socio-demographic confounders. Limitations Data on the traumatic event exposures were limited by the relatively restricted range of traumas examined, and the lack of contextual details surrounding both the traumatic event exposures themselves and the severity of children's post-traumatic stress reactions. Conclusions This is the largest study to date of the association between childhood trauma exposure and overgeneral autobiographical memory in adolescence. Findings suggest a modest association between exposure to traumatic events and later overgeneral autobiographical memory, a psychological variable that has been linked to vulnerability to clinical depression. PMID:24657714
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
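The EM-like mean-shift update described above might be sketched as follows. The Gaussian kernel, the particular weighting of confidence and appearance, and all names are illustrative assumptions, not the authors' exact formulation: the E-step softly associates detections with the tracked object, and the M-step moves to the likelihood-weighted mean.

```python
import numpy as np

def ml_mean_shift(pos, detections, det_weights, appearance_sim,
                  bandwidth=30.0, iters=20, tol=1e-3):
    """One tracker update in the spirit of the EM-like mean-shift above.

    pos            : current 2-D object position, shape (2,)
    detections     : (N, 2) array of detection centers from all models
    det_weights    : per-detection confidence scores, shape (N,)
    appearance_sim : callable giving similarity of each detection to the
                     tracked object's local appearance model
    """
    for _ in range(iters):
        d2 = np.sum((detections - pos) ** 2, axis=1)
        # E-step: soft association of each detection to the object,
        # combining spatial proximity, confidence, and appearance.
        assoc = det_weights * appearance_sim(detections) * \
                np.exp(-d2 / (2.0 * bandwidth ** 2))
        if assoc.sum() == 0:
            break
        # M-step: move to the likelihood-weighted mean of the detections.
        new_pos = (assoc[:, None] * detections).sum(axis=0) / assoc.sum()
        if np.linalg.norm(new_pos - pos) < tol:
            pos = new_pos
            break
        pos = new_pos
    return pos
```

In the sequential multi-person strategy, an update of this kind would be run per object in priority order, with already-claimed detections down-weighted for later objects.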
Low-memory iterative density fitting.
Grajciar, Lukáš
2015-07-30
A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
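The solver structure described above can be sketched generically. In the paper's setting the matrix-vector product A·p would be evaluated matrix-free by CFMM; here a dense product stands in for it, and the block layout and names are illustrative assumptions:

```python
import numpy as np

def block_jacobi_pcg(A, b, blocks, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients with a block-Jacobi
    preconditioner built from diagonal blocks of the metric matrix A.

    blocks : list of (start, end) index ranges defining the blocks.
    """
    # Pre-factorize each diagonal block once; only these small blocks
    # (not the full matrix) need to be stored for preconditioning.
    inv_blocks = [(s, e, np.linalg.inv(A[s:e, s:e])) for s, e in blocks]

    def apply_prec(r):
        z = np.empty_like(r)
        for s, e, inv in inv_blocks:
            z[s:e] = inv @ r[s:e]
        return z

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p              # evaluated matrix-free (CFMM) in the paper
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The better the blocks approximate the full metric matrix, the fewer CG iterations (and hence expensive matrix-vector products) are needed, which is the trade-off the abstract quantifies.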
Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka
2018-06-01
The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
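For context on the problem being specialized, a generic successive-shortest-path min-cost-flow solver (Dijkstra with potentials) is sketched below. This is only a baseline illustration of the optimization problem; the paper's own algorithm exploits the structure of its fragment-reuse graph and is not reproduced here.

```python
import heapq

def min_cost_flow(n, edges, s, t, maxf):
    """Push up to `maxf` units from s to t at minimum total cost.

    edges : list of (u, v, capacity, cost) with non-negative costs.
    Returns (flow, total_cost).
    """
    graph = [[] for _ in range(n)]
    def add(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add(u, v, cap, cost)

    INF = float('inf')
    flow = cost = 0
    h = [0] * n                       # node potentials keep costs non-negative
    while flow < maxf:
        dist = [INF] * n
        prev = [None] * n             # (node, edge index) on shortest path
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for i, (v, cap, c, _) in enumerate(graph[u]):
                nd = d + c + h[u] - h[v]
                if cap > 0 and nd < dist[v]:
                    dist[v] = nd
                    prev[v] = (u, i)
                    heapq.heappush(pq, (nd, v))
        if dist[t] == INF:
            break
        for v in range(n):
            if dist[v] < INF:
                h[v] += dist[v]
        # Find the bottleneck capacity along the path, then push flow.
        f = maxf - flow
        v = t
        while v != s:
            u, i = prev[v]
            f = min(f, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= f
            graph[v][graph[u][i][3]][1] += f
            cost += f * graph[u][i][2]
            v = u
        flow += f
    return flow, cost
```

The reported seven-fold speedup comes from replacing a general-purpose solver of this kind with one tailored to the problem's graph shape.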
Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR
NASA Astrophysics Data System (ADS)
Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen
2013-03-01
Adaptive mesh refinement (AMR) simulations require highly parallelized algorithms that manage memory efficiently, and as compute engines grow larger they will demand new levels of both parallel efficiency and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch- or grid-based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer-level grids. This allows for global load balancing instead of level-by-level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR, independent of the use of threading.
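The threaded, finer-levels-first scheduling idea can be sketched abstractly. All names are illustrative assumptions, and the real scheduler interleaves communication with computation rather than using a simple pool; the sketch only shows how ghost-cell decoupling makes the per-grid advances independent tasks that can be prioritized by refinement level:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def advance_all_levels(grids, advance, workers=4):
    """Thread the grid advances across AMR levels, giving preference to
    finer-level grids, instead of synchronizing level by level.

    grids   : list of (level, grid) pairs
    advance : callable that updates one grid and returns the result
    Ghost cells are assumed to already decouple the grids, so every
    advance is an independent task.
    """
    # Higher refinement level = higher priority (popped first).
    queue = [(-level, i, grid) for i, (level, grid) in enumerate(grids)]
    heapq.heapify(queue)
    ordered = []
    while queue:
        _, _, grid = heapq.heappop(queue)
        ordered.append(grid)
    # Submission order lets finer grids start first; a global worker
    # pool replaces per-level barriers, enabling global load balancing.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(advance, ordered))
```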
Energy efficient hybrid computing systems using spin devices
NASA Astrophysics Data System (ADS)
Sharad, Mrigank
Emerging spin-devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation, such as majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin based neurons can be integrated with CMOS and other emerging devices leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve `mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.
A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments
Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J
2014-01-01
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, Huggins et al. (2010) recently presented a pseudo-likelihood for a multi-sample batch-marking study in which they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. In terms of precision, gains are made when using unique identifiers and employing the CMAS model; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
Motor Action and Emotional Memory
ERIC Educational Resources Information Center
Casasanto, Daniel; Dijkstra, Katinka
2010-01-01
Can simple motor actions affect how efficiently people retrieve emotional memories, and influence what they choose to remember? In Experiment 1, participants were prompted to retell autobiographical memories with either positive or negative valence, while moving marbles either upward or downward. They retrieved memories faster when the direction…
An extended continuum model considering optimal velocity change with memory and numerical tests
NASA Astrophysics Data System (ADS)
Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng
2018-01-01
In this paper, an extended continuum model of traffic flow is proposed with the consideration of optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation considering the optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of the optimal velocity changes with memory enhance the stability of traffic flow. Furthermore, the numerical results demonstrate that accounting for optimal velocity changes with memory avoids the disadvantages of relying on historical information, increasing the stability of traffic flow on the road and reducing vehicles' energy consumption.
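One common way such a memory term enters a continuum optimal-velocity model (a hedged sketch with generic symbols, not the paper's exact equations) couples the continuity equation with a motion equation whose relaxation toward the optimal velocity is supplemented by the change of the optimal velocity over a memory step $\tau$:

```latex
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \frac{\partial (\rho v)}{\partial x} = 0, \\
&\frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x}
  = a\bigl[V(\rho) - v\bigr]
  + \lambda\,\frac{V\bigl(\rho(x,t)\bigr) - V\bigl(\rho(x,t-\tau)\bigr)}{\tau},
\end{aligned}
```

where $\rho$ is density, $v$ velocity, $V(\rho)$ the optimal velocity function, $a$ the driver sensitivity, $\tau$ the memory step, and $\lambda$ the strength of the memory effect. Linear stability analysis then asks for which $(\lambda, \tau)$ small perturbations are damped, which is the kind of condition the abstract refers to.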
Sleep Dependent Memory Consolidation in Children with Autism Spectrum Disorder.
Maski, Kiran; Holbrook, Hannah; Manoach, Dara; Hanson, Ellen; Kapur, Kush; Stickgold, Robert
2015-12-01
Examine the role of sleep in the consolidation of declarative memory in children with autism spectrum disorder (ASD). Case-control study. Home-based study with sleep and wake conditions. Twenty-two participants with ASD and 20 control participants between 9 and 16 y of age. Participants were trained to criterion on a spatial declarative memory task and then given a cued recall test. Retest occurred after a period of daytime wake (Wake) or a night of sleep (Sleep) with home-based polysomnography; Wake and Sleep conditions were counterbalanced. Children with ASD had poorer sleep efficiency than controls, but other sleep macroarchitectural and microarchitectural measures were comparable after controlling for age and medication use. Both groups demonstrated better memory consolidation across Sleep than Wake, although participants with ASD had poorer overall memory consolidation than controls. There was no interaction between group and condition. The change in performance across sleep, independent of medication and age, showed no significant relationships with any specific sleep parameters other than total sleep time and showed a trend toward less forgetting in the control group. This study shows that despite their more disturbed sleep quality, children with autism spectrum disorder (ASD) still demonstrate more stable memory consolidation across sleep than in wake conditions. The findings support the importance of sleep for stabilizing memory in children with and without neurodevelopmental disabilities. Our results suggest that improving sleep quality in children with ASD could have direct benefits to improving their overall cognitive functioning. © 2015 Associated Professional Sleep Societies, LLC.
Spencer, Robert J; Reckow, Jaclyn; Drag, Lauren L; Bieliauskas, Linas A
2016-12-01
We assessed the validity of a brief incidental learning measure based on the Similarities and Vocabulary subtests of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Most neuropsychological assessments for memory require intentional learning, but incidental learning occurs without explicit instruction. Incidental memory tests such as the WAIS-III Symbol Digit Coding subtest have existed for many years, but few memory studies have used a semantically processed incidental learning model. We conducted a retrospective analysis of 37 veterans with traumatic brain injury, referred for outpatient neuropsychological testing at a Veterans Affairs hospital. As part of their evaluation, the participants completed the incidental learning tasks. We compared their incidental learning performance to their performance on traditional memory measures. Incidental learning scores correlated strongly with scores on the California Verbal Learning Test-Second Edition (CVLT-II) and Brief Visuospatial Memory Test-Revised (BVMT-R). After we conducted a partial correlation that controlled for the effects of age, incidental learning correlated significantly with the CVLT-II Immediate Free Recall, CVLT-II Short-Delay Recall, CVLT-II Long-Delay Recall, and CVLT-II Yes/No Recognition Hits, and with the BVMT-R Delayed Recall and BVMT-R Recognition Discrimination Index. Our incidental learning procedures, derived from subtests of the WAIS-IV, are an efficient and valid way of measuring memory. These tasks add minimally to testing time and capitalize on the semantic encoding that is inherent in completing the Similarities and Vocabulary subtests.
Vicarious extinction learning during reconsolidation neutralizes fear memory.
Golkar, Armita; Tjaden, Cathelijn; Kindt, Merel
2017-05-01
Previous studies have suggested that fear memories can be updated when recalled, a process referred to as reconsolidation. Given the beneficial effects of model-based safety learning (i.e. vicarious extinction) in preventing the recovery of short-term fear memory, we examined whether consolidated long-term fear memories could be updated with safety learning accomplished through vicarious extinction learning initiated within the reconsolidation time-window. We assessed this in a final sample of 19 participants that underwent a three-day within-subject fear-conditioning design, using fear-potentiated startle as our primary index of fear learning. On day 1, two fear-relevant stimuli (reinforced CSs) were paired with shock (US) and a third stimulus served as a control (CS). On day 2, one of the two previously reinforced stimuli (the reminded CS) was presented once in order to reactivate the fear memory 10 min before vicarious extinction training was initiated for all CSs. The recovery of the fear memory was tested 24 h later. Vicarious extinction training conducted within the reconsolidation time window specifically prevented the recovery of the reactivated fear memory (p = 0.03), while leaving fear-potentiated startle responses to the non-reactivated cue intact (p = 0.62). These findings are relevant to both basic and clinical research, suggesting that a safe, non-invasive model-based exposure technique has the potential to enhance the efficiency and durability of anxiolytic therapies. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Arend, Anna M.; Zimmer, Hubert D.
2012-01-01
In this training study, we aimed to selectively train participants' filtering mechanisms to enhance visual working memory (WM) efficiency. The highly restricted nature of visual WM capacity renders efficient filtering mechanisms crucial for its successful functioning. Filtering efficiency in visual WM can be measured via the lateralized change…