Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding
Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David
2017-04-12
Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
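The sparse-representation step at the heart of such dictionary-based coding can be illustrated with a minimal sketch: given an overcomplete dictionary, orthogonal matching pursuit greedily picks a few atoms to reconstruct a signal. This is a generic stand-in under assumed toy data, not the paper's spatio-temporal training procedure.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x as a k-sparse
    combination of dictionary atoms (columns of D)."""
    residual, support = x.astype(float), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms (hypothetical dictionary)
true_code = np.zeros(32)
true_code[[3, 17]] = [1.5, -2.0]            # a 2-sparse "patch" in dictionary space
x = D @ true_code
rec = omp(D, x, k=2)
err = np.linalg.norm(D @ rec - x)           # reconstruction residual
```

In the coding setting the code `rec`, being sparse, is what gets entropy-coded; the dictionary itself would be learned from training patches rather than drawn at random.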
NASA Astrophysics Data System (ADS)
Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van
2013-12-01
The H.264/AVC video coding standard introduces improved tools that increase compression efficiency, and the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one that contributes to high coding efficiency. Furthermore, the standard defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both the single-view and multi-view scenarios. However, these video coding techniques have high computational complexity. Several techniques aimed at accelerating the inter prediction process have been proposed in the literature over the last few years, but no prior work focuses on bidirectional or hierarchical prediction. In this article, with the emergence of many-core processors and accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate-distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
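The kernel that such GPU work parallelizes is block matching: for each block, search nearby positions in a reference frame for the candidate with the lowest sum of absolute differences (SAD). A serial toy version with assumed sizes (4×4 block, ±3 search window), not the paper's GPU mapping:

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(16, 16)).astype(np.int32)   # reference frame
# current block is the reference content at (6, 5): true motion is (dy, dx) = (2, 1)
cur = ref[6:10, 5:9]

best, best_mv = None, None
for dy in range((-3), 4):
    for dx in range((-3), 4):
        y, x = 4 + dy, 4 + dx                  # candidate position around (4, 4)
        cand = ref[y:y + 4, x:x + 4]
        sad = int(np.abs(cand - cur).sum())    # sum of absolute differences
        if best is None or sad < best:
            best, best_mv = sad, (dy, dx)
```

On a GPU each candidate (or each block) is evaluated by a separate thread, which is why full search maps so well to many-core hardware.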
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven efficient for H.264 multi-view coding. Although these prediction structures, together with QP cascading schemes, provide superior compression efficiency compared to the traditional IBBP coding scheme, they cannot fully meet the temporal scalability requirements of the bit stream. On the other hand, a fully scalable bit stream obtained by "temporal identifier based" hierarchical coding provides a number of advantages, including bit-rate adaptation and improved error resilience, but falls short of the former scheme in compression efficiency. In this paper we propose to combine the two approaches so that a fully scalable bit stream can be realized with minimal loss of compression efficiency relative to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method is also proposed for identifying the temporal identifier of legacy H.264/AVC base-layer packets. Simulation results further show that the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time per view component of only 0.38 ms.
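The "temporal identifier based" layering can be sketched for a dyadic hierarchical-B GOP: each picture gets a layer index, and dropping every layer above a chosen index halves the frame rate once per dropped layer. The assignment below is the standard dyadic rule, not the paper's specific proposal for legacy base-layer packets.

```python
def temporal_id(poc, gop_size):
    """Dyadic hierarchical-B temporal layer of a picture.

    poc: picture order count; gop_size: GOP length, a power of two.
    Layer 0 holds the key pictures; a decoder keeping only layers
    <= T plays back at rate (full rate) / 2**(max_layer - T).
    """
    if gop_size & (gop_size - 1):
        raise ValueError("gop_size must be a power of two")
    i = poc % gop_size
    if i == 0:
        return 0
    depth = gop_size.bit_length() - 1          # log2(gop_size)
    trailing = (i & -i).bit_length() - 1       # largest power of two dividing i
    return depth - trailing

# one GOP of 8 pictures: key picture, then progressively finer B layers
layers = [temporal_id(i, 8) for i in range(8)]
```

Keeping layers 0-2 of a GOP-8 stream, for example, discards every odd-indexed picture and yields exactly half the frame rate.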
Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot
Raman, Rajani; Sarkar, Sandip
2016-01-01
Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises from the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in early visual areas, but the underlying neural mechanism of filling-in at the blind spot remains far from completely understood. In this work, we present a fresh perspective on the computational mechanism of the filling-in process within the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulates both sides of the blind spot. Using the generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. We further demonstrate that the tolerance of filling-in qualitatively matches experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process may be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
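The core predictive-estimator update used in such networks can be shown in one level. In this toy linear sketch (assumed weights and causes, not the paper's three-level blind-spot model), an estimate r is refined by feeding its prediction U @ r back and the prediction error forward until the input is explained:

```python
import numpy as np

U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # generative weights: hidden causes -> input
x = U @ np.array([0.5, -1.0])       # sensory input generated by known causes

r = np.zeros(2)                     # higher-level estimate ("predictive-estimator" state)
lr = 0.1
for _ in range(200):
    e = x - U @ r                   # feed-forward prediction error
    r += lr * U.T @ e               # error-driven correction of the estimate
final_err = np.linalg.norm(x - U @ r)
```

In a full hierarchy each level plays both roles at once, predicting the level below and sending its residual error upward; filling-in corresponds to the prediction dominating where bottom-up input is absent.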
Decision-making in schizophrenia: A predictive-coding perspective.
Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas
2018-05-31
Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences.
Neural Elements for Predictive Coding.
Shipp, Stewart
2016-01-01
Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural engineering in the mouse. The exercise highlights a number of recurring themes, amongst them the consideration of interneuron diversity as a spur to theoretical development and the potential for specifying a pyramidal neuron's function by its individual 'connectome,' combining its extrinsic projection (forward, backward or subcortical) with evaluation of its intrinsic network (e.g., unidirectional versus bidirectional connections with other pyramidal neurons). PMID:27917138
Laminar fMRI and computational theories of brain function.
Stephan, K E; Petzschner, F H; Kasper, L; Bayer, J; Wellstein, K V; Stefanics, G; Pruessmann, K P; Heinzle, J
2017-11-02
Recently developed methods for functional MRI at the resolution of cortical layers (laminar fMRI) offer a novel window into neurophysiological mechanisms of cortical activity. Beyond physiology, laminar fMRI also offers an unprecedented opportunity to test influential theories of brain function. Specifically, hierarchical Bayesian theories of brain function, such as predictive coding, assign specific computational roles to different cortical layers. Combined with computational models, laminar fMRI offers a unique opportunity to test these proposals noninvasively in humans. This review provides a brief overview of predictive coding and related hierarchical Bayesian theories, summarises their predictions with regard to layered cortical computations, examines how these predictions could be tested by laminar fMRI, and considers methodological challenges. We conclude by discussing the potential of laminar fMRI for clinically useful computational assays of layer-specific information processing.
Disruption of hierarchical predictive coding during sleep
Strauss, Melanie; Sitt, Jacobo D.; King, Jean-Remi; Elbaz, Maxime; Azizi, Leila; Buiatti, Marco; Naccache, Lionel; van Wassenhove, Virginie; Dehaene, Stanislas
2015-01-01
When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding. PMID:25737555
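The local/global distinction in this paradigm can be sketched as two levels of probability estimation over the same tone sequences. The counts below are assumed toy values, not the experimental stimulus set: the final 'B' of the frequent pattern is locally rare (its tone-to-tone transition is improbable, an MMN-like surprise), while the globally rare pattern is improbable as a whole (a P300-like surprise).

```python
import math
from collections import Counter

# Local-global paradigm: five-tone sequences; the "local" regularity is the
# tone-to-tone transition, the "global" regularity is which pattern is frequent.
frequent, rare = "AAAAB", "AAAAA"            # global standard vs global deviant
block = [frequent] * 90 + [rare] * 10

# Local level: first-order transition probabilities over all heard tone pairs
pairs = Counter(p for seq in block for p in zip(seq, seq[1:]))
total_from_A = sum(v for (a, b), v in pairs.items() if a == "A")
p_B_after_A = pairs[("A", "B")] / total_from_A
local_surprise = -math.log2(p_B_after_A)     # MMN-like: 'B' after 'A' is locally rare

# Global level: probability of each whole pattern
p_rare = block.count(rare) / len(block)
global_surprise = -math.log2(p_rare)         # P300-like: the pattern itself is rare
```

The paper's sleep finding maps onto this split: the global-level surprise response vanishes during sleep, while a (partial) local-level response survives.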
Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.
2002-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used in design today towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called 'high-fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (DISDECO, for short) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation of two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
On a High-Fidelity Hierarchical Approach to Buckling Load Calculations
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used in design today towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called "high-fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (DISDECO, for short) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation of two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
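The lowest rung of such a hierarchy is the classical closed-form buckling load of a perfect isotropic cylinder under axial compression, against which refined analyses and knockdown factors are judged. A sketch with hypothetical dimensions (not the test articles above):

```python
import math

def classical_buckling_load(E, t, R, nu):
    """Classical (perfect-shell) axial buckling load of an isotropic cylinder:
    sigma_cl = E*t / (R*sqrt(3*(1 - nu**2))),  P_cl = 2*pi*R*t*sigma_cl.
    Measured loads fall well below this because of imperfection sensitivity,
    which is why hierarchical, high-fidelity analyses are needed."""
    sigma_cl = E * t / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))
    return 2.0 * math.pi * R * t * sigma_cl

# illustrative aluminium shell: E = 70 GPa, t = 0.5 mm, R = 100 mm, nu = 0.3
P = classical_buckling_load(E=70e9, t=0.5e-3, R=0.1, nu=0.3)   # in newtons
```

Higher levels of the hierarchy then replace this idealization with measured imperfection data and nonlinear finite element models.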
Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.
Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian
2018-02-23
Gene Ontology (GO) uses structured vocabularies (terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms, indicating that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from the massive set of terms defined by GO is a difficult challenge. To address this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms and to optimize a series of hashing functions that encode massive GO terms as compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic-similarity-based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST-based gene function prediction, and it again significantly improves prediction performance. The code for HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash.
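The payoff of encoding terms as compact binary codes is that similarity search reduces to cheap Hamming distances. The codes below are assumed toy values (not the output of the HPHash optimization); the point is only that a hierarchy-respecting code places related terms close in Hamming space:

```python
import numpy as np

# Hypothetical 4-bit codes: GO:b is meant to be a child of GO:a,
# GO:c an unrelated branch, so d(a, b) << d(a, c).
codes = {"GO:a": np.array([0, 0, 0, 1], dtype=np.uint8),
         "GO:b": np.array([0, 0, 1, 1], dtype=np.uint8),
         "GO:c": np.array([1, 1, 1, 0], dtype=np.uint8)}

def hamming(u, v):
    """Hamming distance between two binary code vectors."""
    return int(np.count_nonzero(u != v))

# nearest term to GO:a in code space
others = [t for t in codes if t != "GO:a"]
nearest = min(others, key=lambda t: hamming(codes["GO:a"], codes[t]))
```

With real codes this lookup runs over thousands of terms using XOR and popcount, which is what makes the projection of the gene-term matrix tractable.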
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. 
Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of the PRO model based on hierarchical error prediction that was developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
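The prediction-and-prediction-error principle behind these frameworks reduces, in its simplest form, to a delta rule. The sketch below is a toy in the spirit of the PRO account (assumed learning rate and outcomes, not the published implementation): a predictor learns the effort that follows a cue, and the error signal shrinks as expectations form.

```python
alpha = 0.2                       # learning rate (assumed)
prediction = 0.0                  # predicted effort for this task cue
errors = []
for trial in range(30):
    outcome = 1.0                 # the cue is always followed by high effort
    error = outcome - prediction  # discrepancy = prediction-error signal
    errors.append(error)
    prediction += alpha * error   # expectations absorb the surprise over trials
```

Under this reading, MPFC activity tracks `error`-like quantities for motivationally relevant variables (effort, reward), large early in learning and small once the environment is predictable.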
NASA Astrophysics Data System (ADS)
Basak, Subhash C.; Mills, Denise; Hawkins, Douglas M.
2008-06-01
A hierarchical classification study was carried out based on a set of 70 chemicals—35 which produce allergic contact dermatitis (ACD) and 35 which do not. This approach was implemented using a regular ridge regression computer code, followed by conversion of regression output to binary data values. The hierarchical descriptor classes used in the modeling include topostructural (TS), topochemical (TC), and quantum chemical (QC), all of which are based solely on chemical structure. The concordance, sensitivity, and specificity are reported. The model based on the TC descriptors was found to be the best, while the TS model was extremely poor.
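The "ridge regression followed by conversion to binary values" step can be sketched end to end. The data below are synthetic stand-ins (random descriptors, labels from a hypothetical linear rule), not the 70-chemical ACD set; the point is the pipeline: closed-form ridge fit, threshold the continuous output, then report concordance, sensitivity, and specificity.

```python
import numpy as np

rng = np.random.default_rng(7)
# toy stand-in: 70 "chemicals", 5 structure-derived descriptors,
# labels +1 (sensitizer) / -1 (non-sensitizer)
X = rng.standard_normal((70, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = np.sign(X @ w_true)

lam = 1.0                                    # ridge penalty (assumed)
# closed-form ridge fit: w = (X^T X + lam*I)^(-1) X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
pred = np.sign(X @ w)                        # convert regression output to classes

tp = np.sum((pred == 1) & (y == 1))
tn = np.sum((pred == -1) & (y == -1))
concordance = (tp + tn) / len(y)             # overall agreement
sensitivity = tp / np.sum(y == 1)
specificity = tn / np.sum(y == -1)
```

In the study itself the descriptor matrix X would be built hierarchically (TS, then TC, then QC descriptors) and the three statistics compared across descriptor classes.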
Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.
Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao
2011-12-01
In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We exploit the strong correlations in prediction-type inheritance to eliminate superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of the 16×16 partition. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove unnecessary BI calculations. Moreover, for block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes efficiently reduce the extensive computational burden of calculating the BI prediction. Compared to the JSVM 9.11 software, our method reduces encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss.
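The early-termination idea can be sketched as a decision rule: evaluate the cheap uni-directional costs first, and pay for the expensive BI evaluation only when the parent block chose BI or the two uni-directional costs are close. This is an illustrative rule with an assumed threshold, not the paper's actual thresholds or the JSVM algorithm.

```python
def select_prediction_type(cost_fw, cost_bw, parent_type, bi_cost_fn, alpha=1.1):
    """Pick FW/BW/BI for a sub-partition, skipping the BI evaluation when
    the parent 16x16 type and an adaptive threshold say it is unpromising.
    alpha is a hypothetical threshold; bi_cost_fn is called only if needed."""
    best_uni = ("FW", cost_fw) if cost_fw <= cost_bw else ("BW", cost_bw)
    worse_uni = max(cost_fw, cost_bw)
    if parent_type != "BI" and worse_uni > alpha * best_uni[1]:
        return best_uni[0]                 # BI unlikely to win; skip it entirely
    cost_bi = bi_cost_fn()                 # expensive: run only when promising
    return "BI" if cost_bi < best_uni[1] else best_uni[0]

calls = []
def fake_bi():                             # stand-in for the BI RD-cost search
    calls.append(1)
    return 90.0

t1 = select_prediction_type(100.0, 200.0, parent_type="FW", bi_cost_fn=fake_bi)
t2 = select_prediction_type(100.0, 105.0, parent_type="BI", bi_cost_fn=fake_bi)
```

In the first call the lopsided FW/BW costs let the rule return early without ever running the BI search; in the second, inheritance from a BI parent triggers the full evaluation.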
Zhang, Jingpu; Zhang, Zuping; Wang, Zixiang; Liu, Yuting; Deng, Lei
2018-05-15
Long non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified, yet most lncRNAs remain functionally uncharacterized. Computational approaches provide new insight into the potential functional implications of lncRNAs. Considering that each lncRNA may have multiple functions and that a function may be further specialized into sub-functions, we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs that uses a hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are trained incrementally, level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input to the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and an overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods. The source code and data are available at http://denglab.org/NeuraNetL2GO/. Contact: leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
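Level-by-level hierarchical multi-label prediction usually enforces the true-path rule: a child term may only be predicted when its parent at the level above was predicted. The sketch below uses a hypothetical three-term hierarchy and raw score inputs in place of the trained per-level networks; it shows only the decision logic, not NeuraNetL2GO's training.

```python
# hypothetical mini-ontology: term -> parent
hierarchy = {"GO:child1": "GO:root", "GO:child2": "GO:root",
             "GO:leaf1": "GO:child1"}
levels = [["GO:root"], ["GO:child1", "GO:child2"], ["GO:leaf1"]]

def predict(scores, threshold=0.5):
    """Accept a term only if its score clears the threshold AND its
    parent was accepted at the previous level (true-path rule)."""
    predicted = set()
    for level in levels:                       # walk the ontology top-down
        for term in level:
            parent = hierarchy.get(term)
            parent_ok = parent is None or parent in predicted
            if parent_ok and scores.get(term, 0.0) >= threshold:
                predicted.add(term)
    return predicted

# leaf1 scores high but its parent child1 does not: the rule prunes leaf1
out = predict({"GO:root": 0.9, "GO:child1": 0.2,
               "GO:child2": 0.8, "GO:leaf1": 0.95})
```

Incremental per-level training pairs naturally with this decoding: each level's classifier only ever has to discriminate among terms whose parents survived.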
Predictive Coding in Area V4: Dynamic Shape Discrimination under Partial Occlusion
Choi, Hannah; Pasupathy, Anitha; Shea-Brown, Eric
2018-01-01
The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion. After a delay, V4 responses combine both feedforward input and feedback signals from the PFC; the latter reflect predictions made by PFC about the visual stimulus underlying V4 activity. We find that this model captures key features of V4 and PFC dynamics observed in experiments. Specifically, PFC responses are strongest for occluded stimuli and delayed responses in V4 are less sensitive to occlusion, supporting our hypothesis that the feedback signals from PFC underlie robust discrimination of occluded shapes. Thus, our study proposes that area V4 and PFC participate in hierarchical inference, with feedback signals encoding top-down predictions about occluded shapes. PMID:29566355
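The two-step account above can be caricatured numerically. The numbers below are assumed toy values, not fitted data: in step 1 the V4 signal is purely feedforward and so scales with the visible fraction of the shape; in step 2 a PFC feedback prediction of the full shape is mixed in, flattening the dependence on occlusion.

```python
import numpy as np

visible_fraction = np.array([1.0, 0.7, 0.4])   # increasing occlusion
feedforward = visible_fraction                  # step 1: initial V4 response
pfc_prediction = 1.0                            # PFC belief in the full shape
w_fb = 0.8                                      # hypothetical feedback mixing weight
delayed = (1 - w_fb) * feedforward + w_fb * pfc_prediction   # step 2 response

spread_early = feedforward.max() - feedforward.min()
spread_late = delayed.max() - delayed.min()     # smaller: less occlusion-sensitive
```

The reduced late-response spread is the model's signature of robust discrimination under occlusion, with the residual occlusion dependence carried by the PFC error signal instead.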
Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?
Heilbron, Micha; Chait, Maria
2017-08-04
Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations.
An Interoceptive Predictive Coding Model of Conscious Presence
Seth, Anil K.; Suzuki, Keisuke; Critchley, Hugo D.
2011-01-01
We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness. PMID:22291673
2015-09-02
In this project, we hypothesized that visual memory of past motion trajectories may be used for selecting future behavior. Related presentations include "Decoding sequence of actions using fMRI", Society for Neuroscience Annual Meeting, San Diego, CA, USA, Nov 9-13, 2013 (abstract only), and Hansol Choi, Dae-Shik Kim, "Planning as inference in a Hierarchical Predictive Memory", Proceedings of the International Conference on Neural Information Processing.
Canonical microcircuits for predictive coding
Bastos, Andre M.; Usrey, W. Martin; Adams, Rick A.; Mangun, George R.; Fries, Pascal; Friston, Karl J.
2013-01-01
Summary This review considers the influential notion of a canonical (cortical) microcircuit in light of recent theories about neuronal processing. Specifically, we conciliate quantitative studies of microcircuitry and the functional logic of neuronal computations. We revisit the established idea that message passing among hierarchical cortical areas implements a form of Bayesian inference – paying careful attention to the implications for intrinsic connections among neuronal populations. By deriving canonical forms for these computations, one can associate specific neuronal populations with specific computational roles. This analysis discloses a remarkable correspondence between the microcircuitry of the cortical column and the connectivity implied by predictive coding. Furthermore, it provides some intuitive insights into the functional asymmetries between feedforward and feedback connections and the characteristic frequencies over which they operate. PMID:23177956
[Thermodynamics of the origin of life, evolution and aging].
Gladyshev, G P
2014-01-01
This article briefly discusses the history of the search for a thermodynamic approach to explain the origin of life, evolution, and the aging of living beings. The origin of life is a consequence of the requirements of quasi-equilibrium hierarchical thermodynamics, in particular supramolecular thermodynamics. The evolution and aging of living beings are accompanied by changes in the chemical and supramolecular composition of living bodies, as well as by changes in the composition and structure of all hierarchies of the living world. The thermodynamic principle of substance stability predicts the existence of a single genetic code in our universe. The thermodynamic theory optimizes physiology and medicine and recommends anti-aging diets and medicines. Hierarchical thermodynamics shapes the diversity of culture and art. The thermodynamic theory of the origin of life, evolution, and aging is a development of Clausius-Gibbs thermodynamics. Hierarchical thermodynamics is the mirror of the Darwin-Wallace theory.
Hierarchical Models for Type Ia Supernova Light Curves in the Optical and Near Infrared
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Narayan, G.; Kirshner, R. P.
2011-01-01
I have constructed a comprehensive statistical model for Type Ia supernova optical and near infrared light curves. Since the near infrared light curves are excellent standard candles and are less sensitive to dust extinction and reddening, the combination of near infrared and optical data better constrains the host galaxy extinction and improves the precision of distance predictions to SN Ia. A hierarchical probabilistic model coherently accounts for multiple random and uncertain effects, including photometric error, intrinsic supernova light curve variations and correlations across phase and wavelength, dust extinction and reddening, peculiar velocity dispersion and distances. An improved BayeSN MCMC code is implemented for computing probabilistic inferences for individual supernovae and the SN Ia and host galaxy dust populations. I use this hierarchical model to analyze nearby Type Ia supernovae with optical and near infrared data from the PAIRITEL, CfA3, and CSP samples and the literature. Using cross-validation to test the robustness of the model predictions, I find that the rms Hubble diagram scatter of predicted distance moduli is 0.11 mag for SN with optical and near infrared data versus 0.15 mag for SN with only optical data. Accounting for the dispersion expected from random peculiar velocities, the rms intrinsic prediction error is 0.08-0.10 mag for SN with both optical and near infrared light curves. I discuss results for the inferred intrinsic correlation structures of the optical-NIR SN Ia light curves and the host galaxy dust distribution captured by the hierarchical model. The continued observation and analysis of Type Ia SN in the optical and near infrared is important for improving their utility as precise and accurate cosmological distance indicators.
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions that obtain binary codes from hand-crafted features, which cannot optimally represent the information in a sample. Recently, deep learning methods have achieved better performance, since deep architectures can learn more effective image representations. However, these methods use only semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. This paper makes three contributions. First, a deep hashing method is proposed that extensively exploits both spatial details and semantic information, leveraging hierarchical convolutional features to construct an image pyramid representation. Second, the proposed deep network can directly exploit convolutional feature maps as input, preserving their spatial structure. Finally, we propose a new loss function that accounts for the quantization error of binarizing continuous embeddings into discrete binary codes while maintaining the semantic similarity and balance properties of the hash codes. Experimental results on four widely used data sets demonstrate that HRNH achieves superior performance over other state-of-the-art hashing methods.
Can, Dilara Deniz; Ginsburg-Block, Marika; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn
2013-09-01
This longitudinal study examined the predictive validity of the MacArthur Communicative Developmental Inventories-Short Form (CDI-SF), a parent report questionnaire about children's language development (Fenson, Pethick, Renda, Cox, Dale & Reznick, 2000). Data were first gathered from parents on the CDI-SF vocabulary scores for seventy-six children (mean age = 1;10). Four years later (mean age = 6;1), children were assessed on language outcomes (expressive vocabulary, syntax, semantics and pragmatics) and code-related skills, including phonemic awareness, word recognition and decoding skills. Hierarchical regression analyses revealed that early expressive vocabulary accounted for 17% of the variance in picture vocabulary, 11% of the variance in syntax, and 7% of the variance in semantics, while not accounting for any variance in pragmatics in kindergarten. CDI-SF scores did not predict code-related skills in kindergarten. The importance of early vocabulary skills for later language development and CDI-SF as a valuable research tool are discussed.
Vuust, Peter; Witek, Maria A. G.
2014-01-01
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard ("rhythm") and the brain's anticipatory structuring of music ("meter"). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and the desire for sensorimotor synchronization arising from musical rhythm may be a result of such mechanisms. PMID:25324813
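The error-minimization loop at the heart of the PC account can be shown with a toy update rule: a prior expectation is nudged toward each incoming sample in proportion to a precision-weighted prediction error. This is a purely schematic sketch of the general idea; the gain parameter and the values are invented, and it is not a model of rhythm or meter perception.

```python
# Toy precision-weighted prediction-error update: the expectation mu moves
# toward each sample by a fixed fraction of the error. Gain and data are
# illustrative only.
def update(mu: float, sample: float, gain: float = 0.3) -> float:
    """One step of prediction-error correction with a fixed gain."""
    error = sample - mu          # prediction error
    return mu + gain * error     # expectation shifts toward the input

mu = 0.0
for s in [1.0, 1.0, 1.0]:        # repeated input: expectation converges
    mu = update(mu, s)
print(round(mu, 3))              # → 0.657
```

With more repetitions the prediction error shrinks toward zero, which is the minimal sense in which "learning" reduces surprise in this framework.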
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions, and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57% compared to the H.264/AVC beta and gamma anchors, respectively.
Judge, Timothy A; Rodell, Jessica B; Klinger, Ryan L; Simon, Lauren S; Crawford, Eean R
2013-11-01
Integrating 2 theoretical perspectives on predictor-criterion relationships, the present study developed and tested a hierarchical framework in which each five-factor model (FFM) personality trait comprises 2 DeYoung, Quilty, and Peterson (2007) facets, which in turn comprise 6 Costa and McCrae (1992) NEO facets. Both theoretical perspectives-the bandwidth-fidelity dilemma and construct correspondence-suggest that lower order traits would better predict facets of job performance (task performance and contextual performance). They differ, however, as to the relative merits of broad and narrow traits in predicting a broad criterion (overall job performance). We first meta-analyzed the relationship of the 30 NEO facets to overall job performance and its facets. Overall, 1,176 correlations from 410 independent samples (combined N = 406,029) were coded and meta-analyzed. We then formed the 10 DeYoung et al. facets from the NEO facets, and 5 broad traits from those facets. Overall, results provided support for the 6-2-1 framework in general and the importance of the NEO facets in particular. (c) 2013 APA, all rights reserved.
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes alongside ICD-9. Unfortunately, CPT changes drastically year to year: codes are retired and replaced, so longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places each missing code in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision against a gold standard of optimal placement ("optimality precision").
High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and the successful grouping of retired with non-retired codes.
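The numerical-similarity placement idea can be sketched as follows: a retired code is matched to the known code sharing the longest numeric prefix (ties broken by numeric distance) and inherits that code's grouper category. The helper names and the toy grouper table are hypothetical illustrations, not the SCILHS implementation, and the sketch assumes purely numeric five-digit codes.

```python
# Hypothetical sketch of placement by numerical similarity of CPT codes.
# The toy hierarchy below is illustrative, not a real CPT grouper table.

def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common leading digits of two codes."""
    n = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        n += 1
    return n

def place_missing_code(missing: str, hierarchy: dict[str, str]) -> str:
    """Return the grouper category of the known code most similar to `missing`.

    `hierarchy` maps known numeric CPT codes to their grouper category.
    Primary key: longest shared prefix; tie-break: smallest numeric distance.
    """
    best = max(
        hierarchy,
        key=lambda known: (shared_prefix_len(missing, known),
                           -abs(int(known) - int(missing))),
    )
    return hierarchy[best]

# Toy example: a retired surgical code lands in the nearest grouper.
toy = {"33510": "CABG, vein only", "33533": "CABG, arterial",
       "99213": "Office visit"}
print(place_missing_code("33511", toy))  # → CABG, vein only
```

Real CPT also contains alphanumeric codes (e.g. Category III "T" codes), which this numeric tie-break would not handle; the paper's reviewed placement step exists precisely because such heuristics are imperfect.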
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
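The hierarchical (block) time-step idea that GOTHIC exploits can be illustrated with a toy assignment: each particle gets a power-of-two fraction of the global step, so slowly evolving particles are integrated rarely. Function names and the step criterion below are schematic assumptions, not GOTHIC's actual code.

```python
# Illustrative block time stepping: particle i is put on level n_i, meaning it
# is advanced with step dt_max / 2**n_i. Only "active" particles are updated
# on a given integer substep, which is where the speedup over a shared step
# comes from.
import math

def block_level(dt_desired: float, dt_max: float, max_level: int = 20) -> int:
    """Smallest level n such that dt_max / 2**n <= dt_desired."""
    n = max(0, math.ceil(math.log2(dt_max / dt_desired)))
    return min(n, max_level)

def active_particles(levels: list[int], step: int) -> list[int]:
    """Indices to integrate at integer substep `step`.

    A particle on level n fires every 2**(L - n) substeps, where L is the
    deepest level currently in use.
    """
    deepest = max(levels)
    return [i for i, n in enumerate(levels)
            if step % (2 ** (deepest - n)) == 0]

# Example: three particles with desired steps 0.9, 0.4, 0.1 under dt_max = 1.0
levels = [block_level(dt, 1.0) for dt in (0.9, 0.4, 0.1)]
print(levels)                       # → [1, 2, 4]
print(active_particles(levels, 0))  # → [0, 1, 2]  (all start together)
print(active_particles(levels, 1))  # → [2]        (only the fastest particle)
```

In a shared-time-step scheme all three particles would be advanced with the smallest step; here the two slower ones are touched 8x and 4x less often.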
Hierarchical nonlinear behavior of hot composite structures
NASA Technical Reports Server (NTRS)
Murthy, P. L. N.; Chamis, C. C.; Singhal, S. N.
1993-01-01
Hierarchical computational procedures are described to simulate the multiple scale thermal/mechanical behavior of high temperature metal matrix composites (HT-MMC) in the following three broad areas: (1) behavior of HT-MMC's from micromechanics to laminate via METCAN (Metal Matrix Composite Analyzer), (2) tailoring of HT-MMC behavior for optimum specific performance via MMLT (Metal Matrix Laminate Tailoring), and (3) HT-MMC structural response for hot structural components via HITCAN (High Temperature Composite Analyzer). Representative results from each area are presented to illustrate the effectiveness of computational simulation procedures and accompanying computer codes. The sample case results show that METCAN can be used to simulate material behavior such as the entire creep span; MMLT can be used to concurrently tailor the fabrication process and the interphase layer for optimum performance such as minimum residual stresses; and HITCAN can be used to predict the structural behavior such as the deformed shape due to component fabrication. These codes constitute virtual portable desk-top test laboratories for characterizing HT-MMC laminates, tailoring the fabrication process, and qualifying structural components made from them.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently risen as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changes. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
Janssen, Terry
2000-01-01
A system and method for facilitating decision-making comprising a computer program causing linkage of data representing a plurality of argument structure units into a hierarchical argument structure. Each argument structure unit comprises data corresponding to a hypothesis and its corresponding counter-hypothesis, data corresponding to grounds that provide a basis for inference of the hypothesis or its corresponding counter-hypothesis, data corresponding to a warrant linking the grounds to the hypothesis or its corresponding counter-hypothesis, and data corresponding to backing that certifies the warrant. The hierarchical argument structure comprises a top level argument structure unit and a plurality of subordinate level argument structure units. Each of the plurality of subordinate argument structure units comprises at least a portion of the grounds of the argument structure unit to which it is subordinate. Program code located on each of a plurality of remote computers accepts input from one of a plurality of contributors. Each input comprises data corresponding to an argument structure unit in the hierarchical argument structure and supports the hypothesis or its corresponding counter-hypothesis. A second programming code is adapted to combine the inputs into a single hierarchical argument structure. A third computer program code is responsive to the second computer program code and is adapted to represent a degree of support for the hypothesis and its corresponding counter-hypothesis in the single hierarchical argument structure.
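The unit structure described in this abstract can be rendered schematically as a tree. The Toulmin-style field names follow the text; the support roll-up rule (simple averaging of a unit with its subordinates) is an illustrative assumption, since the abstract does not specify how the degree of support is combined.

```python
# Schematic argument-structure unit as a tree node. Field names mirror the
# abstract (hypothesis, counter-hypothesis, grounds, warrant, backing); the
# averaging roll-up is an assumed, illustrative aggregation rule.
from dataclasses import dataclass, field

@dataclass
class ArgumentUnit:
    hypothesis: str
    counter_hypothesis: str
    grounds: str
    warrant: str
    backing: str
    support: float = 0.5              # contributor-supplied degree of support
    subordinates: list = field(default_factory=list)

    def combined_support(self) -> float:
        """Average this unit's support with its subordinates' combined support."""
        vals = [self.support] + [u.combined_support() for u in self.subordinates]
        return sum(vals) / len(vals)

# Toy two-level structure: a subordinate unit supplies part of the grounds.
leaf = ArgumentUnit("Sensor is faulty", "Sensor is fine",
                    "readings drift", "drift implies fault", "maintenance log",
                    support=0.9)
top = ArgumentUnit("Replace sensor", "Keep sensor",
                   "subordinate evidence", "fault implies replacement", "policy",
                   support=0.6, subordinates=[leaf])
print(round(top.combined_support(), 2))  # → 0.75
```

Any monotone aggregation (weighted average, noisy-OR, etc.) could replace the average; the point is only that support flows up the hierarchical argument structure.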
Bulashevska, Alla; Eils, Roland
2006-06-14
The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and, as empirical results indicate, competitive with previously reported approaches in terms of prediction accuracy. The code for the software is available upon request.
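The kind of base classifier described here can be sketched in a few lines: one first-order Markov chain over amino-acid transitions per class, with prediction by maximum posterior log-likelihood. This illustrates the principle only; HensBC's recursive ensemble construction is not reproduced, and the training data below are artificial.

```python
# Per-class first-order Markov chain classifier over amino-acid transitions,
# with add-one smoothing. A minimal sketch of the base-classifier idea, not
# the HensBC algorithm itself.
from collections import defaultdict
import math

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def train_chain(seqs):
    """Estimate add-one-smoothed transition log-probabilities from sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    logp = {}
    for a in ALPHABET:
        total = sum(counts[a].values()) + len(ALPHABET)
        for b in ALPHABET:
            logp[(a, b)] = math.log((counts[a][b] + 1) / total)
    return logp

def log_likelihood(seq, logp):
    return sum(logp[(a, b)] for a, b in zip(seq, seq[1:]))

def predict(seq, models, priors):
    """Return the class maximizing log prior + log likelihood."""
    return max(models,
               key=lambda c: math.log(priors[c]) + log_likelihood(seq, models[c]))

# Toy example with two artificial "location" classes.
models = {"membrane": train_chain(["LLLVVLL", "VLLLVL"]),
          "cytoplasm": train_chain(["KDEKDE", "EDKEDK"])}
priors = {"membrane": 0.5, "cytoplasm": 0.5}
print(predict("LVLLVL", models, priors))  # → membrane
```

Because each class model only needs transition counts, such classifiers remain usable for classes with few training sequences, which is the imbalance-robustness property the abstract highlights.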
ERIC Educational Resources Information Center
Ishitani, Terry T.
2010-01-01
This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who…
Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan; Su, Cheng-Yong
2017-11-13
Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cerebral responses to local and global auditory novelty under general anesthesia
Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir
2017-01-01
Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to a reduction in onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
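The Golomb-Rice coding used in such FELICS-style schemes is simple to illustrate: a nonnegative residual is split into a unary quotient and a k-bit binary remainder. The snippet below is a generic sketch of the codeword structure only; the adaptive parameter selection and the two-dimensional interpolation predictor of the actual system are omitted.

```python
# Generic Golomb-Rice codec for a single nonnegative integer with parameter k:
# quotient n >> k in unary ("1"*q followed by "0"), then k remainder bits.
# Small residuals (good predictions) get short codes.

def rice_encode(n: int, k: int) -> str:
    """Encode nonnegative integer n with Rice parameter k as a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits: str, k: int) -> int:
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                      # unary part: count of leading 1s
    r = int(bits[q + 1:q + 1 + k] or "0", 2) # k remainder bits
    return (q << k) | r

# Small residuals get short codes, which is why the predictor matters.
for n in (0, 3, 9):
    code = rice_encode(n, 2)
    print(n, code, rice_decode(code, 2))  # 0 000 0 / 3 011 3 / 9 11001 9
```

Choosing k near log2 of the mean residual keeps the unary part short on average; FELICS-style coders adapt k per context as statistics evolve.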
The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.
Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina
2018-05-23
Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. 
We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.
Hierarchical Ensemble Methods for Protein Function Prediction
2014-01-01
Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high-dimensional biomolecular data, the imbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods, which have shown significantly better performance than hierarchy-unaware "flat" prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and the resulting predictions are then assembled in a "consensus" ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
Mozaffari, Brian
2014-01-01
Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL), located deep in the hierarchy, serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick, complex visual phenomena. Bypassing intermediate hierarchy levels, information conveyed through the MTL "bridge" allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these "bridge" predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered a secondary process, occurring after an MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid the processing of transient but structured perceptual events, by allowing communication between upper and lower levels without calling on intermediate levels of representation. PMID:25426036
The inner formal structure of the H-T-P drawings: an exploratory study.
Vass, Z
1998-08-01
The study describes some interrelated patterns of traits of the House-Tree-Person (H-T-P) drawings using hierarchical cluster analysis. First, following the literature, 17 formal or structural aspects of the projective drawings were collected, after which a detailed manual for coding was compiled. Second, the interrater reliability and the consistency of this manual were tested. Third, the hierarchical cluster structure of the reliable and consistent formal aspects was analysed. The results are: (a) a psychometrically tested coding manual of the investigated formal-structural aspects, each of them illustrated with drawings that showed the highest interrater agreement; and (b) the hierarchical cluster structure of the formal aspects of the H-T-P drawings of "normal" adults.
Anterior insula coordinates hierarchical processing of tactile mismatch responses
Allen, Micah; Fardo, Francesca; Dietz, Martin J.; Hillebrandt, Hauke; Friston, Karl J.; Rees, Geraint; Roepstorff, Andreas
2016-01-01
The body underlies our sense of self, emotion, and agency. Signals arising from the skin convey warmth, social touch, and the physical characteristics of external stimuli. Surprising or unexpected tactile sensations can herald events of motivational salience, including imminent threats (e.g., an insect bite) and hedonic rewards (e.g., a caressing touch). Awareness of such events is thought to depend upon the hierarchical integration of body-related mismatch responses by the anterior insula (AIC). To investigate this possibility, we measured brain activity using functional magnetic resonance imaging while healthy participants performed a roving tactile oddball task. Mass-univariate analysis demonstrated robust activations in limbic, somatosensory, and prefrontal cortical areas previously implicated in tactile deviancy, body awareness, and cognitive control. Dynamic Causal Modelling revealed that unexpected stimuli increased the strength of forward connections along a caudal-to-rostral hierarchy, projecting from thalamic and somatosensory regions towards insula, cingulate, and prefrontal cortices. Within this ascending flow of sensory information, the AIC was the only region to show increased backwards connectivity to the somatosensory cortex, augmenting a reciprocal exchange of neuronal signals. Further, participants who rated stimulus changes as easier to detect showed stronger modulation of descending prefrontal cortex (PFC) to AIC connections by deviance. These results suggest that the AIC coordinates hierarchical processing of tactile prediction error. They are interpreted in support of an embodied predictive coding model in which AIC-mediated body awareness is involved in anchoring a global neuronal workspace. PMID:26584870
Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.
Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio
2017-10-12
The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of the prediction of gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even if for most human genes no HPO term associations are known and despite the increasing application of the HPO to relevant medical problems. Moreover most of the methods proposed in literature are not able to capture the hierarchical relationships between HPO terms, thus resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, that consists in a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction of the computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions that obey the true path rule, and can be used as a tool to improve and make consistent the HPO terms predictions starting from virtually any flat learning method. The implementation of the proposed methods is available as an R package from the CRAN repository.
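The two-step scheme described above (a flat learner per HPO term, then a hierarchy-aware combination of the predictions) can be illustrated with a minimal sketch. The top-down capping rule below is one simple way to make flat scores obey the true path rule (a term's score never exceeds its ancestors'); the function name and toy ontology are assumptions for illustration, not the authors' exact algorithm.

```python
# Illustrative sketch: enforce the true path rule on flat per-term scores
# by a top-down pass over the ontology DAG, capping each child's score at
# the minimum of its parents' corrected scores. Not the paper's exact method.

def true_path_correction(scores, parents, topo_order):
    """scores: {term: flat score in [0,1]}
    parents: {term: list of parent terms} (empty list for roots)
    topo_order: terms ordered so every parent precedes its children."""
    corrected = {}
    for term in topo_order:
        s = scores[term]
        if parents[term]:
            s = min(s, min(corrected[p] for p in parents[term]))
        corrected[term] = s
    return corrected

# Tiny three-term ontology: root -> a -> b
parents = {"root": [], "a": ["root"], "b": ["a"]}
flat = {"root": 0.6, "a": 0.9, "b": 0.7}
fixed = true_path_correction(flat, parents, ["root", "a", "b"])
# "a" is capped by its parent "root", and "b" is capped in turn.
```

The second step leaves any flat learner's outputs usable, which is the modularity the abstract emphasizes.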
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
Do Motivational Interviewing Behaviors Predict Reductions in Partner Aggression for Men and Women?
Woodin, Erica M.; Sotskova, Alina; O’Leary, K. Daniel
2011-01-01
Motivational interviewing is a directive, non-confrontational intervention to promote behavior change. The current study examined therapist behaviors during a successful brief motivational interviewing intervention for physically aggressive college dating couples (Woodin & O’Leary, 2010). Forty-five minute motivational interviews with each partner were videotaped and coded using the Motivational Interviewing Treatment Integrity scale (MITI; Moyers, Martin, Manuel, & Miller, 2003). Hierarchical modeling analyses demonstrated that therapist behaviors consistent with motivational interviewing competency predicted significantly greater reductions in physical aggression perpetration following the intervention. Specifically, greater reflection to question ratios by the therapists predicted reductions in aggression for both men and women, greater percentages of open versus closed questions predicted aggression reductions for women, and there was a trend for greater levels of global therapist empathy to predict aggression reductions for women. These findings provide evidence that motivational interviewing seems to have an effect on behavior change through therapist behaviors consistent with the theoretical underpinnings of motivational interviewing. PMID:22119133
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation, and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
Neural evidence for predictive coding in auditory cortex during speech production.
Okada, Kayoko; Matchin, William; Hickok, Gregory
2018-02-01
Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavior and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued either to silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left-hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex, and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.
Entraining IDyOT: Timing in the Information Dynamics of Thinking
Forth, Jamie; Agres, Kat; Purver, Matthew; Wiggins, Geraint A.
2016-01-01
We present a novel hypothetical account of entrainment in music and language, in the context of the Information Dynamics of Thinking model, IDyOT. The extended model affords an alternative view of entrainment, and of its companion term, pulse, from earlier accounts. The model is based on hierarchical, statistical prediction, modeling expectations of both what an event will be and when it will happen. As such, it constitutes a kind of predictive coding, with a particular novel hypothetical implementation. Here, we focus on the model's mechanism for predicting when a perceptual event will happen, given an existing sequence of past events, which may be musical or linguistic. We propose a range of tests to validate or falsify the model, at various different levels of abstraction, and argue that computational modeling in general, and this model in particular, can offer a means of providing limited but useful evidence for evolutionary hypotheses. PMID:27803682
A principle of economy predicts the functional architecture of grid cells.
Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay
2015-09-03
Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.
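Predictions (ii) and (iii) above can be checked with a few lines of arithmetic: the idealized optimal scale ratio is sqrt(e), which indeed falls inside the 1.4-1.7 band stated for realistic neurons, and a geometric progression of scales multiplies the covered range per module. The helper name and the module count in the example are ours, for illustration only.

```python
import math

# Numerical check of prediction (iii): the idealized optimal ratio between
# adjacent grid scales is sqrt(e), inside the stated 1.4-1.7 band.
ideal_ratio = math.sqrt(math.e)          # ≈ 1.6487
assert 1.4 < ideal_ratio < 1.7

# Prediction (ii): with scales in geometric progression, m modules with
# scale ratio r cover a range r**m times the finest resolution.
def range_over_resolution(r, m):
    return r ** m

# e.g. 8 modules at the idealized ratio span (sqrt(e))**8 = e**4 times
# the finest resolution.
span = range_over_resolution(ideal_ratio, 8)
```

This multiplicative coverage is the economy the theory exploits: resolution grows exponentially in the number of modules, not in the number of neurons per module.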
AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.
Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael
2017-06-15
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank. Contact: gribskov@purdue.edu. Supplementary data are available at Bioinformatics online.
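BirgRank is described above as a direct application of traditional PageRank with fixed decay on the combined two-layer graph. The kind of diffusion involved can be conveyed with a generic personalized-PageRank power iteration; the toy matrix, seed vector, and decay value are assumptions for the sketch, not the authors' setup (their implementation is in MATLAB).

```python
import numpy as np

# Generic personalized-PageRank power iteration of the sort BirgRank applies
# to the combined protein/function two-layer graph. A must be column-
# stochastic; v is the personalization (seed) vector, e.g. a protein's known
# annotations; alpha is the fixed decay (teleport) parameter.

def pagerank(A, v, alpha=0.85, tol=1e-10, max_iter=1000):
    x = v / v.sum()
    for _ in range(max_iter):
        x_new = alpha * (A @ x) + (1 - alpha) * v / v.sum()
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Toy 3-node cycle 0 -> 1 -> 2 -> 0 (columns already sum to 1).
A = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
v = np.array([1., 0., 0.])   # seed all mass on node 0
scores = pagerank(A, v)      # mass decays with distance from the seed
```

In the two-layer setting the same iteration runs on a block matrix coupling the protein-protein and function-function networks, so annotation evidence diffuses across both layers at once.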
Compiler-Directed File Layout Optimization for Hierarchical Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut
2013-01-01
File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O-intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
In-plane crashworthiness of bio-inspired hierarchical honeycombs
Yin, Hanfeng; Huang, Xiaofei; Scarpa, Fabrizio; ...
2018-03-13
Biological tissues like bone, wood, and sponge possess hierarchical cellular topologies, which are lightweight and feature an excellent energy absorption capability. Here we present a system of bio-inspired hierarchical honeycomb structures based on hexagonal, Kagome, and triangular tessellations. The hierarchical designs and a reference regular honeycomb configuration are subjected to simulated in-plane impact using the nonlinear finite element code LS-DYNA. The numerical simulation results show that the triangular hierarchical honeycomb provides the best performance compared to the other two hierarchical honeycombs, and features more than twice the energy absorbed by the regular honeycomb under similar loading conditions. We also propose a parametric study correlating the microstructure parameters (hierarchical length ratio r and the number of sub-cells N) to the energy absorption capacity of these hierarchical honeycombs. The triangular hierarchical honeycomb with N = 2 and r = 1/8 shows the highest energy absorption capacity among all the investigated cases, and this configuration could be employed as a benchmark for the design of future safety protective systems.
Measuring the hierarchy of feedforward networks
NASA Astrophysics Data System (ADS)
Corominas-Murtra, Bernat; Rodríguez-Caso, Carlos; Goñi, Joaquín; Solé, Ricard
2011-03-01
In this paper we explore the concept of hierarchy as a quantifiable descriptor of ordered structures, departing from the definition of three conditions that a hierarchical structure must satisfy: order, predictability, and pyramidal structure. According to these principles, we define a hierarchical index using concepts from graph and information theory. This estimator makes it possible to quantify the hierarchical character of any system that can be abstracted as a feedforward causal graph, i.e., a directed acyclic graph defined on a single connected structure. Our hierarchical index balances the predictability and pyramidal conditions through the definition of two entropies: one for the onward flow and the other for the backward reversion. We show how this index allows us to identify hierarchical, antihierarchical, and nonhierarchical structures. Our formalism reveals that, starting from the defined conditions for a hierarchical structure, feedforward trees and inverted trees emerge as the only causal structures of maximally hierarchical and maximally antihierarchical systems, respectively. Conversely, null values of the hierarchical index are attributed to a number of different network configurations: from linear chains, due to their lack of pyramidal structure, to fully connected feedforward graphs, where the diversity of onward pathways is canceled by the uncertainty (lack of predictability) when going backward. Some illustrative examples are provided to distinguish among these three types of hierarchical causal graphs.
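The entropy balance behind the index can be conveyed with a deliberately simplified sketch: compare the uncertainty of onward choices with the uncertainty of backward reversion on a small DAG. This is only the intuition, not the authors' exact estimator (which also accounts for the pyramidal condition and full path structure); the function names and the uniform-choice entropy are our assumptions.

```python
import math
from collections import defaultdict

# Simplified illustration of the forward/backward entropy balance: a tree
# is easy to traverse forward but ambiguous backward-to-forward, an inverted
# tree is the opposite, and a chain is unambiguous both ways.

def mean_choice_entropy(degrees):
    """Mean entropy of a uniform choice over each node's outgoing options."""
    hs = [math.log2(d) for d in degrees if d > 0]
    return sum(hs) / len(hs) if hs else 0.0

def hierarchy_balance(edges):
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    nodes = set(out_deg) | set(in_deg)
    h_fwd = mean_choice_entropy(out_deg[n] for n in nodes)  # onward flow
    h_bwd = mean_choice_entropy(in_deg[n] for n in nodes)   # backward reversion
    return h_fwd - h_bwd  # >0 tree-like, <0 inverted tree, 0 balanced

tree = [("r", "a"), ("r", "b")]       # feedforward tree: positive balance
inverted = [("a", "r"), ("b", "r")]   # inverted tree: negative balance
chain = [("a", "b"), ("b", "c")]      # linear chain: zero balance
```

The sign separation among the three toy graphs mirrors the paper's classification into hierarchical, antihierarchical, and nonhierarchical structures.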
Jing, Xia; Cimino, James J.
2011-01-01
Objective: To explore new graphical methods for reducing and analyzing large data sets in which the data are coded with a hierarchical terminology. Methods: We use a hierarchical terminology to organize a data set and display it in a graph. We reduce the size and complexity of the data set by considering the terminological structure and the data set itself (using a variety of thresholds) as well as contributions of child level nodes to parent level nodes. Results: We found that our methods can reduce large data sets to manageable size and highlight the differences among graphs. The thresholds used as filters to reduce the data set can be used alone or in combination. We applied our methods to two data sets containing information about how nurses and physicians query online knowledge resources. The reduced graphs make the differences between the two groups readily apparent. Conclusions: This is a new approach to reduce size and complexity of large data sets and to simplify visualization. This approach can be applied to any data sets that are coded with hierarchical terminologies. PMID:22195119
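The reduction idea (aggregate the contributions of child-level terms into parent-level terms, then filter with a threshold) can be sketched as follows; the propagation rule and threshold semantics are our assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: roll counts coded at leaf terms of a hierarchical terminology up
# to their ancestors, then drop nodes below a threshold to shrink the graph.

def rollup(counts, parent):
    """counts: {term: observed count}; parent: {term: parent term or None}."""
    total = dict(counts)
    for term, n in counts.items():
        p = parent[term]
        while p is not None:           # propagate each count up to the root
            total[p] = total.get(p, 0) + n
            p = parent[p]
    return total

def prune(total, threshold):
    """Keep only terms whose aggregated count meets the threshold."""
    return {t: n for t, n in total.items() if n >= threshold}

parent = {"disease": None, "cancer": "disease",
          "melanoma": "cancer", "flu": "disease"}
counts = {"melanoma": 5, "flu": 2}
total = rollup(counts, parent)   # parents accumulate their children's counts
small = prune(total, 3)          # low-count branches drop out of the graph
```

Raising or lowering the threshold is the "filter" the abstract describes: it trades graph size against detail, and the same rollup applies to any hierarchically coded data set.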
Predictive Feedback Can Account for Biphasic Responses in the Lateral Geniculate Nucleus
Jehee, Janneke F. M.; Ballard, Dana H.
2009-01-01
Biphasic neural response properties, where the optimal stimulus for driving a neural response changes from one stimulus pattern to the opposite stimulus pattern over short periods of time, have been described in several visual areas, including lateral geniculate nucleus (LGN), primary visual cortex (V1), and middle temporal area (MT). We describe a hierarchical model of predictive coding and simulations that capture these temporal variations in neuronal response properties. We focus on the LGN-V1 circuit and find that after training on natural images the model exhibits the brain's LGN-V1 connectivity structure, in which the structure of V1 receptive fields is linked to the spatial alignment and properties of center-surround cells in the LGN. In addition, the spatio-temporal response profile of LGN model neurons is biphasic in structure, resembling the biphasic response structure of neurons in cat LGN. Moreover, the model displays a specific pattern of influence of feedback, where LGN receptive fields that are aligned over a simple cell receptive field zone of the same polarity decrease their responses while neurons of opposite polarity increase their responses with feedback. This phase-reversed pattern of influence was recently observed in neurophysiology. These results corroborate the idea that predictive feedback is a general coding strategy in the brain. PMID:19412529
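The hierarchical predictive coding dynamics this model family builds on (Rao and Ballard's scheme of feedback predictions and feedforward error signals) can be sketched in a few lines; the dimensions, learning rate, and noiseless input here are assumptions of the demo, not the paper's trained LGN-V1 model.

```python
import numpy as np

# Minimal Rao-Ballard-style predictive coding loop: a higher level predicts
# the input through generative weights W, and the residual (prediction
# error) drives the update of the latent representation r.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))           # generative weights: 3 causes -> 8 inputs
r_true = np.array([1.0, -0.5, 0.25])  # the causes that generated the input
x = W @ r_true                        # noiseless input for the demo

r = np.zeros(3)                       # higher-level estimate of the causes
lr = 0.01
for _ in range(5000):
    error = x - W @ r                 # feedback prediction vs. actual input
    r = r + lr * (W.T @ error)        # feedforward error corrects the estimate

# With W of full column rank, r converges toward the generating causes.
```

Training W on natural images, rather than fixing it as here, is what yields the LGN-V1 receptive-field structure the abstract reports.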
Sparsey™: event recognition via deep hierarchical sparse distributed codes
Rinkus, Gerard J.
2014-01-01
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns. 
PMID:25566046
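The core SDC property described above, representing similarity as the overlap between small active subsets, can be shown in a toy sketch; the unit indices and code size below are invented for illustration and are not Sparsey's actual parameters or learning algorithm.

```python
# Sketch of sparse distributed coding (SDC): each item is coded by a small
# subset of a mac's units, and the size of the overlap between two codes
# represents their similarity. Illustrative only.

MAC_UNITS = 100   # units in one representational field ("mac")
CODE_SIZE = 7     # size of the active subset coding one item

def overlap(code_a, code_b):
    """Similarity = number of units shared by two codes."""
    return len(code_a & code_b)

# Hypothetical codes: indices drawn from the mac's MAC_UNITS units.
cat   = frozenset({3, 12, 27, 44, 58, 71, 90})
tiger = frozenset({3, 12, 27, 44, 58, 83, 96})   # shares 5 units with cat
truck = frozenset({1, 9, 20, 35, 49, 62, 77})    # disjoint from both
```

Contrast with localism: if each item were one dedicated unit, every pair of codes would have zero overlap and similarity could not be read off the codes themselves.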
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with alternative error models in a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which can result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters.
Furthermore, the result of the hierarchical error model shows that most of model parameters are not seasonal variant except for error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicates that the hierarchical error model has great potential for future streamflow predictions.
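The pooling effect that distinguishes the hierarchical error model from the independent per-month ("seasonally variant") model can be illustrated with a minimal empirical-Bayes shrinkage sketch. This is not the paper's estimator; the prior variance and the simple normal-normal shrinkage formula are illustrative assumptions.

```python
import numpy as np

def shrink_monthly_bias(errors_by_month, prior_var=1.0):
    """Empirical-Bayes shrinkage of per-month error biases toward a
    global mean, mimicking the partial pooling of a hierarchical error
    model (vs. independent monthly estimates). Illustrative only."""
    monthly_means = np.array([np.mean(e) for e in errors_by_month])
    ns = np.array([len(e) for e in errors_by_month])
    obs_var = np.mean([np.var(e) for e in errors_by_month]) + 1e-12
    mu = monthly_means.mean()                       # global prior mean
    w = (ns / obs_var) / (ns / obs_var + 1.0 / prior_var)
    return w * monthly_means + (1 - w) * mu         # partial pooling
```

Each shrunken bias lies between the raw monthly mean and the global mean; months with few samples are pulled more strongly toward the pooled estimate.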
Multicast Routing of Hierarchical Data
NASA Technical Reports Server (NTRS)
Shacham, Nachum
1992-01-01
The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
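The core idea of layered multicast delivery can be sketched in a few lines: with hierarchical source coding, each destination receives as many coding layers as the bottleneck capacity along its path from the source allows. The tree, capacities, and capacity units here are hypothetical, not from the paper.

```python
from collections import deque

def deliverable_layers(tree, capacities, source, n_layers):
    """For a multicast tree (adjacency dict) with per-edge capacities
    expressed in 'layers', compute how many hierarchical coding layers
    each node can receive: the bottleneck capacity on its path from
    the source, capped by the number of layers the source emits."""
    layers = {source: n_layers}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in tree[u]:
            if v not in layers:
                layers[v] = min(layers[u], capacities[frozenset((u, v))])
                q.append(v)
    return layers
```

A recipient behind a 1-layer link gets only the base layer, while better-connected recipients decode additional enhancement layers for higher signal quality.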
2017-09-01
This dissertation explores the efficacy of statistical post-processing methods downstream of dynamical model components, using a hierarchical multivariate Bayesian approach. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.
Neural underpinnings of music: the polyrhythmic brain.
Vuust, Peter; Gebauer, Line K; Witek, Maria A G
2014-01-01
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has the remarkable ability to move our minds and bodies. Why do certain rhythms make us want to tap our feet, bop our heads or even get up and dance? And how does the brain process rhythmically complex rhythms during our experiences of music? In this chapter, we describe some common forms of rhythmic complexity in music and propose that the theory of predictive coding can explain how rhythm and rhythmic complexity are processed in the brain. We also consider how this theory may reveal why we feel so compelled by rhythmic tension in music. First, musical-theoretical and neuroscientific frameworks of rhythm are presented, in which rhythm perception is conceptualized as an interaction between what is heard ('rhythm') and the brain's anticipatory structuring of music ('the meter'). Second, three different examples of tension between rhythm and meter in music are described: syncopation, polyrhythm and groove. Third, we present the theory of predictive coding of music, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Fourth, empirical studies of neural and behavioral effects of syncopation, polyrhythm and groove will be reported, and we propose how these studies can be seen as special cases of the predictive coding theory. Finally, we argue that musical rhythm exploits the brain's general principles of anticipation and propose that pleasure from musical rhythm may be a result of such anticipatory mechanisms.
Hierarchical surface code for network quantum computing with modules of arbitrary size
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2016-10-01
The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Rao, Yunqing; Qi, Dezhong; Li, Jinling
2013-01-01
For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for a better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and avoid premature local convergence, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparisons show that the presented approach is quite effective for the considered problem. PMID:24489491
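Adaptive crossover and mutation probabilities of the kind the abstract mentions are commonly computed in the style of Srinivas and Patnaik: fit individuals are protected with low rates, poor ones are disrupted with high rates. The constants k1..k4 below are illustrative, and this is a generic scheme rather than the paper's exact formula.

```python
def adaptive_rates(f_parent, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adaptive crossover (pc) and mutation (pm) probabilities: rates
    fall linearly to zero as an individual's fitness approaches the
    population maximum; below-average individuals get fixed high rates.
    Constants are illustrative, not taken from the paper."""
    span = max(f_max - f_avg, 1e-12)
    if f_parent >= f_avg:
        pc = k1 * (f_max - f_parent) / span
        pm = k2 * (f_max - f_parent) / span
    else:
        pc, pm = k3, k4
    return pc, pm
```

For example, the best individual in the population receives pc = pm = 0, so it survives unchanged, while a below-average individual is crossed over with probability k3.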
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
MARC Coding of DDC for Subject Retrieval.
ERIC Educational Resources Information Center
Wajenberg, Arnold S.
1983-01-01
Recommends an expansion of MARC codes for decimal class numbers to enhance automated subject retrieval. Five values for a second indicator and two new subfields are suggested for encoding hierarchical relationships among decimal class numbers. Additional subfields are suggested to enhance retrieval through analysis of synthesized numbers in…
A principle of economy predicts the functional architecture of grid cells
Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay
2015-01-01
Grid cells in the brain respond when an animal occupies a periodic lattice of ‘grid fields’ during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths. DOI: http://dx.doi.org/10.7554/eLife.08362.001 PMID:26335200
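The paper's geometric-progression prediction can be checked with simple arithmetic: the idealized two-dimensional scale ratio is √e ≈ 1.6487, and the number of modules needed follows from the ratio of the range to the target resolution. The one-dimensional module count below is an illustrative simplification of the paper's analysis.

```python
import math

def modules_needed(range_L, resolution, ratio):
    """Number of grid modules when scales form a geometric progression
    with the given ratio, covering range_L down to the target
    resolution (idealized 1-D count, for illustration only)."""
    return math.ceil(math.log(range_L / resolution) / math.log(ratio))

# The paper's idealized 2-D prediction for the adjacent-scale ratio:
ideal_2d_ratio = math.sqrt(math.e)   # ≈ 1.6487, within the measured 1.4–1.7 band
```

With a hypothetical range-to-resolution ratio of 1000, a ratio of √e implies 14 modules, versus 7 for the larger (idealized 1-D) ratio e.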
A hierarchical clustering methodology for the estimation of toxicity.
Martin, Todd M; Harten, Paul; Venkatapathy, Raghuraman; Das, Shashikala; Young, Douglas M
2008-01-01
A quantitative structure-activity relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural similarity is defined in terms of 2-D physicochemical descriptors (such as connectivity and E-state indices). A genetic algorithm-based technique is used to generate statistically valid QSAR models for each cluster (using the pool of descriptors described above). The toxicity for a given query compound is estimated using the weighted average of the predictions from the closest cluster from each step in the hierarchical clustering, assuming that the compound is within the domain of applicability of the cluster. The hierarchical clustering methodology was tested using a Tetrahymena pyriformis acute toxicity data set containing 644 chemicals in the training set and with two prediction sets containing 339 and 110 chemicals. The results from the hierarchical clustering methodology were compared to the results from several different QSAR methodologies.
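The final combination step described above can be sketched compactly: each hierarchy level contributes the prediction of its closest cluster, and the per-level predictions are averaged with weights. The inverse-distance weighting below is an illustrative assumption; the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_toxicity_estimate(preds, dists):
    """Combine per-level cluster predictions for a query compound as a
    weighted average, weighting each hierarchy level's closest-cluster
    prediction by inverse distance in descriptor space (illustrative
    weighting, not necessarily the paper's)."""
    preds, dists = np.asarray(preds, float), np.asarray(dists, float)
    w = 1.0 / (dists + 1e-12)
    return float(np.sum(w * preds) / np.sum(w))
```

Equidistant clusters contribute equally; a much closer cluster dominates the estimate.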
The Discourse of Making Amends: A Grammar of Remedial Interchanges.
ERIC Educational Resources Information Center
Walton, Marsha D.
Narrative observations were made of remedial interchanges occurring among school children (K-4) in open classrooms. Transcripts of interchanges were typed move by move and coded according to a hierarchical coding scheme (remedy, defiance, no response, relief, ending, and ambiguous). The interchanges of the kindergarteners and first graders were…
Hierarchical models for informing general biomass equations with felled tree data
Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke
2015-01-01
We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...
Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael
2013-03-27
Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.
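The "multiple simultaneous prediction errors" idea maps directly onto temporal-difference errors computed at two levels of a task hierarchy. The values, rewards, and discount factors below are illustrative placeholders, not quantities from the study.

```python
def td_error(r, v_next, v, gamma=0.95):
    """One-step temporal-difference (reward prediction) error."""
    return r + gamma * v_next - v

# Two separable errors in a hierarchical task: a low-level error for the
# current action and a high-level error at subtask completion. All
# numbers are hypothetical, purely for illustration.
low_pe = td_error(r=0.0, v_next=0.6, v=0.5)               # action level
high_pe = td_error(r=1.0, v_next=0.0, v=0.8, gamma=1.0)   # subtask level
```

The two errors are computed from different value functions and can coincide in time while remaining uncorrelated, which is the property the neuroimaging analysis exploits.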
Performance optimization of Qbox and WEST on Intel Knights Landing
NASA Astrophysics Data System (ADS)
Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois
We present the optimization of electronic structure codes Qbox and WEST targeting the Intel®Xeon Phi™processor, codenamed Knights Landing (KNL). Qbox is an ab-initio molecular dynamics code based on plane wave density functional theory (DFT) and WEST is a post-DFT code for excited state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g. hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road-map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures and prediction of their electronic and thermal properties for use in solar and thermal energy conversion device. This work was supported by MICCoM, as part of Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.
Apodaca, Timothy R; Borsari, Brian; Jackson, Kristina M; Magill, Molly; Longabaugh, Richard; Mastroleo, Nadine R; Barnett, Nancy P
2014-09-01
Within-session client language that represents a movement toward behavior change (change talk) has been linked to better treatment outcomes in the literature on motivational interviewing (MI). There has been somewhat less study of the impact of client language against change (sustain talk) on outcomes following an MI session. This study examined the role of both client change talk and sustain talk, as well as therapist language, occurring during a brief motivational intervention (BMI) session with college students who had violated college alcohol policy (N = 92). Audiotapes of these sessions were coded using a therapy process coding system. A series of hierarchical regressions were used to examine the relationships among therapist MI-consistent and MI-inconsistent language, client change talk and sustain talk, as well as global measures of relational variables, and drinking outcomes. Contrary to prior research, sustain talk, but not change talk, predicted poorer alcohol use outcomes following the BMI at 3- and 12-month follow-up assessments. Higher levels of client self-exploration during the session also predicted improved drinking outcomes. Therapist measures of MI-consistent and MI-inconsistent language, and global measures of therapist acceptance and MI spirit were unrelated to client drinking outcomes. Results suggest that client sustain talk and self-exploration during the session play an important role in determining drinking outcomes among mandated college students receiving a BMI addressing alcohol use.
Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga
2018-05-01
The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or if the characters are just mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there is no common agreement to either code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding allows to test whether 1) several characters were linked resulting in a structure described as a complex character or trait or 2) independently evolving characters resulted in the configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis, and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to unthread fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.
NASA Astrophysics Data System (ADS)
Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.; Dominic, David F.; Freedman, Vicky L.; Scheibe, Timothy D.; Lunt, Ian A.
2010-04-01
A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the kilometer scale to the centimeter scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in part 1 of this paper. In part 2 (Guin et al., 2010), models generated by the code are presented and evaluated.
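A toy version of the multiscale assignment shows the pattern: coarse blocks get a sedimentary unit type, and each fine cell draws log-permeability from that unit type's distribution. The two-level lattice, block size, and log-permeability parameters are illustrative assumptions, far simpler than the code's km-to-cm hierarchy.

```python
import numpy as np

def stratal_lattice(shape=(4, 4, 4), block=2, logk_means=(-12.0, -10.0),
                    logk_std=0.5, seed=0):
    """Toy two-level hierarchical lattice: coarse blocks are assigned a
    unit type, and each fine cell draws log-permeability from that
    type's distribution (illustrative of the multiscale assignment)."""
    rng = np.random.default_rng(seed)
    coarse = tuple(s // block for s in shape)
    unit = rng.integers(0, len(logk_means), size=coarse)
    # Expand each coarse block to fine resolution.
    unit_fine = np.kron(unit, np.ones((block,) * 3, dtype=int))
    # Permeability statistics are defined only at the lowest level.
    logk = rng.normal(np.take(logk_means, unit_fine), logk_std)
    return unit_fine, logk
```

The resulting cubic lattice of permeabilities could be fed directly to a flow simulator, mirroring how the digital models are used in aquifer or reservoir codes.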
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
Stojanova, Daniela; Ceci, Michelangelo; Malerba, Donato; Dzeroski, Saso
2013-09-26
Ontologies and catalogs of gene functions, such as the Gene Ontology (GO) and MIPS-FUN, assume that functional classes are organized hierarchically, that is, general functions include more specific ones. This has recently motivated the development of several machine learning algorithms for gene function prediction that leverage this hierarchical organization, where instances may belong to multiple classes. In addition, it is possible to exploit relationships among examples, since it is plausible that related genes tend to share functional annotations. Although these relationships have been identified and extensively studied in the area of protein-protein interaction (PPI) networks, they have not received much attention in hierarchical and multi-class gene function prediction. Relations between genes introduce autocorrelation in functional annotations and violate the assumption that instances are independently and identically distributed (i.i.d.), which underlies most machine learning algorithms. Although the explicit consideration of these relations brings additional complexity to the learning process, we expect substantial benefits in predictive accuracy of learned classifiers. This article demonstrates the benefits (in terms of predictive accuracy) of considering autocorrelation in multi-class gene function prediction. We develop a tree-based algorithm for considering network autocorrelation in the setting of Hierarchical Multi-label Classification (HMC). We empirically evaluate the proposed algorithm, called NHMC (Network Hierarchical Multi-label Classification), on 12 yeast datasets using each of the MIPS-FUN and GO annotation schemes and exploiting 2 different PPI networks. The results clearly show that taking autocorrelation into account improves the predictive performance of the learned models for predicting gene function.
Our newly developed method for HMC takes into account network information in the learning phase: When used for gene function prediction in the context of PPI networks, the explicit consideration of network autocorrelation increases the predictive performance of the learned models. Overall, we found that this holds for different gene features/ descriptions, functional annotation schemes, and PPI networks: Best results are achieved when the PPI network is dense and contains a large proportion of function-relevant interactions.
Materials with structural hierarchy
NASA Technical Reports Server (NTRS)
Lakes, Roderic
1993-01-01
The role of structural hierarchy in determining bulk material properties is examined. Dense hierarchical materials are discussed, including composites and polycrystals, polymers, and biological materials. Hierarchical cellular materials are considered, including cellular solids and the prediction of strength and stiffness in hierarchical cellular materials.
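Stiffness prediction in hierarchical cellular materials is often built on Gibson-Ashby-type scaling, where the effective modulus of each level serves as the "solid" modulus of the level above. The scaling constant and exponent below are the textbook open-cell values, used here as an illustrative assumption rather than the paper's specific model.

```python
def hierarchical_stiffness(E_solid, rel_density, n_levels, C=1.0, exponent=2):
    """Gibson-Ashby-type scaling E_eff ~ C * E_s * (rho/rho_s)^2 applied
    recursively: each hierarchical level treats the previous level's
    effective modulus as its solid modulus. Constants illustrative."""
    E = E_solid
    for _ in range(n_levels):
        E = C * E * rel_density ** exponent
    return E
```

For a hypothetical base material with E = 200 GPa and relative density 0.1 per level, one level of structure yields 2 GPa and two levels yield 20 MPa, illustrating the steep stiffness penalty that hierarchy must trade against other properties.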
A Coding Scheme for Analysing Problem-Solving Processes of First-Year Engineering Students
ERIC Educational Resources Information Center
Grigg, Sarah J.; Benson, Lisa C.
2014-01-01
This study describes the development and structure of a coding scheme for analysing solutions to well-structured problems in terms of cognitive processes and problem-solving deficiencies for first-year engineering students. A task analysis approach was used to assess students' problem solutions using the hierarchical structure from a…
ERIC Educational Resources Information Center
Wyne, Mudasser F.
2010-01-01
It is hard to define a single set of ethics that will cover an entire computer users community. In this paper, the issue is addressed in reference to code of ethics implemented by various professionals, institutes and organizations. The paper presents a higher level model using hierarchical approach. The code developed using this approach could be…
Prediction of in vitro and in vivo oestrogen receptor activity using hierarchical clustering
In this study, hierarchical clustering classification models were developed to predict in vitro and in vivo oestrogen receptor (ER) activity. Classification models were developed for binding, agonist, and antagonist in vitro ER activity and for mouse in vivo uterotrophic ER bindi...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemens, Noel
This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First, component models were validated with DNS and literature data in simplified configurations; this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
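As a hedged illustration of the extraction problem this abstract describes (not the authors' drift-aware algorithm), the sketch below assigns toy NAL units to quality layers by a naive distortion-per-bit ranking and then extracts whole layers under a bit budget. The unit sizes and distortion values are invented, and drift between units is deliberately ignored:

```python
# Hypothetical sketch: prioritize NAL units by distortion saved per bit,
# then extract greedily until the bit budget is exhausted. The paper's
# quality-layer assignment accounts for drift; this toy version assumes
# the per-unit distortion values are independent.

def assign_quality_layers(units, num_layers):
    """units: list of (size_bits, distortion_if_dropped). Returns a
    quality-layer id per unit: 0 = most important."""
    order = sorted(range(len(units)),
                   key=lambda i: units[i][1] / units[i][0],
                   reverse=True)
    layers = [0] * len(units)
    per_layer = max(1, len(units) // num_layers)
    for rank, i in enumerate(order):
        layers[i] = min(rank // per_layer, num_layers - 1)
    return layers

def extract(units, layers, budget_bits):
    """Keep whole layers in priority order while the budget allows."""
    kept = []
    for layer in range(max(layers) + 1):
        ids = [i for i, l in enumerate(layers) if l == layer]
        cost = sum(units[i][0] for i in ids)
        if cost > budget_bits:
            break
        budget_bits -= cost
        kept.extend(ids)
    return sorted(kept)

units = [(100, 9.0), (200, 1.0), (50, 4.0), (400, 2.0)]
layers = assign_quality_layers(units, num_layers=2)
kept = extract(units, layers, budget_bits=200)
```

Ranking by distortion-per-bit is the simplest prioritization; the paper's gain comes precisely from replacing the independence assumption with a drift-aware distortion estimate.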
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
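The two-level idea, Gaussian-process smoothing at the lower level with a linear transition rule across segments, can be sketched in toy form. Everything below (the RBF kernel, the synthetic segments, the scalar transition fitted by least squares) is an assumption for illustration and is far simpler than the paper's framework:

```python
import numpy as np

# Toy sketch (not the paper's estimator): each segment of irregularly
# sampled values is smoothed with a Gaussian-process posterior mean
# (RBF kernel), and a scalar linear rule fitted across segment
# summaries predicts the next segment's level.

def gp_mean(t_obs, y_obs, t_query, length=2.0, noise=0.1):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))
    return k(t_query, t_obs) @ np.linalg.solve(K, y_obs)

rng = np.random.default_rng(0)
segments = []
for s in range(5):
    t = np.sort(rng.uniform(0, 10, size=8))        # irregular sampling times
    y = 1.0 + 0.5 * s + 0.05 * rng.normal(size=8)  # slowly rising level
    segments.append(gp_mean(t, y, np.linspace(0, 10, 20)).mean())

x = np.array(segments)
# Fit x[i+1] ~ a * x[i] + b by least squares (the "transition" level).
A = np.stack([x[:-1], np.ones(4)], axis=1)
a, b = np.linalg.lstsq(A, x[1:], rcond=None)[0]
next_level = a * x[-1] + b
```

The lower level absorbs the irregular sampling; the upper level only ever sees one regularized summary per segment, which is the structural point the abstract makes.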
ERIC Educational Resources Information Center
Putwain, Dave; Deveney, Carolyn
2009-01-01
The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load balance with respect to particles. However, our results demonstrate that it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid points within the same cache line by reordering the grid indexing. This alignment produces a 25% savings in cache hits for a 4-by-4 cache.
A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
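Under strong simplifying assumptions (1D positions, two cells), the dual-pointer repair of a partially sorted particle array might look like the following sketch; the thesis's actual scheme operates on full grids and is more involved:

```python
# Hypothetical illustration of the dual-pointer idea: keep the particle
# array partitioned so particles in the left cell of the domain precede
# those in the right cell, repairing the order in one pass after a push.

def repair_partition(pos, boundary):
    """Two pointers sweep inward, swapping misplaced particles so that
    all pos[i] < boundary come first. Returns the split index."""
    lo, hi = 0, len(pos) - 1
    while lo <= hi:
        if pos[lo] < boundary:
            lo += 1
        elif pos[hi] >= boundary:
            hi -= 1
        else:
            pos[lo], pos[hi] = pos[hi], pos[lo]
            lo += 1
            hi -= 1
    return lo

pos = [0.1, 0.9, 0.4, 0.6, 0.2, 0.8]
split = repair_partition(pos, boundary=0.5)
```

Because only a few particles cross a cell boundary per timestep, this repair touches far fewer elements than a full sort, which is the advantage the abstract attributes to keeping locations "automatically partially sorted".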
Hierarchical modeling of molecular energies using a deep neural network
NASA Astrophysics Data System (ADS)
Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton
2018-06-01
We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves state-of-the-art performance on a dataset of 131k ground state organic molecules and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.
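The hierarchical-sum decomposition can be shown in miniature. The two-layer network below is an invented stand-in for HIP-NN, meant only to show an energy predicted as a sum of per-level terms, with the term list preserved so each level's contribution can be inspected:

```python
import numpy as np

# Minimal sketch of the hierarchical-sum idea (not the published HIP-NN
# architecture): each layer of a small network emits its own energy
# term, and the predicted energy is the sum over those terms.

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))
r1, r2 = rng.normal(size=8), rng.normal(size=8)

def predict_energy(x):
    h1 = np.tanh(W1 @ x)          # first representation level
    h2 = np.tanh(W2 @ h1)         # second, more abstract level
    terms = [r1 @ h1, r2 @ h2]    # one energy readout per level
    return sum(terms), terms

x = rng.normal(size=4)
total, terms = predict_energy(x)
```

In the paper, the decay of the higher-order terms is what signals regions of model uncertainty; this sketch only reproduces the additive structure that makes such a diagnostic possible.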
Lopez, Michael J; Schuckers, Michael
2017-05-01
Roughly 14% of regular season National Hockey League games since the 2005-06 season have been decided by a shoot-out, and the resulting allocation of points has impacted play-off races each season. But despite interest from fans, players and league officials, there is little in the way of published research on team or individual shoot-out performance. This manuscript attempts to fill that void. We present both generalised linear mixed model and Bayesian hierarchical model frameworks to model shoot-out outcomes, with results suggesting that there are (i) small but statistically significant talent gaps between shooters, (ii) marginal differences in performance among netminders and (iii) few, if any, predictors of player success after accounting for individual talent. We also provide a resampling strategy to highlight a selection bias with respect to shooter assignment, in which coaches choose their most skilled offensive players early in shoot-out rounds and are less likely to select players with poor past performances. Finally, given that per-shot data for shoot-outs do not currently exist in a single location for public use, we provide both our data and source code for other researchers interested in studying shoot-out outcomes.
Development and evaluation of a new taxonomy of mobility-related assistive technology devices.
Shoemaker, Laura L; Lenker, James A; Fuhrer, Marcus J; Jutai, Jeffrey W; Demers, Louise; DeRuyter, Frank
2010-10-01
This article reports on the development of a new taxonomy for mobility-related assistive technology devices. A prototype taxonomy was created based on the extant literature. Five mobility device experts were engaged in a modified Delphi process to evaluate and refine the taxonomy. Multiple iterations of expert feedback and revision yielded consensual agreement on the structure and terminology of a new mobility device taxonomy. The taxonomy uses a hierarchical framework to classify ambulation aids and wheeled mobility devices, including their key features that impact mobility. Five attributes of the new taxonomy differentiate it from previous mobility-related device classifications: (1) hierarchical structure, (2) primary device categories are grouped based on their intended mobility impact, (3) comprehensive inclusion of technical features, (4) a capacity to assimilate reimbursement codes, and (5) availability of a detailed glossary. The taxonomy is intended to support assistive technology outcomes research. The taxonomy will enable researchers to capture mobility-related assistive technology device interventions with precision and provide a common terminology that will allow comparisons among studies. The prominence of technical features within the new taxonomy will hopefully promote research that helps clinicians predict how devices will perform, thus aiding clinical decision making and supporting funding recommendations.
Leveraging Hierarchical Population Structure in Discrete Association Studies
Carlson, Jonathan; Kadie, Carl; Mallal, Simon; Heckerman, David
2007-01-01
Population structure can confound the identification of correlations in biological data. Such confounding has been recognized in multiple biological disciplines, resulting in a disparate collection of proposed solutions. We examine several methods that correct for confounding on discrete data with hierarchical population structure and identify two distinct confounding processes, which we call coevolution and conditional influence. We describe these processes in terms of generative models and show that these generative models can be used to correct for the confounding effects. Finally, we apply the models to three applications: identification of escape mutations in HIV-1 in response to specific HLA-mediated immune pressure, prediction of coevolving residues in an HIV-1 peptide, and a search for genotypes that are associated with bacterial resistance traits in Arabidopsis thaliana. We show that coevolution is a better description of confounding in some applications and conditional influence is better in others. That is, we show that no single method is best for addressing all forms of confounding. Analysis tools based on these models are available on the internet as both web based applications and downloadable source code at http://atom.research.microsoft.com/bio/phylod.aspx. PMID:17611623
A hierarchical anatomical classification schema for prediction of phenotypic side effects
Wadhwa, Somin; Gupta, Aishwarya; Dokania, Shubham; Kanji, Rakesh; Bagler, Ganesh
2018-01-01
Prediction of adverse drug reactions is an important problem in drug discovery endeavors which can be addressed with data-driven strategies. SIDER is one of the most reliable and frequently used datasets for identification of key features as well as building machine learning models for side effects prediction. The inherently unbalanced nature of this data presents with a difficult multi-label multi-class problem towards prediction of drug side effects. We highlight the intrinsic issue with SIDER data and methodological flaws in relying on performance measures such as AUC while attempting to predict side effects. We argue for the use of metrics that are robust to class imbalance for evaluation of classifiers. Importantly, we present a ‘hierarchical anatomical classification schema’ which aggregates side effects into organs, sub-systems, and systems. With the help of a weighted performance measure, using 5-fold cross-validation we show that this strategy facilitates biologically meaningful side effects prediction at different levels of anatomical hierarchy. By implementing various machine learning classifiers we show that the Random Forest model yields the best classification accuracy at each level of coarse-graining. The manually curated, hierarchical schema for side effects can also serve as the basis of future studies towards prediction of adverse reactions and identification of key features linked to specific organ systems. Our study provides a strategy for hierarchical classification of side effects rooted in the anatomy and can pave the way for calibrated expert systems for multi-level prediction of side effects. PMID:29494708
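The coarse-graining step can be sketched as a simple roll-up over an assumed three-entry hierarchy; the label names below are illustrative, not taken from SIDER:

```python
# Hypothetical coarse-graining sketch: fine-grained side-effect labels
# roll up to organ and system levels, so a prediction counts at a
# coarser level whenever any of its descendants is predicted.

hierarchy = {  # side effect -> (organ, system)
    "nausea": ("stomach", "gastrointestinal"),
    "vomiting": ("stomach", "gastrointestinal"),
    "arrhythmia": ("heart", "cardiovascular"),
}

def roll_up(predicted, level):
    idx = 0 if level == "organ" else 1
    return {hierarchy[p][idx] for p in predicted}

predicted = {"nausea"}
organs = roll_up(predicted, "organ")
systems = roll_up(predicted, "system")
```

Evaluating at each roll-up level separately is what lets a classifier that confuses nausea with vomiting still score correctly at the organ and system levels.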
ERIC Educational Resources Information Center
Qasem, Mousa; Foote, Rebecca
2010-01-01
This study tested the predictions of the revised hierarchical (RHM) and morphological decomposition (MDM) models with Arabic-English bilinguals. The RHM (Kroll & Stewart, 1994) predicts that the amount of activation of first language translation equivalents is negatively correlated with second language (L2) proficiency. The MDM (Frost, Forster, &…
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
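The retrieval side of binary video codes can be sketched under a generic sign-binarization assumption; the training objective that produces good embeddings is the paper's contribution and is not reproduced here, so the random vectors below merely stand in for learned embeddings:

```python
import numpy as np

# Sketch of the retrieval step only: real-valued video embeddings are
# binarized by sign, and retrieval ranks database items by Hamming
# distance to the query code.

def binarize(z):
    return (z > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(7)
db = [binarize(rng.normal(size=32)) for _ in range(100)]

query = db[42].copy()
query[0] ^= 1  # simulate a near-duplicate query: one flipped bit
ranked = sorted(range(len(db)), key=lambda i: hamming(query, db[i]))
```

Hamming distance over packed bits is why binary codes make large-scale video retrieval cheap; the hierarchical auto-encoder's job is to make codes whose Hamming neighborhoods match semantic neighborhoods.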
NASA Astrophysics Data System (ADS)
Itoh, Masato; Hagimori, Yuki; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
In this study, we apply hierarchical model predictive control to an omni-directional mobile vehicle and improve its tracking performance. We consider an independent four-wheel driving/steering vehicle (IFWDS) equipped with four coaxial steering mechanisms (CSMs), each a special arrangement of two steering joints on the same axis. In a previous study on an IFWDS with ideal steering, we proposed model predictive tracking control; however, that method did not consider the constraints of the coaxial steering mechanism, which cause steering delay. We also proposed model predictive steering control that accounts for these constraints. In this study, we propose a hierarchical system combining the two methods for the IFWDS. An upper controller, which deals with vehicle kinematics, runs model predictive tracking control, and a lower controller, which considers the constraints of the coaxial steering mechanism, runs model predictive steering control that tracks the steering angle predicted by the upper controller. We verify the superiority of this method by comparing it with the previous method.
Khazraee, S Hadi; Johnson, Valen; Lord, Dominique
2018-08-01
The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tails of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models: models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients).
Copyright © 2018. Published by Elsevier Ltd.
Friston, Karl J.; Mattingley, Jason B.; Roepstorff, Andreas; Garrido, Marta I.
2014-01-01
Detecting the location of salient sounds in the environment rests on the brain's ability to use differences in sounds arriving at both ears. Functional neuroimaging studies in humans indicate that the left and right auditory hemispaces are coded asymmetrically, with a rightward attentional bias that reflects spatial attention in vision. Neuropsychological observations in patients with spatial neglect have led to the formulation of two competing models: the orientation bias and right-hemisphere dominance models. The orientation bias model posits a symmetrical mapping between one side of the sensorium and the contralateral hemisphere, with mutual inhibition of the ipsilateral hemisphere. The right-hemisphere dominance model introduces a functional asymmetry in the brain's coding of space: the left hemisphere represents the right side, whereas the right hemisphere represents both sides of the sensorium. We used Dynamic Causal Modeling of effective connectivity and Bayesian model comparison to adjudicate between these alternative network architectures, based on human electroencephalographic data acquired during an auditory location oddball paradigm. Our results support a hemispheric asymmetry in a frontoparietal network that conforms to the right-hemisphere dominance model. We show that, within this frontoparietal network, forward connectivity increases selectively in the hemisphere contralateral to the side of sensory stimulation. We interpret this finding in light of hierarchical predictive coding as a selective increase in attentional gain, which is mediated by feedforward connections that carry precision-weighted prediction errors during perceptual inference. This finding supports the disconnection hypothesis of unilateral neglect and has implications for theories of its etiology. PMID:24695717
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
Hierarchical time series bottom-up approach for forecast the export value in Central Java
NASA Astrophysics Data System (ADS)
Mahkya, D. A.; Ulama, B. S.; Suhartono
2017-10-01
The purpose of this study is to obtain the best model for forecasting the export value of Central Java using hierarchical time series. The export value is an injection variable in the economy of a country: if a country's export value increases, its economy grows. Appropriate modeling is therefore needed to predict the export value, especially in Central Java. Export values in Central Java are grouped into 21 commodities, each with a different pattern. One applicable time series approach is the hierarchical one; here, the bottom-up variant of hierarchical time series is used. To forecast the individual series at all levels, Autoregressive Integrated Moving Average (ARIMA), Radial Basis Function Neural Network (RBFNN), and hybrid ARIMA-RBFNN models are used. For the selection of the best models, the symmetric mean absolute percentage error (sMAPE) is used. The analysis shows that, for the export value of Central Java, the bottom-up approach with hybrid ARIMA-RBFNN modeling can be used for long-term predictions, while for short- and medium-term predictions the bottom-up approach with RBFNN modeling can be used. Overall, the bottom-up approach with RBFNN modeling gives the best result.
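A minimal bottom-up sketch, with a naive drift forecast standing in for the ARIMA/RBFNN models named above, and two invented commodity series standing in for the 21 real ones:

```python
# Bottom-up hierarchical forecasting sketch: forecast each commodity
# series separately, then obtain the total-export forecast by summing
# the bottom-level forecasts.

def drift_forecast(series, steps):
    """Naive drift model: extend the average historical slope."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + slope * (h + 1) for h in range(steps)]

commodities = {
    "textiles": [10.0, 11.0, 12.0, 13.0],
    "furniture": [5.0, 5.5, 6.0, 6.5],
}
bottom = {k: drift_forecast(v, steps=2) for k, v in commodities.items()}
total = [sum(f[h] for f in bottom.values()) for h in range(2)]
```

Summing bottom forecasts guarantees the aggregate forecast is coherent with the commodity-level ones, which is the defining property of the bottom-up approach.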
Heil, Lieke; Kwisthout, Johan; van Pelt, Stan; van Rooij, Iris; Bekkering, Harold
2018-01-01
Evidence is accumulating that our brains process incoming information using top-down predictions. If lower level representations are correctly predicted by higher level representations, this enhances processing. However, if they are incorrectly predicted, additional processing is required at higher levels to "explain away" prediction errors. Here, we explored the potential nature of the models generating such predictions. More specifically, we investigated whether a predictive processing model with a hierarchical structure and causal relations between its levels is able to account for the processing of agent-caused events. In Experiment 1, participants watched animated movies of "experienced" and "novice" bowlers. The results are in line with the idea that prediction errors at a lower level of the hierarchy (i.e., the outcome of how many pins fell down) slow down reporting of information at a higher level (i.e., which agent was throwing the ball). Experiments 2 and 3 suggest that this effect is specific to situations in which the predictor is causally related to the outcome. Overall, the study supports the idea that a hierarchical predictive processing model can account for the processing of observed action outcomes and that the predictions involved are specific to cases where action outcomes can be predicted based on causal knowledge.
NASA Astrophysics Data System (ADS)
Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.
2018-02-01
The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model in every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" based machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
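The per-branching-point scheme can be sketched with toy local classifiers: nearest-centroid models standing in for the paper's Random Forests, and invented labels and coordinates standing in for the real H/LC data. A top model picks the coarse class, then a local model within that branch picks the fine class:

```python
import numpy as np

# Sketch of hierarchical classification with one local model per
# branching point of the thematic tree (two levels, two branches).

def centroid_model(X, y):
    """Nearest-centroid classifier; a stand-in for a local RF."""
    cents = {l: X[[i for i, t in enumerate(y) if t == l]].mean(axis=0)
             for l in sorted(set(y))}
    return lambda x: min(cents, key=lambda l: np.linalg.norm(x - cents[l]))

# Fine labels nested in coarse ones: forest -> {conifer, broadleaf}.
X = np.array([[0.0, 1.0], [0.2, 0.9], [1.0, 0.1], [0.9, 0.0]])
coarse = ["forest", "forest", "water", "water"]
fine = ["conifer", "broadleaf", "lake", "lake"]

top = centroid_model(X, coarse)
local = {c: centroid_model(X[[i for i, t in enumerate(coarse) if t == c]],
                           [fine[i] for i, t in enumerate(coarse) if t == c])
         for c in set(coarse)}

def predict(x):
    c = top(x)            # decide the branch at the root
    return c, local[c](x) # then decide within that branch

pred = predict(np.array([0.1, 0.8]))
```

The structural constraint is visible here: a sample routed into the "forest" branch can never receive the fine label "lake", which is how the hierarchy suppresses thematically distant misclassifications.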
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamachi, Eiji; Yoshida, Takashi; Kuramae, Hiroyuki; Morimoto, Hideo; Yamaguchi, Toshihiko; Morita, Yusuke
2014-10-06
We developed a two-scale FE analysis procedure based on the crystallographic homogenization method by considering the hierarchical structure of a polycrystal aluminium alloy metal. It can be characterized as the combination of a two-scale structure: the microscopic polycrystal structure and the macroscopic elastic-plastic continuum. The micro polycrystal structure can be modeled as a three-dimensional representative volume element (RVE). The RVE is discretized into 3×3×3 eight-node solid finite elements with 216 crystal orientations. This FE analysis code can predict the deformation, strain, and stress evolutions in wire drawing processes at the macro scale, and further the crystal texture and hardening evolutions at the micro scale. In this study, we analyzed the texture evolution in wire drawing processes with our two-scale FE analysis code under various die drawing angles. We evaluate the texture evolution in the surface and center regions of the wire cross-section and clarify the effects of processing conditions on the texture evolution.
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure for multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. The underlying behavior of the dictionary augmentation scheme for estimating the source statistics is also examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, which would require transmitting a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower-dimensional coding procedures in the scheme. With regard to universal lossy source coders, we propose two distortion functions: the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP: one lossless and two lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing signal distortion, while the images produced with the local minimax distortion show good perceptual fidelity relative to other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
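The dictionary-growing idea that MDIP generalizes can be illustrated in one dimension. Below is a minimal sketch of an LZ78-style incremental parser (the classical special case, not the multidimensional algorithm itself); the function name and output format are illustrative.

```python
# 1-D analogue of the incremental-parsing idea behind MDIP: an LZ78-style
# parser that grows a dictionary of phrases as it scans the source.
def lz78_parse(s):
    """Return the list of (phrase_index, symbol) pairs and the dictionary."""
    dictionary = {"": 0}          # phrase -> index; the empty phrase is index 0
    output, phrase = [], ""
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch          # the longest match continues
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)  # dictionary augmentation
            phrase = ""
    if phrase:                    # flush any leftover matched phrase
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output, dictionary
```

Each emitted pair points at a previously seen phrase plus one new symbol, so the dictionary grows by exactly one entry per coding epoch, mirroring the augmentation step described above.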
Shared neural coding for social hierarchy and reward value in primate amygdala.
Munuera, Jérôme; Rigotti, Mattia; Salzman, C Daniel
2018-03-01
The social brain hypothesis posits that dedicated neural systems process social information. In support of this, neurophysiological data have shown that some brain regions are specialized for representing faces. It remains unknown, however, whether distinct anatomical substrates also represent more complex social variables, such as the hierarchical rank of individuals within a social group. Here we show that the primate amygdala encodes the hierarchical rank of individuals in the same neuronal ensembles that encode the rewards associated with nonsocial stimuli. By contrast, orbitofrontal and anterior cingulate cortices lack strong representations of hierarchical rank while still representing reward values. These results challenge the conventional view that dedicated neural systems process social information. Instead, information about hierarchical rank-which contributes to the assessment of the social value of individuals within a group-is linked in the amygdala to representations of rewards associated with nonsocial stimuli.
Ladd, Benjamin O.; McCrady, Barbara S.
2016-01-01
The current study aimed to examine whether classification of couples in which one partner has an alcohol problem is similar to that reported in the general couples literature. Typologies of couples seeking Alcohol Behavioral Couple Therapy (ABCT) were developed via hierarchical cluster analysis using behavioral codes of couple interactions during their first ABCT session. Four couples types based on in-session behavior were established reliably, labeled Avoider, Validator, Hostile, and Ambivalent-Detached. These couple types resembled couples types found in previous research. Couple type was associated with baseline relationship satisfaction, but not alcohol use. Results suggest heterogeneity in couples with alcohol problems presenting to treatment; further study is needed to investigate the function of alcohol within these different types. PMID:25808432
Consistent prediction of GO protein localization.
Spetale, Flavio E; Arce, Debora; Krsticevic, Flavia; Bulacio, Pilar; Tapia, Elizabeth
2018-05-17
The GO-Cellular Component (GO-CC) ontology provides a controlled vocabulary for the consistent description of the subcellular compartments or macromolecular complexes where proteins may act. Current machine learning-based methods used for the automated GO-CC annotation of proteins suffer from the inconsistency of individual GO-CC term predictions. Here, we present FGGA-CC+, a class of hierarchical graph-based classifiers for the consistent GO-CC annotation of protein-coding genes at the subcellular compartment or macromolecular complex levels. To boost the accuracy of GO-CC predictions, we make use of the protein localization knowledge in the GO-Biological Process (GO-BP) annotations. As a result, FGGA-CC+ classifiers are built from annotation data in both the GO-CC and GO-BP ontologies. Due to their graph-based design, FGGA-CC+ classifiers are fully interpretable and their predictions amenable to expert analysis. Promising results were obtained on protein annotation data from five model organisms. Additionally, successful validation results were accomplished in the annotation of a challenging subset of tandem duplicated genes in the tomato, a non-model organism. Overall, these results suggest that FGGA-CC+ classifiers can indeed be useful for satisfying the huge demand for GO-CC annotation arising from ubiquitous high-throughput sequencing and proteomic projects.
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations.
Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error.
Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered.
Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
Deep neural models for ICD-10 coding of death certificates and autopsy reports in free-text.
Duarte, Francisco; Martins, Bruno; Pinto, Cátia Sousa; Silva, Mário J
2018-04-01
We address the assignment of ICD-10 codes for causes of death by analyzing free-text descriptions in death certificates, together with the associated autopsy reports and clinical bulletins, from the Portuguese Ministry of Health. We leverage a deep neural network that combines word embeddings, recurrent units, and neural attention, for the generation of intermediate representations of the textual contents. The neural network also explores the hierarchical nature of the input data, by building representations from the sequences of words within individual fields, which are then combined according to the sequences of fields that compose the inputs. Moreover, we explore innovative mechanisms for initializing the weights of the final nodes of the network, leveraging co-occurrences between classes together with the hierarchical structure of ICD-10. Experimental results attest to the contribution of the different neural network components. Our best model achieves accuracy scores over 89%, 81%, and 76%, respectively for ICD-10 chapters, blocks, and full-codes. Through examples, we also show that our method can produce interpretable results, useful for public health surveillance. Copyright © 2018 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Zhou, Bo; Konstorum, Anna; Duong, Thao; Tieu, Kinh H.; Wells, William M.; Brown, Gregory G.; Stern, Hal S.; Shahbaba, Babak
2013-01-01
We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model…
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match the specific demands of the sound localization task. This work provides evidence that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
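The sparse-coding inference step at the heart of such models can be sketched with a standard ISTA solver. This is a generic illustration assuming a fixed real-valued dictionary, not the paper's complex-valued hierarchical model; all names and parameter values are illustrative.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Infer sparse coefficients a minimizing 0.5*||x - D a||^2 + lam*||a||_1
    by iterative shrinkage-thresholding (ISTA)."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = a - step * D.T @ (D @ a - x)     # gradient step on the data term
        a = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return a
```

The soft-thresholding step is what drives most coefficients to exactly zero, producing the sparse codes whose statistics are compared with cortical tuning.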
Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo
2018-05-10
Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Patients vary in their response to treatments such as bortezomib, which motivates efforts to identify biomarkers from numerous molecular features and to build predictive models for identifying patients who can benefit from a given treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response in which only responsive and non-responsive groups are considered. It is desirable to analyze the multi-level drug response directly, rather than collapsing it into two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop an ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs a heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to two publicly available MM datasets with a five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response; such models can be more informative than previous approaches. The proposed approach therefore provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies.
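The proportional-odds form commonly used for ordinal logistic models (a standard choice; the paper's exact parameterization may differ) turns a linear predictor and a set of increasing cutpoints into category probabilities:

```python
import math

def ordinal_probs(eta, cutpoints):
    """Category probabilities P(Y = k), k = 0..K-1, under a proportional-odds
    model: P(Y <= k) = logistic(c_k - eta) with increasing cutpoints c_k."""
    cdf = [1.0 / (1.0 + math.exp(-(c - eta))) for c in cutpoints]
    cdf = [0.0] + cdf + [1.0]                 # pad with the two trivial CDF values
    return [cdf[k + 1] - cdf[k] for k in range(len(cutpoints) + 1)]
```

With four cutpoints this yields the five ordered response levels analyzed above; the cutpoint values used here are purely illustrative.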
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-17
PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria have the potential to yield extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.
Software Library: A Reusable Software Issue.
1984-06-01
Keywords: Software Library; Program Library; Reusability; Generator. Abstract (fragment): ... Software Library. A particular example of the Software Library, the Program Library, is described as a prototype of a reusable library. A hierarchical ... programming libraries are described. Finally, non-code products in the Software Library are discussed.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of the proposed coding system remains acceptable at low transmission rates and under poor channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying signal distribution. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on the approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are assumed to be i.i.d. samples from the underlying distribution, the decomposition coefficients obtained from the trained dictionary are well suited for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in convergence speed and computational complexity, and its upper bound on the prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC or HEVC, as well as over existing super-resolution based methods, in rate-distortion performance and visual quality.
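The online update pattern described here, one random sample per iteration with a stochastic gradient step on the selected atom, can be sketched in a toy setting. This is a simplified illustration (1-sparse coding, plain SGD, plain vectors instead of spatio-temporal volumes), not the STOL algorithm itself; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_dictionary_learning(X, n_atoms=8, n_iter=2000, lr=0.1):
    """Toy online dictionary learning: draw one random sample per iteration
    and take a stochastic gradient step on the single best-matching atom."""
    D = rng.normal(size=(X.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    for _ in range(n_iter):
        x = X[rng.integers(len(X))]           # i.i.d. sample from the data
        k = int(np.argmax(np.abs(D.T @ x)))   # greedy 1-sparse atom selection
        coef = D[:, k] @ x                    # coefficient for the chosen atom
        # gradient step on 0.5 * ||x - coef * d_k||^2 with coef held fixed
        D[:, k] += lr * coef * (x - coef * D[:, k])
        D[:, k] /= np.linalg.norm(D[:, k])    # re-normalize the updated atom
    return D
```

Because each step touches only the atom selected by the current sample, the per-iteration cost is independent of the training-set size, which is the practical appeal of online over batch (K-SVD-style) learning.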
Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism
NASA Astrophysics Data System (ADS)
Zender, C. S.; Wang, W.; Vicente, P.
2013-12-01
Big Data is an ugly name for the scientific opportunities and challenges created by the growing wealth of geoscience data. How to weave large, disparate datasets together to best reveal their underlying properties, to exploit their strengths and minimize their weaknesses, to continually aggregate more information than the world knew yesterday and less than we will learn tomorrow? Data analytics techniques (statistics, data mining, machine learning, etc.) can accelerate pattern recognition and discovery. However, researchers must often organize multiple related datasets into a coherent framework prior to analysis. Hierarchical organization permits entire datasets to be stored in nested groups that reflect their intrinsic relationships and similarities. Hierarchical data can be simpler and faster to analyze because operators can be coded to automatically parallelize processing over isomorphic storage units, i.e., groups. The newest generation of netCDF Operators (NCO) embodies this hierarchical approach, while still supporting traditional analysis approaches. We will use NCO to demonstrate the trade-offs involved in processing a prototypical Big Data application (analysis of CMIP5 datasets) using hierarchical and traditional analysis approaches.
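The "parallelize over isomorphic groups" idea can be sketched without NCO: apply one reduction operator to every group of a hierarchically organized dataset in parallel. The nested dict below stands in for netCDF groups; all names and values are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Nested dict standing in for a hierarchically grouped dataset
# (e.g. netCDF4 groups): model -> experiment -> values.
dataset = {
    "modelA": {"historical": [0.1, 0.2, 0.3], "rcp85": [0.5, 0.6]},
    "modelB": {"historical": [0.0, 0.1], "rcp85": [0.7, 0.9]},
}

def reduce_group(item):
    """Apply one reduction to one group; because the groups are isomorphic,
    the identical operator can run on all of them in parallel."""
    name, group = item
    return name, {exp: mean(vals) for exp, vals in group.items()}

# One worker per top-level group: the same operator, many storage units.
with ThreadPoolExecutor() as pool:
    means = dict(pool.map(reduce_group, dataset.items()))
```

Because every group has the same internal shape, no per-group special-casing is needed, which is exactly what makes the group-wise parallelism automatic.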
Hierarchical Management Information Systems: A Decentralized Approach for University Administration
ERIC Educational Resources Information Center
Wager, J. James
1977-01-01
A Hierarchical Management Information System (HMIS) provides decision-making as well as operational information to all groups of the institution in a timely and predictable manner. Its operational aspects and benefits are discussed. (Author/LBH)
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
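The claim that maximum pooling builds in translation invariance can be illustrated in a few lines: pooling the maximum response over non-overlapping windows leaves the output unchanged under shifts smaller than the window. A minimal 1-D sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def max_pool(responses, size=2):
    """Non-overlapping max pooling over a 1-D response map."""
    n = len(responses) // size * size          # drop any ragged tail
    return responses[:n].reshape(-1, size).max(axis=1)
```

Shifting the input by one position within a pooling window leaves the pooled feature unchanged, which is the translation invariance the model relies on.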
Uliano-Silva, Marcela; Dondero, Francesco; Dan Otto, Thomas; Costa, Igor; Lima, Nicholas Costa Barroso; Americo, Juliana Alves; Mazzoni, Camila Junqueira; Prosdocimi, Francisco; Rebelo, Mauro de Freitas
2018-01-01
Background: For more than 25 years, the golden mussel, Limnoperna fortunei, has aggressively invaded South American freshwaters, having travelled more than 5000 km upstream across 5 countries. Along the way, the golden mussel has outcompeted native species and economically harmed aquaculture, hydroelectric power plants, and ship transit. We have sequenced the complete genome of the golden mussel to understand the molecular basis of its invasiveness and search for ways to control it. Findings: We assembled the 1.6-Gb genome into 20 548 scaffolds with an N50 length of 312 Kb using a hybrid and hierarchical assembly strategy from short and long DNA reads and transcriptomes. A total of 60 717 coding genes were inferred from a customized transcriptome-trained AUGUSTUS run. We also compared predicted protein sets with those of complete molluscan genomes, revealing an exacerbation of protein-binding domains in L. fortunei. Conclusions: We built one of the best bivalve genome assemblies available using a cost-effective approach based on Illumina paired-end, mate-paired, and PacBio long reads. We expect that the continuous and careful annotation of L. fortunei's genome will contribute to the investigation of bivalve genetics, evolution, and invasiveness, as well as to the development of biotechnological tools for aquatic pest control. PMID:29267857
Multilevel modelling: Beyond the basic applications.
Wright, Daniel B; London, Kamala
2009-05-01
Over the last 30 years statistical algorithms have been developed to analyse datasets that have a hierarchical/multilevel structure. Particularly within developmental and educational psychology these techniques have become common where the sample has an obvious hierarchical structure, like pupils nested within a classroom. We describe two areas beyond the basic applications of multilevel modelling that are important to psychology: modelling the covariance structure in longitudinal designs and using generalized linear multilevel modelling as an alternative to methods from signal detection theory (SDT). Detailed code for all analyses is described using packages for the freeware R.
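For the SDT connection: classical sensitivity d' is computable directly from hit and false-alarm counts, and corresponds to a slope contrast in a probit GLM, which is what makes the multilevel-GLM reformulation possible. A minimal sketch with illustrative counts (not the authors' R code):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """SDT sensitivity d' = z(hit rate) - z(false-alarm rate); the same
    quantity appears as a slope contrast in a probit GLM on trial type."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)
```

Recasting d' as a GLM coefficient is what lets the multilevel machinery pool it across participants instead of computing it separately per subject.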
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to deliver very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence it can be easily implemented in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmitting those zeros that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
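The zerotree test itself is compact: a coefficient whose descendants across all finer pyramid levels are also insignificant can be coded with a single symbol. A minimal sketch, assuming a quad-tree parent-child layout where (i, j) at one level has children (2i+di, 2j+dj) at the next (illustrative, not the paper's implementation):

```python
import numpy as np

def is_zerotree(pyramid, level, i, j, threshold=0.0):
    """True if coefficient (i, j) at `level` and all of its descendants in
    finer levels are insignificant, i.e. the whole tree codes as one symbol."""
    if abs(pyramid[level][i, j]) > threshold:
        return False
    if level + 1 == len(pyramid):
        return True                 # finest level: no descendants left
    return all(
        is_zerotree(pyramid, level + 1, 2 * i + di, 2 * j + dj, threshold)
        for di in (0, 1)
        for dj in (0, 1)
    )
```

When this predicate holds at a coarse level, the encoder skips the entire branch, which is where the extra compression on top of HSP comes from.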
A Hierarchical Clustering Methodology for the Estimation of Toxicity
A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...
Prediction of Solvent Physical Properties using the Hierarchical Clustering Method
Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to estimate solvent physical properties including sur...
ERIC Educational Resources Information Center
Marcovitch, Stuart; Zelazo, Philip David
2006-01-01
Age-appropriate modifications of the A-not-B task were used to examine 2-year-olds' search behavior. Several theories predict that A-not-B errors will increase as a function of number of A trials. However, the hierarchical competing systems model (Marcovitch & Zelazo, 1999) predicts that although the ratio of perseverative to nonperseverative…
Abstract Linguistic Structure Correlates with Temporal Activity during Naturalistic Comprehension
Brennan, Jonathan R.; Stabler, Edward P.; Van Wagenen, Sarah E.; Luh, Wen-Ming; Hale, John T.
2016-01-01
Neurolinguistic accounts of sentence comprehension identify a network of relevant brain regions, but do not detail the information flowing through them. We investigate syntactic information. Does brain activity implicate a computation over hierarchical grammars or does it simply reflect linear order, as in a Markov chain? To address this question, we quantify the cognitive states implied by alternative parsing models. We compare processing-complexity predictions from these states against fMRI timecourses from regions that have been implicated in sentence comprehension. We find that hierarchical grammars independently predict timecourses from left anterior and posterior temporal lobe. Markov models are predictive in these regions and across a broader network that includes the inferior frontal gyrus. These results suggest that while linear effects are widespread across the language network, certain areas in the left temporal lobe deal with abstract, hierarchical syntactic representations. PMID:27208858
Neural Coding Mechanisms in Gustation.
1980-09-15
world is composed of four primary tastes (sweet, sour, salty, and bitter), and that each of these is carried by a separate and private neural line, thus...ted sweet-sour-salty-bitter types. The mathematical method of analysis was hierarchical cluster analysis based on the responses of many neurons (20 to... Keywords: Taste; Neural coding; Neural organization; Stimulus organization; Olfaction.
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: (1) global analysis (no variation), (2) separate analysis (independent variation), and (3) hierarchical analysis (variation arising from a shared distribution defined by hyperparameters). We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
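The three formulations can be contrasted on a toy shared-mean problem: global analysis pools everything into one estimate, separate analysis fits each dataset independently, and hierarchical analysis shrinks the separate estimates toward the shared mean. The closed-form shrinkage below assumes known variances, a simplification of the full Bayesian treatment; all values are illustrative.

```python
def three_formulations(datasets, tau2=1.0, sigma2=1.0):
    """Contrast the three formulations for one parameter across datasets:
    global (one pooled value), separate (per-dataset means), hierarchical
    (separate means shrunk toward the shared mean by a reliability weight)."""
    seps = [sum(d) / len(d) for d in datasets]                            # 2) separate
    glob = sum(sum(d) for d in datasets) / sum(len(d) for d in datasets)  # 1) global
    hier = []
    for d, m in zip(datasets, seps):                                      # 3) hierarchical
        w = tau2 / (tau2 + sigma2 / len(d))   # between- vs within-dataset variance
        hier.append(w * m + (1.0 - w) * glob)
    return glob, seps, hier
```

The hierarchical estimates always lie between the separate and global ones, which is the "combining information across datasets" behavior the abstract credits for its accuracy gains.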
Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.
2017-12-01
Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients.
Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.
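The partial pooling at the heart of such hierarchical models can be pictured with a minimal sketch. The function, group values, and variance parameters below are illustrative assumptions, not the SPARROW implementation: each group's (e.g. region's) mean is shrunk toward the grand mean in proportion to how precisely it is estimated.

```python
import numpy as np

def partial_pool(group_means, group_sizes, sigma_within, tau_between):
    """Shrink each group mean toward the grand mean (hierarchical-normal result):
    weights are the group-estimate precision vs. the between-group precision."""
    group_means = np.asarray(group_means, dtype=float)
    group_sizes = np.asarray(group_sizes, dtype=float)
    grand_mean = np.average(group_means, weights=group_sizes)
    prec_group = group_sizes / sigma_within**2   # precision of each sample mean
    prec_prior = 1.0 / tau_between**2            # between-group precision
    shrunk = (prec_group * group_means + prec_prior * grand_mean) / (prec_group + prec_prior)
    return grand_mean, shrunk

# A well-sampled group barely moves; a sparsely sampled one is pulled
# strongly toward the grand mean (hypothetical numbers).
gm, shrunk = partial_pool([10.0, 20.0], [100, 2], sigma_within=5.0, tau_between=2.0)
```

This is the mechanism by which sparsely gauged drainages borrow strength from the full set of calibration sites.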
NASA Astrophysics Data System (ADS)
Bhardwaj, D. S. S.; Ghosh, Nilanjan; Rao, Nageswara; Pai, Ravi R.
2009-10-01
Runtime of the Mask Data Preparation (MDP) tool is largely dependent on the hierarchy of the input layout data. In this paper, we present a technique where a hierarchical or flat input design layout or almost flat mask data can be converted into a favorable hierarchical data which can be directly used by MDP tools for fracturing. A favorable hierarchy is a hierarchy of cells where polygons within cells do not overlap with each other even if bounding boxes of cells might overlap with each other. This is an important characteristic which can be intelligently made use of by intra-polygonal operations like fracturing. Otherwise, a mask data preparation (MDP) tool has to take the responsibility for resolving overlaps among polygons, which slows down the processing and increases the data size. MDP on a favorable hierarchy will thus speed up the fracturing or re-fracturing steps and also minimize the output fractured data size, as shown through the experimental results in the paper. In the proposed technique, the favorable hierarchy is generated using a modified version of the Lempel-Ziv (LZ) coding algorithm, which was originally devised for compressing character strings. A hierarchical fracturing algorithm can be employed to work on the favorable hierarchy generated, which will utilize the property of a favorable hierarchy that polygons do not overlap with each other. Apart from the obvious runtime benefits, such a favorable hierarchy allows considerable reduction in fractured data size as most mask data formats allow representation of a hierarchy containing two levels.
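The Lempel-Ziv step the abstract refers to can be pictured with classic LZ78, which incrementally builds a dictionary of phrases so that repeated substructure is captured as ever-longer dictionary entries. This is a standard textbook LZ78 on a character string, not the authors' modified layout-oriented variant:

```python
def lz78_phrases(s):
    """Classic LZ78: split s into phrases, each the longest previously seen
    phrase plus one new symbol. Repetition in the input shows up as long
    phrases, the property a hierarchy-extraction pass can exploit."""
    dictionary = {"": 0}          # phrase -> index
    phrases = []                  # (prefix_index, next_char) pairs
    w = ""
    for ch in s:
        if w + ch in dictionary:
            w += ch               # extend the current match
        else:
            phrases.append((dictionary[w], ch))
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:                         # flush a trailing partial match
        phrases.append((dictionary[w[:-1]], w[-1]))
    return phrases

# A highly repetitive "layout" compresses to few phrases.
codes = lz78_phrases("ABABABAB")
```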
Hierarchical Parallelization of Gene Differential Association Analysis
Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing
2011-09-21
Background: Microarray gene differential expression analysis is a widely used technique that deals with high-dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain parallelism (with granularity defined by the frequency of communication) in order to effectively leverage the non-uniform nature of parallel processing available in cutting-edge systems. Results: Our results show that this hierarchical strategy matches data-sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions: The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916
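The "sweet spot" rule in the conclusions can be sketched as a simple sizing heuristic. The function and numbers below are illustrative assumptions, not the authors' code: pick the largest thread count per MPI process whose combined working set still fits in the shared cache.

```python
def threads_per_process(cache_bytes, working_set_per_thread, max_threads):
    """Largest thread count whose aggregate working set fits in cache,
    echoing the paper's sweet-spot observation (illustrative heuristic)."""
    return int(max(1, min(max_threads, cache_bytes // working_set_per_thread)))

# e.g. an 8 MiB shared cache, ~1.5 MiB working set per thread,
# 8 hardware threads available (hypothetical machine).
n = threads_per_process(8 * 2**20, int(1.5 * 2**20), 8)
```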
Trew, Jennifer L; Alden, Lynn E
2012-01-01
Models of self-regulation suggest that social goals may contribute to interpersonal and affective difficulties, yet little research has addressed this issue in the context of social anxiety. The present studies evaluated a hierarchical model of approach and avoidance in the context of social interaction anxiety, with affect as a mediating factor in the relationship between motivational tendencies and social goals. This model was refined in one undergraduate sample (N = 186) and cross-validated in a second sample (N = 195). The findings support hierarchical relationships between motivational tendencies, social interaction anxiety, affect, and social goals, with higher positive affect predicting fewer avoidance goals in both samples. Implications for the treatment of social interaction anxiety are discussed.
On the application of multilevel modeling in environmental and ecological studies
Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.
2010-01-01
This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
A reliable method of manufacturing metallic hierarchical superhydrophobic surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogreb, Roman; Whyman, Gene; Barayev, Reuven
2009-06-01
A method of manufacturing hierarchical metallic surfaces demonstrating superhydrophobic properties is presented. The surfaces showed apparent contact angles as high as 153° and sliding angles of 10° for 50-100 μl droplets. The Cassie-like model [A. B. D. Cassie and S. Baxter, Trans. Faraday Soc. 40, 546 (1944)], considering the hierarchical topography of the relief, predicts apparent contact angles in satisfactory agreement with the measured values.
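The Cassie-Baxter relation cited above, cos θ* = f(cos θ + 1) − 1, can be evaluated directly. The intrinsic angle and solid fractions below are assumed values for illustration, not measurements from the paper; a two-scale hierarchical relief is treated as nested Cassie states, so the effective wetted solid fraction is the product of the fractions at each scale.

```python
import math

def cassie_apparent_angle(theta_deg, solid_fraction):
    """Cassie-Baxter apparent angle for a composite (solid/air) interface:
    cos(theta*) = f * (cos(theta) + 1) - 1, f = wetted solid fraction."""
    c = solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

theta_flat = 110.0   # assumed intrinsic angle on the smooth coating
# Hypothetical fractions at the two scales of a hierarchical relief:
theta_hier = cassie_apparent_angle(theta_flat, 0.10 * 0.5)
```

With these assumed fractions the hierarchical surface lands well inside the superhydrophobic regime (θ* > 150°), consistent with the measured 153°.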
Alpha-Helical Protein Domains Unify Strength and Robustness through Hierarchical Nanostructures
2009-01-23
backbone atom (hydrogen donor) of peptide i + 4 in the polypeptide chain. Consequently, at each convolution, 3.5 H-bonds are found in a parallel...signaling and deformation behavior of cytoskeletal protein networks in cells (e.g. intermediate filaments vimentin and lamin as well as actin [7, 8...convolution. The Hierarchical Bell model enables one to predict the strength of different hierarchical bond arrangements as a function of the
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
Wu, Huaping; Yang, Zhe; Cao, Binbin; Zhang, Zheng; Zhu, Kai; Wu, Bingbing; Jiang, Shaofei; Chai, Guozhong
2017-01-10
The wetting transition on submersed superhydrophobic surfaces with hierarchical structures and the influence of trapped air on superhydrophobic stability are predicted based on thermodynamic and mechanical analyses. The dewetting transition on the hierarchically structured surfaces is investigated, and two necessary thermodynamic conditions and a mechanical balance condition for the dewetting transition are proposed. The corresponding thermodynamic phase diagram of the reversible transition and the critical reversed pressure explain well the experimental results reported previously. Our theory provides a useful guideline for precisely controlling the breakdown and recovery of superhydrophobicity by designing superhydrophobic surfaces with hierarchical structures under water.
Genomics dataset on unclassified published organism (patent US 7547531).
Khan Shawan, Mohammad Mahfuz Ali; Hasan, Md Ashraful; Hossain, Md Mozammel; Hasan, Md Mahmudul; Parvin, Afroza; Akter, Salina; Uddin, Kazi Rasel; Banik, Subrata; Morshed, Mahbubul; Rahman, Md Nazibur; Rahman, S M Badier
2016-12-01
Nucleotide (DNA) sequence analysis provides important clues regarding the characteristics and taxonomic position of an organism, and is therefore crucial for learning about the hierarchical classification of that organism. This dataset (patent US 7547531) was chosen to make accessible the complex raw data buried in undisclosed DNA sequences, which helps open doors for new collaborations. In this dataset, a total of 48 unidentified DNA sequences from patent US 7547531 were selected and their complete sequences were retrieved from the NCBI BioSample database. A quick response (QR) code for those DNA sequences was constructed with the DNA BarID tool; the QR code is useful for the identification and comparison of isolates with other organisms. The AT/GC content of the DNA sequences was determined using the ENDMEMO GC Content Calculator, which indicates their stability at different temperatures. The highest GC content was observed in GP445188 (62.5%), followed by GP445198 (61.8%) and GP445189 (59.44%), while the lowest was in GP445178 (24.39%). In addition, the New England BioLabs (NEB) database was used to identify the cleavage code indicating the 5′, 3′ and blunt ends, and the enzyme code indicating the methylation sites of the DNA sequences was also shown. These data will be helpful for the construction of the organisms' hierarchical classification, determination of their phylogenetic and taxonomic position, and revelation of their molecular characteristics.
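The GC-content figures quoted above follow from a one-line calculation. This is the standard definition (percent of bases that are G or C), not the ENDMEMO tool itself; the example sequence is made up:

```python
def gc_content(seq):
    """Percent G+C in a DNA sequence; higher GC generally implies
    greater thermal stability of the duplex."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return 100.0 * gc / len(seq)

pct = gc_content("ATGCGC")   # hypothetical 6-mer: 4 of 6 bases are G/C
```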
How hierarchical is language use?
Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.
2012-01-01
It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub-problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile.
The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
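The idea of a model that is itself an executable program mirroring the code's hierarchy can be sketched in a few lines. This is not Palm's annotation language or generator output, just an illustration of the structure it describes: each node's predicted cost is either a measured constant or the composition of its children's predictions.

```python
class ModelNode:
    """One annotated block of code; its predicted cost is its own measured
    cost plus the predictions of its child blocks (hierarchical composition)."""
    def __init__(self, name, cost=0.0, children=None):
        self.name = name
        self.cost = cost
        self.children = children or []

    def predict(self):
        return self.cost + sum(child.predict() for child in self.children)

# Hypothetical application structure with made-up per-block costs (seconds).
app = ModelNode("main", children=[
    ModelNode("setup", cost=0.5),
    ModelNode("solver", children=[ModelNode("halo_exchange", cost=1.2),
                                  ModelNode("stencil", cost=3.3)]),
])
total = app.predict()
```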
Success through Inattention in School Administration and Elsewhere.
ERIC Educational Resources Information Center
Lindblom, Charles E.
1994-01-01
Examines two ways of achieving social coordination: unilateral/hierarchical controls and multilateral controls. Discusses advantages of using mutual adjustment as an alternative to central coordination. Mutual adjustment occurs variously through language creation, moral codes, biological self-selection, market systems, and politics. Although…
The problem of gestalt in neurobiology.
Sokolov, E N
1997-01-01
The question of gestalts is discussed within the framework of its neuronal mechanisms. Two basic hypotheses are considered: 1) that of gestalts as a result of the hierarchical organization of neurons (gnostic units), and 2) that of gestalts as a result of the synchronization of neurons of a given level. Analysis of published data led to the conclusion that gestalts result from vector coding in the hierarchical organization of neurons. High-frequency oscillations in the gamma range (40-200 Hz) are of endogenous origin, and their function is to reinforce the synaptic inputs to those neurons which are involved in the synthesis of a gestalt.
Hierarchical content-based image retrieval by dynamic indexing and guided search
NASA Astrophysics Data System (ADS)
You, Jane; Cheung, King H.; Liu, James; Guo, Linong
2003-12-01
This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.
Ecological Factors Predict Transition Readiness/Self-Management in Youth With Chronic Conditions.
Javalkar, Karina; Johnson, Meredith; Kshirsagar, Abhijit V; Ocegueda, Sofia; Detwiler, Randal K; Ferris, Maria
2016-01-01
Health care transition readiness or self-management among adolescents and young adults (AYA) with chronic conditions may be influenced by factors related to their surrounding environment. Study participants were AYA diagnosed with a chronic condition and evaluated at pediatric- and adult-focused subspecialty clinics at the University of North Carolina Hospital Systems. All participants were administered a provider-administered self-management/transition-readiness tool, the UNC TRxANSITION Scale. Geographic area and associated characteristics (ecological factors) were identified for each participant's ZIP code using the published U.S. Census data. The Level 1 model of the hierarchical linear regression used individual-level predictors of transition readiness/self-management. The Level 2 model incorporated the ecological factors. We enrolled 511 AYA with different chronic conditions aged 12-31 years with the following characteristics: mean age of 20 ± 4 years, 45% white, 42% black, and 54% female. Participants represented 214 ZIP codes in or around North Carolina, USA. The Level 1 model showed that age, gender, and race were significant predictors of transition readiness/self-management. On adding the ecological factors in the Level 2 model, race was no longer significant. Participants from a geographic area with a greater percentage of females (β = .114, p = .005) and a higher median income (β = .126, p = .002) had greater overall transition readiness. Ecological factors also predicted subdomains of transition readiness/self-management. In this cohort of adolescents and young adults with different chronic conditions, ecological disparities such as sex composition, median income, and language predict self-management/transition readiness. It is important to take ecological risk factors into consideration when preparing patients for health self-management or transition. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Hierarchical differences in population coding within auditory cortex.
Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L
2017-08-01
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures.
An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding. Copyright © 2017 the American Physiological Society.
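The quantity r_noise used above is the standard trial-to-trial (noise) correlation: the Pearson correlation of two neurons' responses after removing each neuron's mean response to each stimulus. The sketch below uses simulated spike counts with an injected shared-noise source (all numbers are hypothetical), not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_noise(responses_a, responses_b, stimulus_ids):
    """Noise correlation: Pearson correlation of trial-to-trial fluctuations
    after subtracting each neuron's per-stimulus mean response."""
    a = np.asarray(responses_a, float).copy()
    b = np.asarray(responses_b, float).copy()
    for s in np.unique(stimulus_ids):
        idx = stimulus_ids == s
        a[idx] -= a[idx].mean()
        b[idx] -= b[idx].mean()
    return np.corrcoef(a, b)[0, 1]

# Two simulated neurons tuned to the stimulus, with a common noise source:
# shared variance 1 vs. private variance 1 gives r_noise near 0.5.
stim = np.repeat([0, 1], 200)
shared = rng.normal(size=400)
neuron_a = 5.0 * stim + shared + rng.normal(size=400)
neuron_b = 3.0 * stim + shared + rng.normal(size=400)
r = r_noise(neuron_a, neuron_b, stim)
```

Subtracting the per-stimulus means is what distinguishes noise correlation from signal correlation: without it, the common stimulus drive would dominate.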
Improving aircraft conceptual design - A PHIGS interactive graphics interface for ACSYNT
NASA Technical Reports Server (NTRS)
Wampler, S. G.; Myklebust, A.; Jayaram, S.; Gelhausen, P.
1988-01-01
A CAD interface has been created for the 'ACSYNT' aircraft conceptual design code that permits the execution and control of the design process via interactive graphics menus. This CAD interface was coded entirely with the new three-dimensional graphics standard, the Programmer's Hierarchical Interactive Graphics System. The CAD/ACSYNT system is designed for use by state-of-the-art high-speed imaging work stations. Attention is given to the approaches employed in modeling, data storage, and rendering.
Hierarchical image coding with diamond-shaped sub-bands
NASA Technical Reports Server (NTRS)
Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken
1992-01-01
We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.
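The multiple-resolution behavior described above, reconstructing a usable image even when some sub-bands are lost, is the defining property of sub-band coding. As a toy stand-in for the paper's 2-D diamond-shaped IIR pyramid (which this is not), a 1-D two-band Haar bank shows the split/reconstruct cycle and what decoding without the detail band yields:

```python
import numpy as np

def analyze(x):
    """Split a signal into a coarse (low-pass) and detail (high-pass) band."""
    x = np.asarray(x, float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)    # coarse band
    high = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return low, high

def synthesize(low, high):
    """Invert the analysis step; with both bands this is perfect reconstruction."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 6.0, 6.0, 1.0, 3.0, 5.0, 7.0])
low, high = analyze(x)
exact = synthesize(low, high)                    # both bands received
coarse = synthesize(low, np.zeros_like(high))    # detail band lost in transit
```

Dropping the detail band leaves a blurred (pairwise-averaged) signal rather than garbage, which is why a receiver on a reduced-capacity channel can still reconstruct at lower resolution.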
Morton, Lindsay M.; Linet, Martha S.; Clarke, Christina A.; Kadin, Marshall E.; Vajdic, Claire M.; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C.-H.; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R.; Weisenburger, Dennis D.
2010-01-01
After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and “in situ” lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications. PMID:20699439
Sato, Naoyuki; Yamaguchi, Yoko
2009-06-01
The human cognitive map is known to be hierarchically organized consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that the hierarchical cognitive maps have computational advantages on a spatial-area selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.
Turner, Jennifer J; Morton, Lindsay M; Linet, Martha S; Clarke, Christina A; Kadin, Marshall E; Vajdic, Claire M; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C-H; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R; Weisenburger, Dennis D
2010-11-18
After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and "in situ" lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications.
Xiong, Dapeng; Zeng, Jianyang; Gong, Haipeng
2017-09-01
Residue-residue contacts are of great value for protein structure prediction, since contact information, especially from long-range residue pairs, can significantly reduce the complexity of conformational sampling for protein structure prediction in practice. Despite progress in the past decade on protein targets with abundant homologous sequences, accurate contact prediction for proteins with limited sequence information is still far from satisfactory, and methodologies for these hard targets need further improvement. We presented a computational program, DeepConPred, which includes a pipeline of two novel deep-learning-based methods (DeepCCon and DeepRCon) as well as a contact refinement step, to improve the prediction of long-range residue contacts from primary sequences. When compared with previous prediction approaches, our framework employed an effective scheme to identify optimal and important features for contact prediction, and was only trained with coevolutionary information derived from a limited number of homologous sequences to ensure robustness and usefulness for hard targets. Independent tests showed that 59.33%/49.97%, 64.39%/54.01% and 70.00%/59.81% of the top L/5, top L/10 and top 5 predictions were correct for CASP10/CASP11 proteins, respectively. In general, our algorithm ranked as one of the best methods for CASP targets. All source data and codes are available at http://166.111.152.91/Downloads.html. Contact: hgong@tsinghua.edu.cn or zengjy321@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
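The "top L/5" figures quoted above follow the standard CASP-style evaluation: rank predicted contacts by confidence, keep the top L/5 (L = sequence length), and compute the fraction that are true contacts. The sketch below is that metric with made-up scores and contacts, not the authors' evaluation code:

```python
def top_k_precision(scores, true_contacts, L, frac=5):
    """Precision of the top-L/frac predicted contacts.
    scores: {(i, j): confidence}; true_contacts: set of (i, j) pairs."""
    k = max(1, L // frac)
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    hits = sum(1 for pair in ranked if pair in true_contacts)
    return hits / k

# Hypothetical predictions for a length-10 "protein": top L/5 = top 2 pairs.
scores = {(1, 30): 0.9, (2, 40): 0.8, (3, 50): 0.1, (4, 60): 0.7}
p = top_k_precision(scores, {(1, 30), (3, 50)}, L=10, frac=5)
```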
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
NASA Astrophysics Data System (ADS)
Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.
2006-12-01
This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
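Hierarchical QAM protects the high-priority layer by encoding it in the coarse constellation geometry (which quadrant) and the low-priority layer in the fine detail (which point within the quadrant). A minimal sketch of a 16-HQAM symbol mapper; the spacing parameters `d1` and `d2` are illustrative assumptions, not values from the paper:

```python
def hqam16_map(hp_bits, lp_bits, d1=2.0, d2=0.5):
    """Map two high-priority and two low-priority bits to one
    hierarchical 16-QAM symbol (a complex constellation point).

    d1: half-distance between quadrant centres (protects the HP layer),
    d2: offset within a quadrant (carries the LP layer).
    Choosing d1 > d2 gives the HP bits a larger effective decision
    distance, hence better error resilience, at the LP bits' expense.
    """
    # HP bits select the quadrant centre
    i_sign = 1 if hp_bits[0] == 0 else -1
    q_sign = 1 if hp_bits[1] == 0 else -1
    # LP bits offset the point inside the chosen quadrant
    i_off = d2 if lp_bits[0] == 0 else -d2
    q_off = d2 if lp_bits[1] == 0 else -d2
    return complex(i_sign * d1 + i_off, q_sign * d1 + q_off)

print(hqam16_map((0, 1), (1, 0)))  # -> (1.5-1.5j)
```

Note the constraint discussed in the abstract: this mapping fixes the HP:LP data ratio at 2:2 bits per symbol, whereas FEC-based prioritisation can allocate arbitrary ratios at the cost of overhead.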
NASA Astrophysics Data System (ADS)
Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.
2005-09-01
A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economical way to upgrade existing PONs.
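The idea of achieving DC balance by passing the dc residual between processing windows can be illustrated with a toy polarity-selection coder. This is a deliberate simplification of the paper's spectral-shaping codes; the window-inversion scheme below is an assumption for illustration only:

```python
def dc_balanced_windows(bits, window=8):
    """Toy line coder: split the bit stream into windows, map bits to
    +/-1 symbols, and invert a whole window's polarity whenever that
    reduces the accumulated DC residual. The residual is carried over
    ('passed') from one processing window to the next.
    """
    out, residual = [], 0
    for start in range(0, len(bits), window):
        w = [1 if b else -1 for b in bits[start:start + window]]
        s = sum(w)
        if abs(residual + s) > abs(residual - s):
            w = [-x for x in w]   # invert window to counteract the residual
            s = -s
        residual += s             # pass the dc residual to the next window
        out.extend(w)
    return out, residual

symbols, res = dc_balanced_windows([1, 1, 1, 1, 0, 1, 1, 1] * 4)
print(res)  # -> 0: alternating inversions cancel the heavy 1s bias
```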
Daily Stressors in School-Age Children: A Multilevel Approach
ERIC Educational Resources Information Center
Escobar, Milagros; Alarcón, Rafael; Blanca, María J.; Fernández-Baena, F. Javier; Rosel, Jesús F.; Trianes, María Victoria
2013-01-01
This study uses hierarchical or multilevel modeling to identify variables that contribute to daily stressors in a population of schoolchildren. Four hierarchical levels with several predictive variables were considered: student (age, sex, social adaptation of the student, number of life events and chronic stressors experienced, and educational…
USDA-ARS?s Scientific Manuscript database
Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...
ERIC Educational Resources Information Center
Ferguson, Gail M.
2013-01-01
The current study tests a prediction of Relational Discrepancy Theory (RDT; i.e., emotional distress will not accompany discrepancies in hierarchical relationships) for family obligations discrepancies among adolescent-parent dyads in Jamaica, a moderately collectivistic and hierarchical society. Ninety-five dyads reported psychological adjustment…
De novo assembly of a haplotype-resolved human genome.
Cao, Hongzhi; Wu, Honglong; Luo, Ruibang; Huang, Shujia; Sun, Yuhui; Tong, Xin; Xie, Yinlong; Liu, Binghang; Yang, Hailong; Zheng, Hancheng; Li, Jian; Li, Bo; Wang, Yu; Yang, Fang; Sun, Peng; Liu, Siyang; Gao, Peng; Huang, Haodong; Sun, Jing; Chen, Dan; He, Guangzhu; Huang, Weihua; Huang, Zheng; Li, Yue; Tellier, Laurent C A M; Liu, Xiao; Feng, Qiang; Xu, Xun; Zhang, Xiuqing; Bolund, Lars; Krogh, Anders; Kristiansen, Karsten; Drmanac, Radoje; Drmanac, Snezana; Nielsen, Rasmus; Li, Songgang; Wang, Jian; Yang, Huanming; Li, Yingrui; Wong, Gane Ka-Shu; Wang, Jun
2015-06-01
The human genome is diploid, and knowledge of the variants on each chromosome is important for the interpretation of genomic information. Here we report the assembly of a haplotype-resolved diploid genome without using a reference genome. Our pipeline relies on fosmid pooling together with whole-genome shotgun strategies, based solely on next-generation sequencing and hierarchical assembly methods. We applied our sequencing method to the genome of an Asian individual and generated a 5.15-Gb assembled genome with a haplotype N50 of 484 kb. Our analysis identified previously undetected indels and 7.49 Mb of novel coding sequences that could not be aligned to the human reference genome, which include at least six predicted genes. This haplotype-resolved genome represents the most complete de novo human genome assembly to date. Application of our approach to identify individual haplotype differences should aid in translating genotypes to phenotypes for the development of personalized medicine.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double precision floating point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
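For independent basic events, the gate probabilities such a tool evaluates reduce to simple closed forms: an AND gate is the product of input probabilities, an OR gate is one minus the product of the complements, and an M-of-N gate sums over the qualifying input combinations. A sketch (illustrative, not the FORTRAN solver described above):

```python
from itertools import combinations

def p_and(ps):
    """AND gate: all independent inputs fail."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(ps):
    """OR gate: at least one independent input fails."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

def p_m_of_n(ps, m):
    """M OF N gate: at least m of the n independent inputs fail,
    by enumerating every qualifying subset of failed inputs."""
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for failed in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in failed else 1.0 - ps[i]
            total += term
    return total

# top event = OR(AND(a, b), c) with independent basic events
a, b, c = 0.1, 0.2, 0.05
print(round(p_or([p_and([a, b]), c]), 6))  # 1 - (1-0.02)(1-0.05) = 0.069
```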
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to a user-specified number of digits (within the limits of double precision floating point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
ACCURATE LOW-MASS STELLAR MODELS OF KOI-126
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feiden, Gregory A.; Chaboyer, Brian; Dotter, Aaron, E-mail: gregory.a.feiden@dartmouth.edu
2011-10-10
The recent discovery of an eclipsing hierarchical triple system with two low-mass stars in a close orbit (KOI-126) by Carter et al. appeared to reinforce the evidence that theoretical stellar evolution models are not able to reproduce the observational mass-radius relation for low-mass stars. We present a set of stellar models for the three stars in the KOI-126 system that show excellent agreement with the observed radii. This agreement appears to be due to the equation of state implemented by our code. A significant dispersion in the observed mass-radius relation for fully convective stars is demonstrated, indicative of the influence of physics currently not incorporated in standard stellar evolution models. We also predict apsidal motion constants for the two M dwarf companions. These values should be observationally determined to within 1% by the end of the Kepler mission.
Hale, Courtney M; Tager-Flusberg, Helen
2005-05-01
This longitudinal study investigated the developmental trajectory of discourse skills and theory of mind in 57 children with autism. Children were tested at two time points spaced 1 year apart. Each year they provided a natural language sample while interacting with one parent, and were given standardized vocabulary measures and a developmentally sequenced battery of theory of mind tasks. The language samples were coded for conversational skills, specifically the child's use of topic-related contingent utterances. Children with autism made significant gains over 1 year in the ability to maintain a topic of discourse. Hierarchical regression analyses demonstrated that theory of mind skills contributed unique variance to individual differences in contingent discourse ability and vice versa, when measured concurrently; however, they did not predict longitudinal changes. The findings offer some empirical support for the hypothesis that theory of mind is linked to communicative competence in children with autism.
Molinaro, Nicola; Giannelli, Francesco; Caffarra, Sendy; Martin, Clara
2017-07-01
Language comprehension is largely supported by predictive mechanisms that account for the ease and speed with which communication unfolds. Both native and proficient non-native speakers can efficiently handle contextual cues to generate reliable linguistic expectations. However, the link between the variability of the linguistic background of the speaker and the hierarchical format of the representations predicted is still not clear. Here we investigate whether native language exposure to typologically highly diverse languages (Spanish and Basque) affects the way early balanced bilingual speakers carry out language predictions. During Spanish sentence comprehension, participants developed predictions of words the form of which (noun ending) could be either diagnostic of grammatical gender values (transparent) or totally ambiguous (opaque). We measured electrophysiological prediction effects time-locked both to the target word and to its determiner, with the former being expected or unexpected. Event-related (N200-N400) and oscillatory activity in the low beta-band (15-17Hz) frequency channel showed that both Spanish and Basque natives optimally carry out lexical predictions independently of word transparency. Crucially, in contrast to Spanish natives, Basque natives displayed visual word form predictions for transparent words, consistent with the relevance that noun endings (post-nominal suffixes) have in their native language. We conclude that early language exposure largely shapes prediction mechanisms, so that bilinguals reading in their second language rely on the distributional regularities that are highly relevant in their first language. More importantly, we show that individual linguistic experience hierarchically modulates the format of the predicted representation. Copyright © 2017 Elsevier B.V. All rights reserved.
Testing the hypothesis of hierarchical predictability in ecological restoration and succession.
Abella, Scott R; Schetter, Timothy A; Walters, Timothy L
2018-02-01
To advance predictive ecology, the hypothesis of hierarchical predictability proposes that community measures for which species are interchangeable (e.g., structure and species richness) are more predictable than measures for which species identity matters (e.g., community composition). Predictability is hypothesized to decrease for response measures in order of the following categories: structure, species richness, function, and species composition. We tested this hypothesis using a 14-year, oak savanna-prairie restoration experiment that removed non-native pine plantations at 24 sites in northwestern Ohio, USA. Based on 24 response measures, the data showed minimal support for the hypothesis, because response measures varied in predictability within categories. Half of response measures had over half their variability modeled using fixed (restoration treatment and year) and random plot effects, and these "predictable" measures occurred in all four categories. Pine basal area, environment (e.g., soil texture), and antecedent vegetation accounted for over half the variation in change within the first three post-restoration years for 77% of response measures. Change between the 3rd and 14th years was less predictable, but most restoration measures increased favorably, with sites achieving them in unique ways. We propose that variation will not conform to the hypothesis of hierarchical predictability in ecosystems with vegetation dynamics driven by stochastic processes such as seed dispersal, or where vegetation structure and species richness are influenced by species composition. The ability to predict a community measure may be driven more by the number of combinations of causal factors affecting a measure than by the number of values it can have.
Predicting helix–helix interactions from residue contacts in membrane proteins
Lo, Allan; Chiu, Yi-Yuan; Rødland, Einar Andreas; Lyu, Ping-Chiang; Sung, Ting-Yi; Hsu, Wen-Lian
2009-01-01
Motivation: Helix–helix interactions play a critical role in the structure assembly, stability and function of membrane proteins. On the molecular level, the interactions are mediated by one or more residue contacts. Although previous studies focused on helix-packing patterns and sequence motifs, few of them developed methods specifically for contact prediction. Results: We present a new hierarchical framework for contact prediction, with an application in membrane proteins. The hierarchical scheme consists of two levels: in the first level, contact residues are predicted from the sequence and their pairing relationships are further predicted in the second level. Statistical analyses on contact propensities are combined with other sequence and structural information for training the support vector machine classifiers. Evaluated on 52 protein chains using leave-one-out cross validation (LOOCV) and an independent test set of 14 protein chains, the two-level approach consistently improves the conventional direct approach in prediction accuracy, with 80% reduction of input for prediction. Furthermore, the predicted contacts are then used to infer interactions between pairs of helices. When at least three predicted contacts are required for an inferred interaction, the accuracy, sensitivity and specificity are 56%, 40% and 89%, respectively. Our results demonstrate that a hierarchical framework can be applied to eliminate false positives (FP) while reducing computational complexity in predicting contacts. Together with the estimated contact propensities, this method can be used to gain insights into helix-packing in membrane proteins. Availability: http://bio-cluster.iis.sinica.edu.tw/TMhit/ Contact: tsung@iis.sinica.edu.tw Supplementary information:Supplementary data are available at Bioinformatics online. PMID:19244388
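The inference rule described above, reporting a helix pair as interacting when at least three predicted contacts link it, can be sketched as follows. The residue-to-helix assignment and toy data are made up for illustration; this is not the TMhit code:

```python
def infer_helix_interactions(contacts, helix_of, min_contacts=3):
    """Infer interacting helix pairs from predicted residue contacts.

    contacts: iterable of (i, j) residue index pairs.
    helix_of: dict mapping residue index -> helix id; residues absent
              from the dict (e.g. loop residues) are ignored.
    An unordered helix pair is reported when at least `min_contacts`
    predicted contacts connect its two helices.
    """
    counts = {}
    for i, j in contacts:
        hi, hj = helix_of.get(i), helix_of.get(j)
        if hi is None or hj is None or hi == hj:
            continue  # skip non-helix residues and intra-helix pairs
        pair = tuple(sorted((hi, hj)))
        counts[pair] = counts.get(pair, 0) + 1
    return {pair for pair, n in counts.items() if n >= min_contacts}

helix_of = {**{r: 'H1' for r in range(10, 20)},
            **{r: 'H2' for r in range(40, 50)}}
contacts = [(12, 44), (13, 45), (16, 48), (11, 30)]  # (11, 30) hits a loop
print(infer_helix_interactions(contacts, helix_of))  # -> {('H1', 'H2')}
```

Requiring several supporting contacts per helix pair is what suppresses false positives from isolated spurious contact predictions, as the abstract's accuracy/sensitivity/specificity trade-off reflects.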
Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A
2010-08-20
The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. 
The CMS-HCC model is preferred over the Charlson and Elixhauser methods if information about the patient's diagnoses prior to the index hospitalization is available and used to code the risk adjusters. However, caution should be exercised in studies evaluating inpatient processes of care and where data on pre-index admission diagnoses are unavailable.
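The c-statistic used to rank the three risk-adjustment methods is the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor, with ties counted as one half. A minimal sketch with toy data (illustrative only; real evaluations use efficient rank-based implementations):

```python
def c_statistic(scores, outcomes):
    """Concordance (c-statistic / AUC) by direct pair counting:
    the fraction of (case, non-case) pairs in which the case
    received the higher predicted risk; ties count 1/2.
    """
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

risk = [0.9, 0.7, 0.4, 0.4, 0.1]   # toy predicted mortality risks
died = [1,   1,   0,   1,   0]
print(round(c_statistic(risk, died), 3))  # 5.5 of 6 pairs concordant -> 0.917
```

A higher c-statistic, together with lower AIC and BIC, is what the study uses to conclude that the CMS-HCC model discriminates mortality better than the Charlson and Elixhauser measures.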
A continuous-time neural model for sequential action.
Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard
2014-11-05
Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Program for User-Friendly Management of Input and Output Data Sets
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard
2003-01-01
A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.
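The abstract describes ANSI C structures whose children are created dynamically and linked into the parent, with the whole hierarchy traversed until no more children are found. The same idea is sketched here in Python for brevity; names such as `ParamGroup` and `spawn_child` are illustrative, not taken from the program:

```python
class ParamGroup:
    """A group of I/O parameters that belong together in a simulation.
    Child groups are spawned in response to particular parameter
    values, giving a hierarchical ordering of input parameters.
    """
    def __init__(self, name, params=None):
        self.name = name
        self.params = dict(params or {})
        self.children = []  # dynamically created child groups

    def spawn_child(self, name, params=None):
        child = ParamGroup(name, params)
        self.children.append(child)
        return child

    def walk(self, depth=0):
        """Yield (depth, group) for this group and all descendants,
        descending until no more children are found."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

root = ParamGroup('simulation', {'solver': 'fem'})
mesh = root.spawn_child('mesh', {'cells': 1024})
mesh.spawn_child('refinement', {'levels': 3})
print([(d, g.name) for d, g in root.walk()])
# -> [(0, 'simulation'), (1, 'mesh'), (2, 'refinement')]
```

The depth value returned by the traversal is what a front end can use to indent each group, producing the guided, hierarchical presentation of setup parameters the abstract describes.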
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management.
- Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians
- Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more
- Deemphasizes computer coding in favor of basic principles
- Explains how to write out properly factored statistical expressions representing Bayesian models
Badre, David
2012-01-01
Growing evidence suggests that the prefrontal cortex (PFC) is organized hierarchically, with more anterior regions having increasingly abstract representations. How does this organization support hierarchical cognitive control and the rapid discovery of abstract action rules? We present computational models at different levels of description. A neural circuit model simulates interacting corticostriatal circuits organized hierarchically. In each circuit, the basal ganglia gate frontal actions, with some striatal units gating the inputs to PFC and others gating the outputs to influence response selection. Learning at all of these levels is accomplished via dopaminergic reward prediction error signals in each corticostriatal circuit. This functionality allows the system to exhibit conditional if–then hypothesis testing and to learn rapidly in environments with hierarchical structure. We also develop a hybrid Bayesian-reinforcement learning mixture of experts (MoE) model, which can estimate the most likely hypothesis state of individual participants based on their observed sequence of choices and rewards. This model yields accurate probabilistic estimates about which hypotheses are attended by manipulating attentional states in the generative neural model and recovering them with the MoE model. This 2-pronged modeling approach leads to multiple quantitative predictions that are tested with functional magnetic resonance imaging in the companion paper. PMID:21693490
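The dopaminergic reward prediction error learning invoked at each corticostriatal level follows the classic delta rule: the error between received and predicted reward nudges the prediction toward the outcome. A minimal sketch of that generic rule (not the paper's full hierarchical neural circuit model; the learning rate is an arbitrary illustration):

```python
def rpe_update(value, reward, lr=0.1):
    """One reward-prediction-error (delta-rule) update:
    delta = reward - predicted value; the value estimate moves a
    fraction lr of the way toward the observed reward.
    """
    delta = reward - value
    return value + lr * delta, delta

v = 0.0
delta = None
for _ in range(100):                 # repeatedly rewarded action
    v, delta = rpe_update(v, reward=1.0)
print(round(v, 3), round(delta, 3))  # -> 1.0 0.0: value converges, RPE vanishes
```

In the circuit model, this same error signal trains the striatal gating units at every level of the hierarchy, which is what lets abstract rule representations in anterior PFC be discovered by trial and error.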
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the HRBP decoding algorithm greatly reduces the computational load while maintaining performance comparable to the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm remains comparable at the threshold value of 15, and in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.
Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim
2003-01-01
With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
Sato, Takaaki; Kobayakawa, Reiko; Kobayakawa, Ko; Emura, Makoto; Itohara, Shigeyoshi; Kizumi, Miwako; Hamana, Hiroshi; Tsuboi, Akio; Hirono, Junzo
2015-09-11
Enantiomeric pairs of mirror-image molecular structures are difficult to resolve by instrumental analyses. The human olfactory system, however, discriminates (-)-wine lactone from its (+)-form rapidly within seconds. To gain insight into receptor coding of enantiomers, we compared behavioural detection and discrimination thresholds of wild-type mice with those of ΔD mice in which all dorsal olfactory receptors are genetically ablated. Surprisingly, wild-type mice displayed an exquisite "supersensitivity" to enantiomeric pairs of wine lactones and carvones. They were capable of supersensitive discrimination of enantiomers, consistent with their high detection sensitivity. In contrast, ΔD mice showed selective major loss of sensitivity to the (+)-enantiomers. The resulting 10(8)-fold differential sensitivity of ΔD mice to (-)- vs. (+)-wine lactone matched that observed in humans. This suggests that humans lack highly sensitive orthologous dorsal receptors for the (+)-enantiomer, similarly to ΔD mice. Moreover, ΔD mice showed >10(10)-fold reductions in enantiomer discrimination sensitivity compared to wild-type mice. ΔD mice detected one or both of the (-)- and (+)-enantiomers over a wide concentration range, but were unable to discriminate them. This "enantiomer odour discrimination paradox" indicates that the most sensitive dorsal receptors play a critical role in hierarchical odour coding for enantiomer identification.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s, while requiring only 3 to 4 million multiplications and additions per second. Technique combines advantages of adaptive/predictive coding and of code-excited linear prediction; the latter yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding thus bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
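The closed-loop predictive principle behind such coders can be sketched in a few lines as a first-order DPCM coder. This is only an illustration of predictive quantization with an assumed predictor coefficient `a` and quantizer step `step`; it is not the authors' vector adaptive/predictive algorithm.

```python
# Minimal closed-loop (DPCM-style) predictive coder sketch. The encoder
# tracks the decoder's reconstruction, so quantization errors do not
# accumulate. Illustrative only; not the paper's vector technique.

def dpcm_encode(samples, a=0.9, step=0.1):
    codes, pred = [], 0.0
    for x in samples:
        e = x - pred                  # prediction error
        q = round(e / step)           # quantized error index (transmitted)
        codes.append(q)
        pred = a * (pred + q * step)  # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, a=0.9, step=0.1):
    out, pred = [], 0.0
    for q in codes:
        x = pred + q * step           # reconstruct from prediction + error
        out.append(x)
        pred = a * x
    return out
```

Because encoder and decoder share the same predictor state, the reconstruction error per sample stays within half a quantizer step.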
ERIC Educational Resources Information Center
Strayhorn, Terrell Lamont
2008-01-01
The present study estimated the influence of academic and social collegiate experiences on Latino students' sense of belonging, controlling for background differences, using hierarchical analysis techniques with a nested design. In addition, results were compared between Latino students and their White counterparts. Findings reveal that grades,…
ERIC Educational Resources Information Center
Kapatsinski, Vsevolod
2009-01-01
This article proposes and tests an experimental method to assess the psychological reality of hierarchical theories of constituent structure in particular domains. I show that a hierarchical theory of constituent structure necessarily makes the prediction that an association between constituents should be easier to learn than an association…
Weber, Lilian; Diaconescu, Andreea; Tomiello, Sara; Schöbi, Dario; Iglesias, Sandra; Mathys, Christoph; Haker, Helene; Stefanics, Gabor; Schmidt, André; Kometer, Michael; Vollenweider, Franz X; Stephan, Klaas Enno
2018-01-01
Background: A central theme of contemporary neuroscience is the notion that the brain embodies a generative model of its sensory inputs to infer on the underlying environmental causes, and that it uses hierarchical prediction errors (PEs) to continuously update this model. In two pharmacological EEG studies, we investigate trial-wise hierarchical PEs during the auditory mismatch negativity (MMN), an electrophysiological response to unexpected events, which depends on NMDA-receptor mediated plasticity and has repeatedly been shown to be reduced in schizophrenia.
Methods: Study 1: reanalysis of 64-channel EEG data from a previously published MMN study (Schmidt et al., 2012) using a placebo-controlled, within-subject design (N=19) to examine the effect of S-ketamine. Study 2: 64-channel EEG data recorded during MMN (between-subjects, double-blind, placebo-controlled design, N=73) to examine the effects of amisulpride and biperiden. Using the Hierarchical Gaussian Filter, a Bayesian learning model, we extracted trial-by-trial PE estimates on two hierarchical levels. These served as regressors in a GLM of trial-wise EEG signals at the sensor level.
Results: We find strong correlations of EEG with both PEs in both samples: lower-level PEs show effects early on (Study 1: 133 ms post-stimulus; Study 2: 177 ms), higher-level PEs later (Study 1: 240 ms; Study 2: 450 ms). The temporal order of these signatures thus mimics the hierarchical relationship of the PEs, as proposed by our computational model, where lower-level beliefs need to be updated before learning can ensue on higher levels. Ketamine significantly reduced the representation of the higher-level PE in Study 1. (Study 2 has not been unblinded.)
Discussion: These studies present first evidence for hierarchical PEs during MMN and demonstrate that single-trial analyses guided by a computational model can distinguish different types (levels) of PEs, which are differentially linked to neuromodulators of demonstrated relevance for schizophrenia. Our analysis approach thus provides better mechanistic interpretability of pharmacological MMN studies, which will hopefully support the development of computational assays for diagnosis and treatment predictions in schizophrenia.
Word Order and Voice Influence the Timing of Verb Planning in German Sentence Production.
Sauppe, Sebastian
2017-01-01
Theories of incremental sentence production make different assumptions about when speakers encode information about described events and when verbs are selected, accordingly. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Based on growth curve analyses of fixations to agent and patient characters in the described pictures, and the influence of character humanness and the lack of an influence of the visual salience of characters on speakers' choice of active or passive voice, the current results suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.
Computational Infrastructure for Engine Structural Performance Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1997-01-01
Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade-impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefits estimation of new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique infrastructure, ready to credibly evaluate new and future engine structural concepts throughout the development cycle: from initial concept, to design and fabrication, to service performance and maintenance and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.
Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim
2005-01-01
With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
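A hierarchical XML classification of this kind can be sketched and queried with the Python standard library. The element names, attributes, and codes below are invented for illustration; they are not the authors' CEN/TC 251-based schema.

```python
# Sketch of a hierarchical, ICD-10-style classification in XML, with a
# helper that recovers the code chain from the root to a given code.
# Element/attribute names here are illustrative assumptions.
import xml.etree.ElementTree as ET

ICD_XML = """
<chapter code="IX" title="Diseases of the circulatory system">
  <block code="I20-I25" title="Ischaemic heart diseases">
    <category code="I21" title="Acute myocardial infarction">
      <subcategory code="I21.0" title="Acute transmural infarction of anterior wall"/>
    </category>
  </block>
</chapter>
"""

def ancestors(xml_text, code):
    """Return the chain of codes from the root down to `code`, or None."""
    def walk(node, trail):
        trail = trail + [node.get("code")]
        if node.get("code") == code:
            return trail
        for child in node:
            found = walk(child, trail)
            if found:
                return found
        return None
    return walk(ET.fromstring(xml_text), [])
```

Because the generic hierarchy is explicit in the element nesting, subsumption queries ("is I21.0 a kind of I20-I25?") reduce to simple tree walks.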
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.
Honkela, Antti; Valpola, Harri
2004-07-01
Bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. Bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length, in addition to the Bayesian view of it as the misfit of the posterior approximation and a lower bound on model evidence. Combining these two viewpoints provides interesting insights into the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views of many parts of the problem, such as model comparison and pruning, and helps explain many phenomena occurring in learning.
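The equivalence of the two views can be checked numerically on a toy discrete model: the ensemble-learning cost (negative ELBO, the bits-back code length) equals the KL misfit to the true posterior plus the negative log evidence. All probabilities below are made up for illustration.

```python
# Toy check of the code-length / Bayesian-misfit identity:
#   -ELBO = KL(q || posterior) - log p(x)
# for a discrete latent z in {0, 1, 2}. Numbers are arbitrary.
import math

prior = [0.5, 0.3, 0.2]   # p(z)
lik   = [0.9, 0.2, 0.1]   # p(x | z) for one observed x
q     = [0.7, 0.2, 0.1]   # variational approximation q(z)

evidence = sum(p * l for p, l in zip(prior, lik))           # p(x)
posterior = [p * l / evidence for p, l in zip(prior, lik)]  # p(z | x)

# Negative ELBO: expected total description length under q
neg_elbo = sum(qi * (math.log(qi) - math.log(pi * li))
               for qi, pi, li in zip(q, prior, lik))

# Misfit of the posterior approximation
kl_post = sum(qi * math.log(qi / po) for qi, po in zip(q, posterior))
```

The identity holds term by term because `prior * lik = posterior * evidence`, so the code length is minimized exactly when `q` equals the true posterior.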
A new conformal absorbing boundary condition for finite element meshes and parallelization of FEMATS
NASA Technical Reports Server (NTRS)
Chatterjee, A.; Volakis, J. L.; Nguyen, J.; Nurnberger, M.; Ross, D.
1993-01-01
Some of the progress toward the development and parallelization of an improved version of the finite element code FEMATS is described. This is a finite element code for computing the scattering by arbitrarily shaped three-dimensional composite scatterers. The following tasks were worked on during the report period: (1) new absorbing boundary conditions (ABCs) for truncating the finite element mesh; (2) mixed mesh termination schemes; (3) hierarchical elements and multigridding; (4) parallelization; and (5) various modeling enhancements (antenna feeds, anisotropy, and higher order GIBC).
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
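The core idea of such a hierarchical (random-intercept) model can be sketched as shrinkage: a stand's prediction is pulled toward the overall mean, more strongly for stands with few measured trees. The variance components `tau2` (between stands) and `sigma2` (within stands) are assumed known here for simplicity, whereas the paper estimates them from data.

```python
# Random-intercept shrinkage sketch for clustered (stand-level) data.
# Small stands borrow strength from the overall mean; large stands
# keep predictions close to their own mean. Variances assumed known.

def shrunken_stand_means(stands, tau2, sigma2):
    """stands: dict stand_id -> list of observed tree heights."""
    all_h = [h for hs in stands.values() for h in hs]
    grand = sum(all_h) / len(all_h)
    preds = {}
    for sid, hs in stands.items():
        n = len(hs)
        w = tau2 / (tau2 + sigma2 / n)   # reliability weight in [0, 1)
        preds[sid] = grand + w * (sum(hs) / n - grand)
    return preds
```

Ignoring the clustering (as the criticized regression models do) amounts to setting `w = 1` for every stand, which overfits small stands.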
ERIC Educational Resources Information Center
Booth, Chase R.; Brown, Hannah L.; Eason, Elizabeth G.; Wallot, Sebastian; Kelty-Stephen, Damian G.
2018-01-01
Reader expectations form across hierarchical scales of discourse (e.g., from coarse to fine: genre, narrative, syntax). Cross-scale interactivity produces word reading times (RTs) with multifractal structure. After introducing multifractals, we test two hypotheses regarding their relevance to reader expectations: (1) multifractal evidence of…
Translation of Genotype to Phenotype by a Hierarchy of Cell Subsystems.
Yu, Michael Ku; Kramer, Michael; Dutkowski, Janusz; Srivas, Rohith; Licon, Katherine; Kreisberg, Jason; Ng, Cherie T; Krogan, Nevan; Sharan, Roded; Ideker, Trey
2016-02-24
Accurately translating genotype to phenotype requires accounting for the functional impact of genetic variation at many biological scales. Here we present a strategy for genotype-phenotype reasoning based on existing knowledge of cellular subsystems. These subsystems and their hierarchical organization are defined by the Gene Ontology or a complementary ontology inferred directly from previously published datasets. Guided by the ontology's hierarchical structure, we organize genotype data into an "ontotype," that is, a hierarchy of perturbations representing the effects of genetic variation at multiple cellular scales. The ontotype is then interpreted using logical rules generated by machine learning to predict phenotype. This approach substantially outperforms previous, non-hierarchical methods for translating yeast genotype to cell growth phenotype, and it accurately predicts the growth outcomes of two new screens of 2,503 double gene knockouts impacting DNA repair or nuclear lumen. Ontotypes also generalize to larger knockout combinations, setting the stage for interpreting the complex genetics of disease.
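The "ontotype" construction can be sketched as propagating gene knockouts up an ontology, yielding a perturbation count for every subsystem. The tiny ontology, gene-to-term assignments, and gene names below are invented for illustration, and converging DAG paths are not deduplicated as a careful implementation would do.

```python
# Sketch of an ontotype: a genotype (set of knocked-out genes) becomes
# a hierarchy of perturbation counts over cellular subsystems.
# Ontology and gene assignments are illustrative assumptions.

ONTOLOGY_PARENTS = {                 # child term -> parent terms
    "DNA repair": ["nuclear process"],
    "nuclear lumen": ["nuclear process"],
    "nuclear process": ["cell"],
}
GENE_TERMS = {                       # gene -> most specific term(s)
    "RAD51": ["DNA repair"],
    "MRE11": ["DNA repair"],
    "NUP60": ["nuclear lumen"],
}

def ontotype(knocked_out):
    counts = {}
    def bump(term):
        counts[term] = counts.get(term, 0) + 1
        for parent in ONTOLOGY_PARENTS.get(term, []):
            bump(parent)
    for gene in knocked_out:
        for term in GENE_TERMS[gene]:
            bump(term)
    return counts
```

A downstream classifier (the paper uses machine-learned logical rules) then maps these counts, rather than raw genotypes, to phenotype.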
A Component-Centered Meta-Analysis of Family-Based Prevention Programs for Adolescent Substance Use
Roseth, Cary J.; Fosco, Gregory M.; Lee, You-kyung; Chen, I-Chien
2016-01-01
Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553
Collins, Susan E; Carey, Kate B; Smyth, Joshua
2005-07-01
This study was a post hoc analysis of linguistic and motivation variables found in writing samples following the administration of two mailed brief interventions. At-risk college drinkers (N = 100) received personalized normative feedback (PNF) or an alcohol education (AE) brochure via mail. Participants responded to open-ended questions describing their reactions to the information they received. The writing samples were then coded for linguistic characteristics using the Linguistic Inquiry and Word Count program and for proportions of self-motivational statements using a modified version of the Motivational Interviewing Skills Code. Group comparisons indicated that the PNF group used a significantly higher percentage of first-person-singular and school-related words, whereas the AE group used a higher percentage of discrepancy, second-person and body-related words. Furthermore, the PNF group produced more language consistent with motivation to change than did the AE group. Hierarchical regressions testing mediation and moderation indicated that linguistic references to school and motivation moderated the group effect on changes in consumption during the heaviest drinking week. Further, although the group predicted reduction in heavy, episodic drinking, its effect was completely mediated by linguistic variables. Findings confirmed that PNF elicits distinct verbal responses that are associated with increased motivation and behavior change.
NASA Astrophysics Data System (ADS)
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
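The Equal Load Sharing picture described above lends itself to a short Monte Carlo sketch: wires draw random Weibull strengths, and under quasi-static loading the bundle's peak load is the maximum over failure states of (number of survivors) × (stress at the next failure). The shape, scale, and seed values are arbitrary choices, not parameters from the paper.

```python
# Monte Carlo sketch of an Equal Load Sharing (ELS) wire bundle with
# Weibull-distributed strengths. With the k weakest wires broken, the
# bundle carries (n - k) * strengths[k] just before the next failure.
import random

def els_peak_load(n_wires, shape=5.0, scale=1.0, seed=42):
    rng = random.Random(seed)
    strengths = sorted(rng.weibullvariate(scale, shape)
                       for _ in range(n_wires))
    return max((n_wires - k) * s for k, s in enumerate(strengths))
```

The "Hierarchical Load Sharing" criterion of the paper replaces the uniform redistribution implicit in `(n_wires - k)` with a redistribution weighted by each wire's cross section and helical angle.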
Complexity of major UK companies between 2006 and 2010: Hierarchical structure method approach
NASA Astrophysics Data System (ADS)
Ulusoy, Tolga; Keskin, Mustafa; Shirvani, Ayoub; Deviren, Bayram; Kantar, Ersin; Çaǧrı Dönmez, Cem
2012-11-01
This study reports on the topology of the top 40 UK companies, analysed for predictive verification of markets over the period 2006-2010 by applying the concepts of the minimal spanning tree (MST) and the hierarchical tree (HT). Construction of the MST and the HT is confined to a brief description of the methodology and a definition of the correlation function between a pair of companies based on the London Stock Exchange (LSE) index, in order to quantify synchronization between the companies. A derivation of hierarchical organization and the construction of minimal spanning and hierarchical trees for the 2006-2008 and 2008-2010 periods have been used, and the results validate the predictive verification of the applied semantics. The trees are known to be useful tools for perceiving and detecting the global structure, taxonomy and hierarchy in financial data. From these trees, two different clusters of companies in 2006 were detected. They also show three clusters in 2008 and two between 2008 and 2010, according to their proximity. The clusters match each other as regards their common production activities or their strong interrelationship. The key companies are generally given by major economic activities, as expected. This work gives a comparative approach between the MST and HT methods from statistical physics and information theory, together with an analysis of financial markets that may give new, valuable and useful information on financial market dynamics.
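The correlation-to-tree construction can be sketched with the standard mapping d_ij = sqrt(2(1 − rho_ij)), which turns a correlation matrix into a distance matrix, followed by Prim's algorithm for the minimal spanning tree. The 4×4 correlation matrix below is a made-up toy, not LSE data.

```python
# Sketch of the MST construction used in econophysics: correlations
# between asset returns are mapped to distances and a minimal spanning
# tree is grown greedily (Prim's algorithm). Toy correlation matrix.
import math

RHO = [
    [1.0, 0.8, 0.3, 0.2],
    [0.8, 1.0, 0.4, 0.1],
    [0.3, 0.4, 1.0, 0.7],
    [0.2, 0.1, 0.7, 1.0],
]

def mst_edges(rho):
    n = len(rho)
    dist = [[math.sqrt(2.0 * (1.0 - rho[i][j])) for j in range(n)]
            for i in range(n)]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

Highly correlated pairs (small distances) end up adjacent in the tree, which is what makes the MST a clustering device for markets.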
Subsumption principles underlying medical concept systems and their formal reconstruction.
Bernauer, J.
1994-01-01
Conventional medical concept systems represent generic concept relations by hierarchical coding principles. Often, these coding principles constrain the concept system and reduce the potential for automatic derivation of subsumption. Formal reconstruction of medical concept systems is an approach that is based on the conceptual representation of meanings and that allows for the application of formal criteria for subsumption. Those criteria must reflect the intuitive principles of subordination which underlie conventional medical concept systems. In particular, these are: the subordinate concept results (1) from adding a specializing criterion to the superordinate concept, (2) from refining the primary category, or a criterion of the superordinate concept, by a concept that is less general, (3) from adding a partitive criterion to a criterion of the superordinate, (4) from refining a criterion by a concept that is less comprehensive, and finally (5) from coordinating the superordinate concept, or one of its criteria. This paper introduces a formalism called BERNWARD that aims at the formal reconstruction of medical concept systems according to these intuitive principles. The automatic derivation of hierarchical relations is supported primarily by explicit generic and explicit partitive hierarchies of concepts and, secondly, by two formal criteria that are based on the structure of concept descriptions and explicit hierarchical relations between their elements, namely formal subsumption and part-sensitive subsumption. Formal subsumption takes only generic relations into account; part-sensitive subsumption additionally regards partitive relations between criteria. This approach seems to be flexible enough to cope with unforeseeable effects of partitive criteria on subsumption. PMID:7949907
Ecological subregion codes by county, coterminous United States
Victor A. Rudis
1999-01-01
This publication presents the National Hierarchical Framework of Ecological Units (ECOMAP 1993) by county for the coterminous United States. Assignment of the framework to individual counties is based on the predominant area by province and section to facilitate integration of county-referenced information with areas of uniform ecological potential. Included are maps...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erez, Mattan; Yelick, Katherine; Sarkar, Vivek
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges. Programmability: rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance portability: just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy efficiency: communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: runtime and language interoperability with MPI and OpenMP to encourage broad adoption.
Bedoya, David; Manolakos, Elias S; Novotny, Vladimir
2011-03-01
Indices of Biological Integrity (IBI) are considered valid indicators of the overall health of a water body because the biological community is an endpoint within natural systems. However, prediction of biological integrity using information from multi-parameter environmental observations is a challenging problem due to the hierarchical organization of the natural environment, the existence of nonlinear inter-dependencies among variables as well as natural stochasticity and measurement noise. We present a method for predicting the Fish Index of Biological Integrity (IBI) using multiple environmental observations at the state-scale in Ohio. Instream (chemical and physical quality) and offstream parameters (regional and local upstream land uses, stream fragmentation, and point source density and intensity) are used for this purpose. The IBI predictions are obtained using the environmental site-similarity concept and following a simple to implement leave-one-out cross validation approach. An IBI prediction for a sampling site is calculated by averaging the observed IBI scores of observations clustered in the most similar branch of a dendrogram--a hierarchical clustering tree of environmental observations--built using the rest of the observations. The standardized Euclidean distance is used to assess dissimilarity between observations. The constructed predictive model was able to explain 61% of the IBI variability statewide. Stream fragmentation and regional land use explained 60% of the variability; the remaining 1% was explained by instream habitat quality. Metrics related to local land use, water quality, and point source density and intensity did not improve the predictive model at the state-scale. The impact of local environmental conditions was evaluated by comparing local characteristics between well- and mispredicted sites. Significant differences in local land use patterns and upstream fragmentation density explained some of the model's over-predictions.
Local land use conditions explained some of the model's IBI under-predictions at the state-scale since none of the variables within this group were included in the best final predictive model. Under-predicted sites also had higher levels of downstream fragmentation. The proposed variables ranking and predictive modeling methodology is very well suited for the analysis of hierarchical environments, such as natural fresh water systems, with many cross-correlated environmental variables. It is computationally efficient, can be fully automated, does not make any pre-conceived assumptions on the variables interdependency structure (such as linearity), and it is able to rank variables in a database and generate IBI predictions using only non-parametric easy to implement hierarchical clustering. Copyright © 2011 Elsevier Ltd. All rights reserved.
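The site-similarity prediction described above can be sketched with a standardized Euclidean distance and an average over the most similar sites. Note this sketch replaces the paper's dendrogram branch with a plain k-nearest-neighbour average, and all environmental values and IBI scores below are invented.

```python
# Sketch of environmental site-similarity IBI prediction: standardize
# each environmental variable, then predict a site's IBI as the mean
# IBI of its k most similar sites. Data values are illustrative.

def standardize(rows):
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    sds = [max((sum((x - m) ** 2 for x in c) / len(c)) ** 0.5, 1e-12)
           for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(r, means, sds)] for r in rows]

def predict_ibi(env_rows, ibi_scores, target_idx, k=2):
    z = standardize(env_rows)
    t = z[target_idx]
    d = sorted((sum((a - b) ** 2 for a, b in zip(t, z[i])) ** 0.5, i)
               for i in range(len(z)) if i != target_idx)
    nearest = [i for _, i in d[:k]]
    return sum(ibi_scores[i] for i in nearest) / k
```

Standardization matters because the environmental variables (land-use fractions, fragmentation counts, chemistry) live on very different scales.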
Toward a Unified Sub-symbolic Computational Theory of Cognition
Butz, Martin V.
2016-01-01
This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a potential unification, it is discussed how abstract cognitive, conceptualized knowledge and understanding may be learned from actively gathered sensorimotor experiences. The unification rests on the free energy-based inference principle, which essentially implies that the brain builds a predictive, generative model of its environment. Neural activity-oriented inference causes the continuous adaptation of the currently active predictive encodings. Neural structure-oriented inference causes the longer term adaptation of the developing generative model as a whole. Finally, active inference strives for maintaining internal homeostasis, causing goal-directed motor behavior. To learn abstract, hierarchical encodings, however, it is proposed that free energy-based inference needs to be enhanced with structural priors, which bias cognitive development toward the formation of particular, behaviorally suitable encoding structures. As a result, it is hypothesized how abstract concepts can develop from, and thus how they are structured by and grounded in, sensorimotor experiences. Moreover, it is sketched out how symbol-like thought can be generated by a temporarily active set of predictive encodings, which constitute a distributed neural attractor in the form of an interactive free-energy minimum.
The activated, interactive network attractor essentially characterizes the semantics of a concept or a concept composition, such as an actual or imagined situation in our environment. Temporal successions of attractors then encode unfolding semantics, which may be generated by a behavioral or mental interaction with an actual or imagined situation in our environment. Implications, further predictions, possible verification, and falsifications, as well as potential enhancements into a fully spelled-out unified theory of cognition are discussed at the end of the paper. PMID:27445895
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on the integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), via the addition of encryption in the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, the Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
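The lossless property such schemes rely on comes from the integer wavelet transform mapping integers to integers invertibly. A one-level integer Haar (S-) transform, shown below as a 1-D sketch rather than the paper's 2-D codec, demonstrates exact reconstruction.

```python
# One-level integer Haar (S-) transform: integer in, integer out, and
# exactly invertible, so any coding built on top can be lossless.
# 1-D sketch only; image codecs apply this separably in 2-D.

def iwt_haar_forward(x):
    assert len(x) % 2 == 0
    approx, detail = [], []
    for a, b in zip(x[::2], x[1::2]):
        d = a - b                 # detail coefficient
        s = b + (d >> 1)          # floor((a + b) / 2), integers only
        approx.append(s)
        detail.append(d)
    return approx, detail

def iwt_haar_inverse(approx, detail):
    x = []
    for s, d in zip(approx, detail):
        b = s - (d >> 1)          # exact inverse of the lifting steps
        x += [b + d, b]
    return x
```

The arithmetic shift `d >> 1` implements floor division, which is what makes the rounding in forward and inverse transforms cancel exactly, even for negative values.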
With or without you: predictive coding and Bayesian inference in the brain
Aitchison, Laurence; Lengyel, Máté
2018-01-01
Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
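The link between the two theories can be illustrated in one dimension: a predictive-coding style update, driven purely by precision-weighted prediction errors, settles on the Bayesian posterior mean for a Gaussian prior-likelihood pair. The numbers and learning rate below are arbitrary illustrations, not a model from the paper.

```python
# Toy demonstration that prediction-error dynamics implement Bayesian
# inference in the conjugate Gaussian case: gradient ascent on the log
# joint, written as a balance of bottom-up and top-down errors.

def pc_estimate(x, mu_prior, var_prior, var_lik, lr=0.05, steps=2000):
    mu = mu_prior
    for _ in range(steps):
        sensory_pe = (x - mu) / var_lik          # bottom-up error
        prior_pe = (mu - mu_prior) / var_prior   # top-down error
        mu += lr * (sensory_pe - prior_pe)       # gradient of log joint
    return mu

def posterior_mean(x, mu_prior, var_prior, var_lik):
    # Closed-form precision-weighted average for comparison
    w = (1 / var_prior) / (1 / var_prior + 1 / var_lik)
    return w * mu_prior + (1 - w) * x
```

The fixed point of the error-driven dynamics is exactly the precision-weighted average, which is the sense in which predictive coding can "serve" Bayesian inference while remaining only one possible realization of it.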
ERIC Educational Resources Information Center
Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.
2002-01-01
Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains…
Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.
2013-12-01
Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrography Dataset) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network.
Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.
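The two-tiered hierarchical structure described above can be illustrated, in a deliberately simplified form, by partial pooling of catchment means: lower-tier (catchment) estimates are shrunk toward an upper-tier (basin-wide) mean in proportion to how little data each catchment has. The variable names and numbers below are hypothetical and do not reflect the SPARROW parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly runoff coefficients for three catchments with
# increasingly long records (5, 12, and 40 observations).
true_mu, tau, sigma = 0.5, 0.1, 0.2
groups = [rng.normal(rng.normal(true_mu, tau), sigma, size=n) for n in (5, 12, 40)]

grand = np.mean([g.mean() for g in groups])       # basin-wide (upper-tier) mean
weights, pooled = [], []
for g in groups:
    # Precision-weighted shrinkage: weight on the catchment's own mean grows
    # with sample size relative to the within/between variance ratio.
    w = g.size / (g.size + sigma**2 / tau**2)
    weights.append(w)
    pooled.append(w * g.mean() + (1 - w) * grand)
# Short-record catchments borrow strength from the basin-wide mean; long
# records stay close to their own data -- lower-variance, less biased estimates.
```

Full hierarchical models replace this closed-form shrinkage with MCMC over all tiers, but the pooling behavior is the same.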
A review of predictive coding algorithms.
Spratling, M W
2017-03-01
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
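Of the five algorithms reviewed, linear predictive coding is the most compact to sketch: fit coefficients that predict each sample from its predecessors, and transmit only the (low-variance) prediction error. The least-squares formulation below is a generic illustration, not the review's notation.

```python
import numpy as np

def lpc_coefficients(x, order):
    """Least-squares fit of a_1..a_p minimizing ||x[t] - sum_k a_k x[t-k]||^2."""
    X = np.column_stack([x[order - k:-k] for k in range(1, order + 1)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

rng = np.random.default_rng(1)
t = np.arange(400)
signal = np.sin(0.3 * t) + 0.05 * rng.standard_normal(t.size)

a = lpc_coefficients(signal, order=4)
lagged = np.column_stack([signal[4 - k:-k] for k in range(1, 5)])
residual = signal[4:] - lagged @ a
# The residual carries far less variance than the signal itself, which is
# exactly the efficiency argument predictive coding makes for neural codes.
```

The neuroscience variants differ mainly in what plays the role of the predictor (retinal surround, cortical feedback) and how the fit is optimised.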
Wang, Yong-Cui; Wang, Yong; Yang, Zhi-Xia; Deng, Nai-Yang
2011-06-20
Enzymes are known as the largest class of proteins, and their functions are usually annotated by the Enzyme Commission (EC), which uses a hierarchical structure, i.e., four numbers separated by periods, to classify the function of enzymes. Automatically categorizing enzymes into the EC hierarchy is crucial to understanding their specific molecular mechanisms. In this paper, we introduce two key improvements in predicting enzyme function within the machine learning framework. One is to introduce efficient sequence encoding methods for representing given proteins. The second is to develop a structure-based prediction method with low computational complexity. In particular, we propose to use the conjoint triad feature (CTF) to represent the given protein sequences by considering not only the composition of amino acids but also the neighbor relationships in the sequence. Then we develop a support vector machine (SVM)-based method, named SVMHL (SVM for hierarchy labels), to output enzyme function by fully considering the hierarchical structure of the EC. The experimental results show that our SVMHL with the CTF outperforms SVMHL with the amino acid composition (AAC) feature both in predictive accuracy and Matthews correlation coefficient (MCC). In addition, SVMHL with the CTF obtains accuracy and MCC ranging from 81% to 98% and 0.82 to 0.98 when predicting the first three EC digits on a low-homology enzyme dataset. We further demonstrate that our method outperforms methods which do not take account of the hierarchical relationship among enzyme categories, as well as alternative methods which incorporate prior knowledge about inter-class relationships. Our structure-based prediction model, SVMHL with the CTF, reduces the computational complexity and outperforms the alternative approaches in enzyme function prediction. Therefore, our new method will be a useful tool for the enzyme function prediction community.
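The conjoint triad feature is simple to sketch: amino acids are mapped to a small number of physicochemical classes, and the frequencies of all class triads along the sequence form the feature vector. The 7-class grouping below is one commonly used grouping, assumed here for illustration; the resulting vector (7³ = 343 dimensions) would then be fed to the SVM.

```python
from itertools import product

# One commonly used 7-class amino-acid grouping (assumed here; groupings vary).
CLASSES = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
CLASS_OF = {aa: i for i, grp in enumerate(CLASSES) for aa in grp}

def conjoint_triad(seq):
    """343-dimensional conjoint triad feature: normalized frequency of each
    triad of consecutive amino-acid classes in the sequence."""
    counts = {t: 0 for t in product(range(7), repeat=3)}
    for i in range(len(seq) - 2):
        triad = tuple(CLASS_OF[a] for a in seq[i:i + 3])
        counts[triad] += 1
    total = max(1, len(seq) - 2)
    return [counts[t] / total for t in sorted(counts)]

vec = conjoint_triad("MKTAYIAKQR")   # short hypothetical peptide
```

Unlike plain amino acid composition, the triad counts capture local neighbor relationships, which is the improvement the abstract attributes to the CTF.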
The AGORA High-resolution Galaxy Simulations Comparison Project
NASA Astrophysics Data System (ADS)
Kim, Ji-hoon; Abel, Tom; Agertz, Oscar; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte; Conroy, Charlie; Dekel, Avishai; Gnedin, Nickolay Y.; Goldbaum, Nathan J.; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Hummels, Cameron B.; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly; Kravtsov, Andrey V.; Krumholz, Mark R.; Kuhlen, Michael; Leitner, Samuel N.; Madau, Piero; Mayer, Lucio; Moody, Christopher E.; Nagamine, Kentaro; Norman, Michael L.; Onorbe, Jose; O'Shea, Brian W.; Pillepich, Annalisa; Primack, Joel R.; Quinn, Thomas; Read, Justin I.; Robertson, Brant E.; Rocha, Miguel; Rudd, Douglas H.; Shen, Sijing; Smith, Britton D.; Szalay, Alexander S.; Teyssier, Romain; Thompson, Robert; Todoroki, Keita; Turk, Matthew J.; Wadsley, James W.; Wise, John H.; Zolotov, Adi; the AGORA Collaboration
2014-01-01
We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ~100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of eight galaxies with halo masses M vir ~= 1010, 1011, 1012, and 1013 M ⊙ at z = 0 and two different ("violent" and "quiescent") assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback prescriptions will be tuned to provide a realistic interstellar and circumgalactic medium using a non-cosmological disk galaxy simulation. Cosmological runs will be systematically compared with each other using a common analysis toolkit and validated against observations to verify that the solutions are robust—i.e., that the astrophysical assumptions are responsible for any success, rather than artifacts of particular implementations. The goals of the AGORA project are, broadly speaking, to raise the realism and predictive power of galaxy simulations and the understanding of the feedback processes that regulate galaxy "metabolism." The initial conditions for the AGORA galaxies as well as simulation outputs at various epochs will be made publicly available to the community. 
The proof-of-concept dark-matter-only test of the formation of a galactic halo with a z = 0 mass of M vir ~= 1.7 × 1011 M ⊙ by nine different versions of the participating codes is also presented to validate the infrastructure of the project.
Interactive metagenomic visualization in a Web browser.
Ondov, Brian D; Bergman, Nicholas H; Phillippy, Adam M
2011-09-30
A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license, and available from: http://krona.sourceforge.net.
Dewenter, Heike; Heitmann, Kai U; Treinat, Lars; Thun, Sylvia
2014-01-01
According to German legal specifications, each federal state is obliged to transmit infection prevention data to the relevant health authority. In cases of reasonable suspicion of, affection by, or death from notifiable infectious diseases, the specific information is communicated in differing ways by laboratories and physicians. Proprietary transmission channels carry risks such as deficient or incomplete availability of data; at the least, these circumstances imply unpredictable health-related hazards for the population. The internationally established medical terminology SNOMED CT can contribute to semantic interoperability and a highly specific description of diagnoses and procedures. The applicability of SNOMED CT is tested in the domain of diagnostic findings with respect to notifiable infectious agents. In addition, specific hierarchical links within the terminology, from the agents to the associated infectious diseases, are expected and verified. As the carrier of the information, HL7's Clinical Document Architecture (CDA) is used, with appropriate CDA templates designed to define the contents of the notifiable disease documentation. The results demonstrate that the entirety of the notifiable infectious agents is represented in SNOMED CT, with 100 percent of agents mapped to codes. Furthermore, each single term is hierarchically connected to the related infectious diseases. The use of SNOMED CT for the purpose of infection prevention in Germany is tied to licensing and license costs. Irrespective of this, the use of SNOMED CT shows obvious advantages in this field, and an implementation of the terminology can be recommended.
Metabolic pathways for the whole community.
Hanson, Niels W; Konwar, Kishori M; Hawley, Alyse K; Altman, Tomer; Karp, Peter D; Hallam, Steven J
2014-07-22
A convergence of high-throughput sequencing and computational power is transforming biology into information science. Despite these technological advances, converting bits and bytes of sequence information into meaningful insights remains a challenging enterprise. Biological systems operate on multiple hierarchical levels from genomes to biomes. Holistic understanding of biological systems requires agile software tools that permit comparative analyses across multiple information levels (DNA, RNA, protein, and metabolites) to identify emergent properties, diagnose system states, or predict responses to environmental change. Here we adopt the MetaPathways annotation and analysis pipeline and Pathway Tools to construct environmental pathway/genome databases (ePGDBs) that describe microbial community metabolism using MetaCyc, a highly curated database of metabolic pathways and components covering all domains of life. We evaluate Pathway Tools' performance on three datasets with different complexity and coding potential, including simulated metagenomes, a symbiotic system, and the Hawaii Ocean Time-series. We define accuracy and sensitivity relationships between read length, coverage and pathway recovery and evaluate the impact of taxonomic pruning on ePGDB construction and interpretation. Resulting ePGDBs provide interactive metabolic maps, predict emergent metabolic pathways associated with biosynthesis and energy production and differentiate between genomic potential and phenotypic expression across defined environmental gradients. This multi-tiered analysis provides the user community with specific operating guidelines, performance metrics and prediction hazards for more reliable ePGDB construction and interpretation. Moreover, it demonstrates the power of Pathway Tools in predicting metabolic interactions in natural and engineered ecosystems.
Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth
2013-01-01
Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890
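The count-model backbone of the approach above can be sketched with a (non-spatial) Poisson GLM fitted by iteratively reweighted least squares. The covariate and coefficients below are simulated for illustration; the paper's spatially smoothed variable selection and reversible jump MCMC machinery are not attempted here.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit log E[y] = X @ beta by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)     # guard against overflow in exp
        mu = np.exp(eta)
        z = eta + (y - mu) / mu              # working response
        W = mu                               # Poisson variance equals the mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 5.0, size=2000)        # hypothetical daily precipitation
X = np.column_stack([np.ones(rain.size), rain])
y = rng.poisson(np.exp(-1.0 + 0.05 * rain))  # simulated daily claim counts
beta = poisson_irls(X, y)                    # recovers roughly (-1.0, 0.05)
```

A hierarchical version would let the rain coefficient vary by municipality with spatial smoothing, which is what the Gibbs/reversible-jump machinery in the paper estimates.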
Integrated multiscale biomaterials experiment and modelling: a perspective
Buehler, Markus J.; Genin, Guy M.
2016-01-01
Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Intercode comparison of gyrokinetic global electromagnetic modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Görler, T., E-mail: tobias.goerler@ipp.mpg.de; Tronko, N.; Hornsby, W. A.
Aiming to fill a corresponding lack of sophisticated test cases for global electromagnetic gyrokinetic codes, a new hierarchical benchmark is proposed. Starting from established test sets with adiabatic electrons, fully gyrokinetic electrons and electrostatic fluctuations are taken into account before finally studying the global electromagnetic micro-instabilities. Results from up to five codes, involving representatives of different numerical approaches such as particle-in-cell, Eulerian, and semi-Lagrangian methods, are shown. By means of spectrally resolved growth rates and frequencies and mode-structure comparisons, agreement can be confirmed on ion-gyroradius scales, thus providing confidence in the correct implementation of the underlying equations.
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model, GR4J, by coherently assimilating the uncertainties from the...
Jeffrey E. Schneiderman; Hong S. He; Frank R. Thompson; William D. Dijak; Jacob S. Fraser
2015-01-01
Tree species distribution and abundance are affected by forces operating across a hierarchy of ecological scales. Process and species distribution models have been developed emphasizing forces at different scales. Understanding model agreement across hierarchical scales provides perspective on prediction uncertainty and ultimately enables policy makers and managers to...
ERIC Educational Resources Information Center
Keegan, John P.; Chan, Fong; Ditchman, Nicole; Chiu, Chung-Yi
2012-01-01
The main objective of this study was to validate Pender's Health Promotion Model (HPM) as a motivational model for exercise/physical activity self-management for people with spinal cord injuries (SCIs). Quantitative descriptive research design using hierarchical regression analysis (HRA) was used. A total of 126 individuals with SCI were recruited…
Application of a Database System for Korean Military Personnel Management.
1987-03-01
Lexical architecture based on a hierarchy of codes for high-speed string correction
NASA Astrophysics Data System (ADS)
de Bertrand de Beuvron, Francois; Trigano, Philippe
1992-03-01
AI systems intended for the general public have to be truly tolerant of errors. These errors can be of several kinds: typographic, phonetic, grammatical, or semantic. A special lexical dictionary architecture has been designed to deal with the first two. It extends the hierarchical file method of E. Tanaka and Y. Kojima.
RNA 3D Structural Motifs: Definition, Identification, Annotation, and Database Searching
NASA Astrophysics Data System (ADS)
Nasalean, Lorena; Stombaugh, Jesse; Zirbel, Craig L.; Leontis, Neocles B.
Structured RNA molecules resemble proteins in the hierarchical organization of their global structures, folding and broad range of functions. Structured RNAs are composed of recurrent modular motifs that play specific functional roles. Some motifs direct the folding of the RNA or stabilize the folded structure through tertiary interactions. Others bind ligands or proteins or catalyze chemical reactions. Therefore, it is desirable, starting from the RNA sequence, to be able to predict the locations of recurrent motifs in RNA molecules. Conversely, the potential occurrence of one or more known 3D RNA motifs may indicate that a genomic sequence codes for a structured RNA molecule. To identify known RNA structural motifs in new RNA sequences, precise structure-based definitions are needed that specify the core nucleotides of each motif and their conserved interactions. By comparing instances of each recurrent motif and applying base pair isostericity relations, one can identify neutral mutations that preserve its structure and function in the contexts in which it occurs.
Le, My Thanh; van Veldhuizen, Mart; Porcelli, Ida; Bongaerts, Roy J.; Gaskin, Duncan J. H.; Pearson, Bruce M.; van Vliet, Arnoud H. M.
2015-01-01
Assembly of flagella requires strict hierarchical and temporal control via flagellar sigma and anti-sigma factors, regulatory proteins and the assembly complex itself, but to date non-coding RNAs (ncRNAs) have not been described to regulate genes directly involved in flagellar assembly. In this study we have investigated the possible role of two ncRNA paralogs (CjNC1, CjNC4) in flagellar assembly and gene regulation of the diarrhoeal pathogen Campylobacter jejuni. CjNC1 and CjNC4 are 37/44 nt identical and predicted to target the 5' untranslated region (5' UTR) of genes transcribed from the flagellar sigma factor σ54. Orthologs of the σ54-dependent 5' UTRs and ncRNAs are present in the genomes of other thermophilic Campylobacter species, and transcription of CjNC1 and CjNC4 is dependent on the flagellar sigma factor σ28. Surprisingly, inactivation and overexpression of CjNC1 and CjNC4 did not affect growth, motility or flagella-associated phenotypes such as autoagglutination. However, CjNC1 and CjNC4 were able to mediate sequence-dependent, but Hfq-independent, partial repression of fluorescence of predicted target 5' UTRs in an Escherichia coli-based GFP reporter gene system. This hints towards a subtle role for the CjNC1 and CjNC4 ncRNAs in post-transcriptional gene regulation in thermophilic Campylobacter species, and suggests that the currently used phenotypic methodologies are insufficiently sensitive to detect such subtle phenotypes. The lack of a role of Hfq in the E. coli GFP-based system indicates that the CjNC1 and CjNC4 ncRNAs may mediate post-transcriptional gene regulation in ways that do not conform to the paradigms obtained from the Enterobacteriaceae. PMID:26512728
Brely, Lucas; Bosia, Federico; Pugno, Nicola M
2018-06-20
Contact unit size reduction is a widely studied mechanism as a means to improve adhesion in natural fibrillar systems, such as those observed in beetles or geckos. However, these animals also display complex structural features in the way the contact is subdivided in a hierarchical manner. Here, we study the influence of hierarchical fibrillar architectures on the load distribution over the contact elements of the adhesive system, and the corresponding delamination behaviour. We present an analytical model to derive the load distribution in a fibrillar system loaded in shear, including hierarchical splitting of contacts, i.e. a "hierarchical shear-lag" model that generalizes the well-known shear-lag model used in mechanics. The influence on the detachment process is investigated introducing a numerical procedure that allows the derivation of the maximum delamination force as a function of the considered geometry, including statistical variability of local adhesive energy. Our study suggests that contact splitting generates improved adhesion only in the ideal case of extremely compliant contacts. In real cases, to produce efficient adhesive performance, contact splitting needs to be coupled with hierarchical architectures to counterbalance high load concentrations resulting from contact unit size reduction, generating multiple delamination fronts and helping to avoid detrimental non-uniform load distributions. We show that these results can be summarized in a generalized adhesion scaling scheme for hierarchical structures, proving the beneficial effect of multiple hierarchical levels. The model can thus be used to predict the adhesive performance of hierarchical adhesive structures, as well as the mechanical behaviour of composite materials with hierarchical reinforcements.
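The load-distribution idea behind the single-level shear-lag picture can be sketched with a discrete spring chain: contact elements attach a loaded backbone to the substrate, and stiff contacts concentrate load at the pulled edge while compliant contacts spread it. The stiffness values below are hypothetical, and this is only the base ingredient, not the authors' hierarchical model.

```python
import numpy as np

def contact_loads(n, k_backbone, k_contact, force):
    """Load carried by each of n contact springs when a chain of backbone
    springs (stiffness k_backbone) is pulled in shear at one end."""
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k_contact                  # contact spring to the substrate
        if i + 1 < n:                         # backbone spring to the next node
            K[i, i] += k_backbone; K[i + 1, i + 1] += k_backbone
            K[i, i + 1] -= k_backbone; K[i + 1, i] -= k_backbone
    f = np.zeros(n); f[0] = force             # shear load applied at one edge
    u = np.linalg.solve(K, f)                 # nodal displacements
    return k_contact * u                      # per-contact loads; sum = force

stiff = contact_loads(20, k_backbone=100.0, k_contact=10.0, force=1.0)
soft  = contact_loads(20, k_backbone=100.0, k_contact=0.1,  force=1.0)
# Stiff contacts pile most of the load onto the first element (an edge
# delamination front); compliant contacts approach the uniform distribution
# that contact-splitting arguments implicitly assume.
```

Hierarchical splitting, in the paper's analysis, is what counteracts the edge concentration seen in the stiff case.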
Hierarchical majorana neutrinos from democratic mass matrices
NASA Astrophysics Data System (ADS)
Yang, Masaki J. S.
2016-09-01
In this paper, we obtain light neutrino masses and mixings consistent with the experiments in the democratic texture approach. The essential ansatz is that the νRi are assumed to transform as "right-handed fields" 2R + 1R under the S3L × S3R symmetry. The symmetry-breaking terms are assumed to be diagonal and hierarchical. This setup only allows the normal hierarchy of the neutrino masses, and excludes both inverted hierarchical and degenerate neutrinos. Although the neutrino sector has nine free parameters, several predictions are obtained at the leading order. When we neglect the smallest parameters ζν and ζR, all components of the mixing matrix UPMNS are expressed by the masses of the light neutrinos and charged leptons. From the consistency between the predicted and observed UPMNS, we obtain the lightest neutrino mass m1 = (1.1 → 1.4) meV, and the effective mass for double beta decay 〈mee〉 ≃ 4.5 meV.
Standardization of a Hierarchical Transactive Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammerstrom, Donald J.; Oliver, Terry V.; Melton, Ronald B.
2010-12-03
The authors describe work they have conducted toward the generalization and standardization of the transactive control approach that was first demonstrated in the Olympic Peninsula Project for the management of a transmission constraint. The newly generalized approach addresses several potential shortfalls of the prior approach: First, the authors have formalized a hierarchical node structure which defines the nodes and the functional signal pathways between these nodes. Second, by fully generalizing the inputs, outputs, and functional responsibilities of each node, the authors make the approach available to a much wider set of responsive assets and operational objectives. Third, the new, generalized approach defines transactive signals that include the predicted day-ahead future. This predictive feature allows the market-like bids and offers to become resolved iteratively over time, thus allowing the behaviors of responsive assets to be called upon both for the present and as future dispatch decisions are being made. The hierarchical transactive control approach is a key feature of a proposed Pacific Northwest smart grid demonstration.
Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell
2008-07-01
Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.
Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.
Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh
2017-07-03
Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.
A method of transition conflict resolving in hierarchical control
NASA Astrophysics Data System (ADS)
Łabiak, Grzegorz
2016-09-01
The paper concerns the problem of automatically solving transition conflicts in hierarchical concurrent state machines (also known as UML state machines). Preparing a formal specification of behaviour free from conflicts can be very complex for the designer. In this paper, a method is proposed for solving conflicts through the modification of transition predicates. Partially specified predicates in the nondeterministic diagram are transformed into a symbolic Boolean space, whose points code all possible valuations of the transition predicates. Next, all valuations satisfying the partial specification are logically multiplied by a function which represents all possible orthogonal predicate valuations. The result of this operation contains all collections of predicates which, under the given partial specification, make the original diagram conflict-free and deterministic.
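The underlying determinism check, deciding whether two outgoing transition predicates can fire under the same valuation, can be sketched by brute-force enumeration of the Boolean space (the paper does this symbolically; the example predicates below are hypothetical).

```python
from itertools import product

def conflicts(transitions, variables):
    """Return the valuations under which more than one transition fires.

    `transitions` is a list of predicates, each a function taking a dict
    from variable name to bool. An empty result means the transitions are
    orthogonal, i.e. the diagram is deterministic for this state."""
    bad = []
    for bits in product([False, True], repeat=len(variables)):
        valuation = dict(zip(variables, bits))
        if sum(1 for p in transitions if p(valuation)) > 1:
            bad.append(valuation)
    return bad

# Two transitions guarded by overlapping predicates (hypothetical example):
t1 = lambda v: v["a"]
t2 = lambda v: v["a"] and not v["b"]
overlap = conflicts([t1, t2], ["a", "b"])   # both fire when a=True, b=False
```

Resolving the conflict amounts to strengthening one predicate (e.g. replacing t1's guard with `a and b`) so that the enumeration returns no valuations; the paper searches such modifications symbolically rather than by enumeration.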
Assessment of Current Jet Noise Prediction Capabilities
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas
2008-01-01
An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented and represents the state of the art in semi-empirical acoustic prediction codes, in which virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined with the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources: typically a Reynolds-averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of the experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower-speed conditions. Jet3D did not predict changes in directivity in the downstream angles.
The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the various transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stage. For rate and distortion estimation at the CU level, two orthogonal matrices of orders 4×4 and 8×8, newly designed in a butterfly structure with only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed by using a pseudo-entropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
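The two core ideas, a multiplication-free butterfly transform and coefficient-domain rate/distortion proxies, can be sketched as follows. This is a minimal illustration under assumed simplifications (an order-4 WHT only, uniform quantization, unnormalized Parseval scaling); the paper's transforms and rate model are more elaborate:

```python
def wht4(x):
    """1-D order-4 Walsh-Hadamard transform in a butterfly structure,
    using only additions and subtractions (no multiplications)."""
    a0, a1 = x[0] + x[1], x[0] - x[1]
    a2, a3 = x[2] + x[3], x[2] - x[3]
    return [a0 + a2, a1 + a3, a0 - a2, a1 - a3]

def wht4x4(block):
    """Separable 2-D WHT: transform rows, then columns."""
    rows = [wht4(r) for r in block]
    cols = [wht4([rows[i][j] for i in range(4)]) for j in range(4)]
    return [[cols[j][i] for j in range(4)] for i in range(4)]

def estimate_rd(residual, qstep):
    """Toy CU-level estimate: distortion from quantization error of the
    transform coefficients (no inverse transform needed), and a
    texture-rate proxy = number of non-zero quantized coefficients."""
    coeffs = wht4x4(residual)
    nonzero, dist = 0, 0.0
    for row in coeffs:
        for c in row:
            q = int(round(c / qstep))
            nonzero += 1 if q != 0 else 0
            err = c - q * qstep
            dist += err * err
    # The unnormalized 4x4 WHT scales signal energy by 16 (4 per dimension).
    return dist / 16.0, nonzero
```

For an all-ones residual block, only the DC coefficient (value 16) survives, so the rate proxy is one non-zero coefficient and the distortion is the squared DC quantization error divided by 16.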
Using an Ecological Land Hierarchy to Predict Seasonal-Wetland Abundance in Upland Forests
Brian J. Palik; Richard Buech; Leanne Egeland
2003-01-01
Hierarchy theory, when applied to landscapes, predicts that broader-scale ecosystems constrain the development of finer-scale, nested ecosystems. This prediction finds application in hierarchical land classifications. Such classifications typically apply to physiognomically similar ecosystems, or ecological land units, e.g., a set of multi-scale forest ecosystems. We...
Point-Mass Aircraft Trajectory Prediction Using a Hierarchical, Highly-Adaptable Software Design
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Woods, Sharon E.; Wing, David J.
2017-01-01
A highly adaptable and extensible method for predicting four-dimensional trajectories of civil aircraft has been developed. This method, Behavior-Based Trajectory Prediction, is based on taxonomic concepts developed for the description and comparison of trajectory prediction software. A hierarchical approach to the "behavioral" layer of a point-mass model of aircraft flight, a clear separation between the "behavioral" and "mathematical" layers of the model, and an abstraction of the methods of integrating differential equations in the "mathematical" layer have been demonstrated to support aircraft models of different types (in particular, turbojet vs. turboprop aircraft) using performance models at different levels of detail and in different formats, and promise to be easily extensible to other aircraft types and sources of data. The resulting trajectories predict location, altitude, lateral and vertical speeds, and fuel consumption along the flight path of the subject aircraft accurately and quickly, accounting for local conditions of wind and outside air temperature. The Behavior-Based Trajectory Prediction concept was implemented in NASA's Traffic Aware Planner (TAP) flight-optimizing cockpit software application.
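The key design idea, keeping the "behavioral" layer (what the aircraft is commanded to do) separate from the "mathematical" layer (how the differential equations are integrated), can be sketched with a generic integrator and a pluggable behavior. This is a hypothetical simplification, not TAP's implementation; `constant_climb_behavior` and the two-component state are illustrative assumptions:

```python
def rk4_step(f, t, y, h):
    """'Mathematical' layer: one generic 4th-order Runge-Kutta step,
    independent of any particular aircraft model."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def constant_climb_behavior(ground_speed, vertical_speed):
    """'Behavioral' layer: a flight segment commanding constant ground
    speed and vertical speed. State y = [along-track distance, altitude]."""
    def f(t, y):
        return [ground_speed, vertical_speed]
    return f

def predict(behavior, y0, t_end, h):
    """Integrate the behavior forward from t = 0 to t_end."""
    t, y = 0.0, list(y0)
    while t < t_end - 1e-12:
        step = min(h, t_end - t)
        y = rk4_step(behavior, t, y, step)
        t += step
    return y
```

Swapping in a different behavior (e.g. a deceleration segment, or a turboprop performance model feeding fuel-flow equations) requires no change to the integrator, which is the extensibility the abstract describes.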
Predicting Resilience in Sexually Abused Adolescents
ERIC Educational Resources Information Center
Williams, Javonda; Nelson-Gardell, Debra
2012-01-01
This research examined factors that predicted resilience in sexually abused adolescents. Using Bronfenbrenner's Process-Person-Context-Time (PPCT) ecological model, this study considered the proximal and distal factors that would contribute to adolescents' reactions to sexual victimization. This correlational study used hierarchical regression…
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are indicated by comparison with the single equivalent strategy.
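Fixed Point Iteration itself is the simplest of the ingredients above: repeatedly apply an update map until the iterate stops changing. A minimal generic sketch (the scalar map `math.cos` below is only a stand-in for the coupled sub-optimization update):

```python
import math

def fixed_point_iteration(f, x0, tol=1e-10, max_iter=200):
    """Apply x <- f(x) until successive iterates differ by less than tol.
    Converges when f is a contraction near the fixed point.
    Returns the final iterate and the number of iterations used."""
    x = x0
    for k in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example: x = cos(x) has a unique fixed point near 0.739.
root, iters = fixed_point_iteration(math.cos, 1.0)
```

In the multilevel framework, the analogous map takes the current major-stiffener design, runs the minor-stiffener sub-optimization, and feeds the result back, iterating until the two levels are mutually consistent.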
Statistical shear lag model - unraveling the size effect in hierarchical composites.
Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D
2015-05-01
Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
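The baseline the paper contrasts against is classical weakest-link (Weibull) scaling, in which the strength of a homogeneous continuous member decreases with gauge length. A short sketch of that classical statistic (symbols `l0`, `sigma0`, `m` are the usual Weibull reference length, scale, and modulus; this is the conventional formula, not the paper's staggered-composite model):

```python
import math

def median_strength(length, l0, sigma0, m):
    """Median strength under weakest-link Weibull statistics: the
    survival probability over gauge length L is
        P_s(sigma) = exp(-(L / l0) * (sigma / sigma0) ** m),
    so setting P_s = 0.5 and solving for sigma gives the median.
    Strength scales as L ** (-1/m): longer samples are weaker."""
    return sigma0 * (math.log(2.0) * l0 / length) ** (1.0 / m)
```

For Weibull modulus m = 5, a tenfold increase in gauge length reduces median strength by the factor 10^(-1/5) ≈ 0.63. The paper's key finding is that staggered hierarchical yarns deviate from this law, with strength nearly insensitive to gauge length.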
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur
2018-03-01
Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach will allow glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it will also allow for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model constructed using exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors induced through a numerical solution using a statistical model. This error correcting process models numerical errors that accumulate forward in time and spatial variation of numerical errors between the dome, interior, and margin of a glacier.
Hierarchical prediction errors in midbrain and septum during social learning.
Diaconescu, Andreea O; Mathys, Christoph; Weber, Lilian A E; Kasper, Lars; Mauer, Jan; Stephan, Klaas E
2017-04-01
Social learning is fundamental to human interactions, yet its computational and physiological mechanisms are not well understood. One prominent open question concerns the role of neuromodulatory transmitters. We combined fMRI, computational modelling and genetics to address this question in two separate samples (N = 35, N = 47). Participants played a game requiring inference on an adviser's intentions whose motivation to help or mislead changed over time. Our analyses suggest that hierarchically structured belief updates about current advice validity and the adviser's trustworthiness, respectively, depend on different neuromodulatory systems. Low-level prediction errors (PEs) about advice accuracy not only activated regions known to support 'theory of mind', but also the dopaminergic midbrain. Furthermore, PE responses in ventral striatum were influenced by the Met/Val polymorphism of the Catechol-O-Methyltransferase (COMT) gene. By contrast, high-level PEs ('expected uncertainty') about the adviser's fidelity activated the cholinergic septum. These findings, replicated in both samples, have important implications: They suggest that social learning rests on hierarchically related PEs encoded by midbrain and septum activity, respectively, in the same manner as other forms of learning under volatility. Furthermore, these hierarchical PEs may be broadcast by dopaminergic and cholinergic projections to induce plasticity specifically in cortical areas known to represent beliefs about others. © The Author (2017). Published by Oxford University Press.
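The hierarchical coupling the study describes, a low-level prediction error about advice accuracy whose learning rate is modulated by a higher-level estimate of the adviser's trustworthiness, can be caricatured in a few lines. This is a deliberately simplified toy update, not the authors' computational model (they use a hierarchical Bayesian filter); all parameter names and values below are illustrative assumptions:

```python
def hierarchical_update(v, w, outcome, kappa=0.5, base_lr=0.2):
    """Toy two-level belief update.
    Level 1: v in [0, 1], estimated validity of the current advice.
    Level 2: w in [0, 1], estimated adviser trustworthiness, which
    scales the level-1 learning rate (trust -> faster updating).
    outcome is 1.0 if the advice proved accurate, 0.0 otherwise.
    Returns (new v, new w, low-level PE, high-level PE)."""
    pe_low = outcome - v                  # low-level PE: advice accuracy
    lr = base_lr * (0.5 + w)              # trust modulates learning rate
    v_new = min(1.0, max(0.0, v + lr * pe_low))
    # High-level PE ("expected uncertainty"): surprise relative to how
    # much error a trusted adviser should produce.
    pe_high = abs(pe_low) - (1.0 - w)
    w_new = min(1.0, max(0.0, w - kappa * base_lr * pe_high))
    return v_new, w_new, pe_low, pe_high
```

Feeding in a run of accurate advice drives the validity estimate toward 1 while trust drifts upward; a block of misleading advice would do the reverse, with the speed of adaptation itself depending on the higher level.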
Phillips, Holly N; Blenkmann, Alejandro; Hughes, Laura E; Kochen, Silvia; Bekinschtein, Tristan A; Cam-Can; Rowe, James B
2016-09-01
We propose that sensory inputs are processed in terms of optimised predictions and prediction error signals within hierarchical neurocognitive models. The combination of non-invasive brain imaging and generative network models has provided support for hierarchical frontotemporal interactions in oddball tasks, including recent identification of a temporal expectancy signal acting on prefrontal cortex. However, these studies are limited by the need to invert magnetoencephalographic or electroencephalographic sensor signals to localise activity from cortical 'nodes' in the network, or to infer neural responses from indirect measures such as the fMRI BOLD signal. To overcome this limitation, we examined frontotemporal interactions estimated from direct cortical recordings from two human participants with cortical electrode grids (electrocorticography - ECoG). Their frontotemporal network dynamics were compared to those identified by magnetoencephalography (MEG) in forty healthy adults. All participants performed the same auditory oddball task with standard tones interspersed with five deviant tone types. We normalised post-operative electrode locations to standardised anatomic space, to compare across modalities, and inverted the MEG to cortical sources using the estimated lead field from subject-specific head models. A mismatch negativity signal in frontal and temporal cortex was identified in all subjects. Generative models of the electrocorticographic and magnetoencephalographic data were separately compared using the free-energy estimate of the model evidence. Model comparison confirmed the same critical features of hierarchical frontotemporal networks in each patient as in the group-wise MEG analysis. These features included bilateral, feedforward and feedback frontotemporal modulated connectivity, in addition to an asymmetric expectancy driving input on left frontal cortex. 
The invasive ECoG provides an important step in construct validation of the use of neural generative models of MEG, which in turn enables generalisation to larger populations. Together, they give convergent evidence for the hierarchical interactions in frontotemporal networks for expectation and processing of sensory inputs. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
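The second ingredient, assigning different quantization parameters to ROI and non-ROI blocks, can be sketched with a simple texture statistic. This is an illustrative stand-in, not the paper's algorithm (which uses richer textural features and transform-coefficient adjustment); the threshold and QP offset are hypothetical:

```python
def block_variance(block):
    """Pixel variance of a rectangular block, a crude texture measure."""
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    return sum((p - mean) ** 2 for row in block for p in row) / n

def qp_map(frame_blocks, base_qp=32, roi_delta=-4, threshold=50.0):
    """Toy ROI-aware QP selection: blocks whose texture variance exceeds
    the threshold are treated as diagnostic ROI and get a lower QP
    (finer quantization, more bits); background keeps the base QP."""
    qps = []
    for block in frame_blocks:
        is_roi = block_variance(block) > threshold
        qps.append(base_qp + (roi_delta if is_roi else 0))
    return qps
```

A flat background block keeps QP 32 while a high-variance block drops to QP 28, concentrating bitrate on the diagnostically relevant region, which is the mechanism behind the reported BD-BR/BD-PSNR gains.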
Genomics dataset of unidentified disclosed isolates.
Rekadwad, Bhagwan N
2016-09-01
Analysis of DNA sequences is necessary for higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed, and quick response (QR) codes were generated for them. AT/GC content analysis of the DNA sequences was carried out. The QR codes are helpful for quick identification of isolates, and AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes from a restriction digestion study is reported, which is helpful for performing studies using short DNA sequences. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis.
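The two simple sequence statistics the dataset reports, AT/GC content and restriction-site positions, are straightforward to compute. A minimal sketch (EcoRI's recognition site GAATTC is shown as an example; the dataset's actual enzyme panel is not specified here):

```python
def gc_content(seq):
    """Fraction of G and C bases; higher GC content generally implies
    greater duplex stability at elevated temperatures."""
    seq = seq.upper()
    return sum(1 for b in seq if b in "GC") / len(seq)

def at_content(seq):
    """Fraction of A and T bases (complement of GC for pure ACGT)."""
    seq = seq.upper()
    return sum(1 for b in seq if b in "AT") / len(seq)

def find_sites(seq, site="GAATTC"):
    """Start positions of a restriction recognition site in the sequence
    (EcoRI's GAATTC shown as the default), as in an in-silico digest."""
    seq = seq.upper()
    return [i for i in range(len(seq) - len(site) + 1)
            if seq[i:i + len(site)] == site]
```

For QR-code generation itself, any standard barcode library can encode the sequence identifier; the scientific content is in the statistics above.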
Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Tew, Roy C.
1992-01-01
Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects between HSWO region growing iterations merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single processor and for multiple processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
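The HSWO idea, at each iteration merge the pair of adjacent regions whose merge least increases a global criterion, recording segmentations along the way, can be sketched on a 1-D signal. This toy version uses squared-error increase as the merge cost and omits HSEG's spectral-clustering step entirely:

```python
def hswo_1d(pixels, n_levels):
    """Toy 1-D hierarchical stepwise optimization region growing.
    Starts with one region per pixel; repeatedly merges the adjacent
    pair whose merge least increases total squared error; records every
    segmentation once the region count drops to n_levels or fewer."""
    regions = [[p] for p in pixels]

    def sse(region):
        mean = sum(region) / len(region)
        return sum((x - mean) ** 2 for x in region)

    hierarchy = []
    while len(regions) > 1:
        best, best_cost = None, None
        for i in range(len(regions) - 1):
            merged = regions[i] + regions[i + 1]
            cost = sse(merged) - sse(regions[i]) - sse(regions[i + 1])
            if best_cost is None or cost < best_cost:
                best, best_cost = i, cost
        regions = (regions[:best]
                   + [regions[best] + regions[best + 1]]
                   + regions[best + 2:])
        if len(regions) <= n_levels:
            hierarchy.append([list(r) for r in regions])
    return hierarchy
```

On a signal with two flat segments, all zero-cost within-segment merges happen first, so the two-region level of the hierarchy recovers the true segment boundary before the final merge collapses everything into one region.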
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.
2014-01-01
Study Design Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to group patients based on surgical indications were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for the disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion Standardized approaches for using claims data to accurately group patients by surgical indication have widespread interest.
We found that a hierarchical coding approach correctly classified over 90% of spine patients into their respective SPORT cohorts. Therefore, claims data appears to be a reasonably valid approach to classifying patients by surgical indication. PMID:24525995
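The hierarchical coding approach amounts to checking diagnosis groups in a fixed precedence order and assigning the first cohort that matches. A minimal sketch, with entirely hypothetical placeholder code sets (the study used actual diagnosis codes, not shown here) and an assumed precedence of spondylolisthesis over stenosis over herniation, reflecting increasing support for fusion surgery:

```python
# Hypothetical placeholder code sets for illustration only.
SPONDYLOLISTHESIS_CODES = {"S1", "S2"}
STENOSIS_CODES = {"N1", "N2"}
HERNIATION_CODES = {"H1"}

def classify_indication(diagnosis_codes):
    """Hierarchical coding: scan the diagnosis groups in precedence
    order and return the first cohort whose codes appear anywhere on
    the claim, so more complex pathology dominates the assignment."""
    codes = set(diagnosis_codes)
    if codes & SPONDYLOLISTHESIS_CODES:
        return "degenerative spondylolisthesis"
    if codes & STENOSIS_CODES:
        return "spinal stenosis"
    if codes & HERNIATION_CODES:
        return "disc herniation"
    return "unclassified"
```

This precedence rule is also the source of the misclassifications the authors note: a claim carrying an incidental higher-precedence code is pulled into the more complex cohort.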
Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software
NASA Technical Reports Server (NTRS)
Tilton, James C.
2003-01-01
A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g. Horowitz and T. Pavlidis, [3]). In addition, HSEG optionally interjects between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software.
Results with Landsat TM data are included comparing RHSEG with classic region growing.
ERIC Educational Resources Information Center
Frey, Karin S.; Hirschstein, Miriam K.; Snell, Jennie L.; Edstrom, Leihua Van Schoiack; MacKenzie, Elizabeth P.; Broderick, Carole J.
2005-01-01
Six schools were randomly assigned to a multilevel bullying intervention or a control condition. Children in Grades 3-6 (N=1,023) completed pre- and posttest surveys of behaviors and beliefs and were rated by teachers. Observers coded playground behavior of a random subsample (n=544). Hierarchical analyses of changes in playground behavior…
ERIC Educational Resources Information Center
Gilstrap, Livia L.
2004-01-01
Despite suggestibility researchers' focus on adult behaviors that distort children's reports, whether behaviors examined in experimental work are used in the field is unknown. The current study presents a mutually exclusive and exhaustive hierarchical coding system that reflects interview questioning behaviors of concern in experimental work. The…
ERIC Educational Resources Information Center
Han, Jisu; Schlieber, Marisa; Gregory, Bradley
2017-01-01
This study used data from the Head Start Family and Child Experiences Survey (FACES) 2009 4-year-old cohort to examine associations among family characteristics, home and classroom environments, and the emergent literacy skills of Head Start children. Results from hierarchical linear models suggest that both family and classroom contexts play a…
Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.
Bidelman, Gavin M; Grall, Jeremy
2014-11-01
Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.
Efficient Parallel Formulations of Hierarchical Methods and Their Applications
NASA Astrophysics Data System (ADS)
Grama, Ananth Y.
1996-01-01
Hierarchical methods such as the Fast Multipole Method (FMM) and Barnes-Hut (BH) are used for rapid evaluation of potential (gravitational, electrostatic) fields in particle systems. They are also used for solving integral equations using boundary element methods. The linear systems arising from these methods are dense and are solved iteratively. Hierarchical methods reduce the complexity of the core matrix-vector product from O(n^2) to O(n log n) and the memory requirement from O(n^2) to O(n). We have developed highly scalable parallel formulations of a hybrid FMM/BH method that are capable of handling arbitrarily irregular distributions. We apply these formulations to astrophysical simulations of Plummer and Gaussian galaxies. We have used our parallel formulations to solve the integral form of the Laplace equation. We show that our parallel hierarchical mat-vecs yield high efficiency and overall performance even on relatively small problems. A problem containing approximately 200K nodes takes under a second to compute on 256 processors and yet yields over 85% efficiency. The efficiency and raw performance are expected to increase for bigger problems. For the 200K node problem, our code delivers about 5 GFLOPS of performance on a 256 processor T3D. This is impressive considering the fact that the problem has floating point divides and roots, and very little locality resulting in poor cache performance. A dense matrix-vector product of the same dimensions would require about 0.5 TeraBytes of memory and about 770 TeraFLOPS of computing speed. Clearly, if the loss in accuracy resulting from the use of hierarchical methods is acceptable, our code yields significant savings in time and memory. We also study the convergence of a GMRES solver built around this mat-vec. We accelerate the convergence of the solver using three preconditioning techniques: diagonal scaling, block-diagonal preconditioning, and inner-outer preconditioning.
We study the performance and parallel efficiency of these preconditioned solvers. Using this solver, we solve dense linear systems with hundreds of thousands of unknowns. Solving a 105K unknown problem takes about 10 minutes on a 64 processor T3D. Until very recently, boundary element problems of this magnitude could not even be generated, let alone solved.
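The core idea behind both FMM and Barnes-Hut, replacing a distant cluster of particles by a single pseudo-particle at its centre of mass, can be sketched in one dimension. This is a minimal illustrative treecode, not the authors' parallel formulation; the 1/r potential, the point count, and the opening angle are all chosen only for the example.

```python
import random

def build(pts):
    """Recursively build a binary tree over sorted (position, mass) pairs;
    each internal node stores its cell size, total mass, and centre of mass."""
    if len(pts) == 1:
        x, m = pts[0]
        return {"x": x, "m": m, "size": 0.0, "children": None}
    mid = len(pts) // 2
    left, right = build(pts[:mid]), build(pts[mid:])
    m = left["m"] + right["m"]
    com = (left["x"] * left["m"] + right["x"] * right["m"]) / m
    return {"x": com, "m": m, "size": pts[-1][0] - pts[0][0], "children": (left, right)}

def potential(node, x, theta=0.2):
    """Evaluate the 1/r potential at x, opening a cell only when size/distance > theta."""
    d = abs(x - node["x"])
    if node["children"] is None:
        return 0.0 if d == 0 else node["m"] / d
    if d > 0 and node["size"] / d < theta:
        return node["m"] / d  # far enough away: treat the whole cell as one pseudo-particle
    left, right = node["children"]
    return potential(left, x, theta) + potential(right, x, theta)

random.seed(0)
pts = sorted((random.uniform(0.0, 1.0), 1.0) for _ in range(400))
tree = build(pts)

target = 0.5
direct = sum(m / abs(target - x) for x, m in pts if x != target)  # O(n) per target, O(n^2) overall
approx = potential(tree, target)                                  # O(log n) per target
rel_err = abs(direct - approx) / direct
print(rel_err)
```

With the opening criterion at theta = 0.2 the monopole approximation stays within a few percent of the direct sum while visiting only O(log n) nodes per evaluation point.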
On the Efficacy of Source Code Optimizations for Cache-Based Systems
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.
1998-01-01
Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
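The stride rule of thumb discussed above can be made concrete with a toy direct-mapped cache model; the line size, cache size, and access pattern below are illustrative only and do not correspond to any processor in the study.

```python
def miss_rate(n_words, stride, line_words=8, n_lines=64):
    """Fraction of accesses that miss in a toy direct-mapped cache while the
    words 0, stride, 2*stride, ... of a flat array are each touched once."""
    tags = [None] * n_lines           # one stored tag per cache line
    misses = accesses = 0
    for addr in range(0, n_words, stride):
        accesses += 1
        line = addr // line_words     # which memory block this word belongs to
        slot = line % n_lines         # direct-mapped placement
        if tags[slot] != line:        # tag mismatch: the block must be fetched
            tags[slot] = line
            misses += 1
    return misses / accesses

n = 1 << 14
print(miss_rate(n, 1))   # unit stride: one miss per 8-word line -> 0.125
print(miss_rate(n, 8))   # stride of a full line: every access misses -> 1.0
```

Even this crude model reproduces the basic effect the abstract refers to: increasing the access stride from 1 to the cache-line length raises the miss rate from 12.5% to 100%, although, as the authors observe, real miss rates only partially explain measured performance.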
A component-centered meta-analysis of family-based prevention programs for adolescent substance use.
Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien
2016-04-01
Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Attachment and Family Processes in Children's Psychological Adjustment in Middle Childhood.
Demby, Kimberly P; Riggs, Shelley A; Kaminski, Patricia L
2017-03-01
This study examined the links between parent-child attachment, whole family interaction patterns, and child emotional adjustment and adaptability in a sample of 86 community families with children between the ages of 8 and 11 years. Family interactions were observed and coded with the System for Coding Interactions and Family Functioning (SCIFF; Lindahl, 2001). Both parents and each target child completed the appropriate form of the Behavior Assessment System for Children-2nd Edition (BASC-2; Reynolds & Kamphaus, 2004). Target children also completed the Children's Coping Strategies Questionnaire (CCSQ; Yunger, Corby, & Perry, 2005). Hierarchical multiple regressions indicated that Secure mother-child attachment was a robust predictor of children's emotional symptoms, but father-child attachment strategies were not significant independent predictors. Positive Affect in family interactions significantly increased the amount of variance accounted for in children's emotional symptoms. In addition, Family Cohesion and Positive Affect moderated the relationship between father-child attachment and children's emotional symptoms. When data from all BASC-2 informants (mother, father, child) were considered simultaneously and multidimensional constructs were modeled, mother-child security directly predicted children's adjustment and adaptive skills, but the influence of father-child security was fully mediated through positive family functioning. Results of the current study support the utility of considering dyadic attachment and family interaction patterns conjointly when conceptualizing and fostering positive emotional and behavioral outcomes in children. © 2015 Family Process Institute.
Low-delay predictive audio coding for the HIVITS HDTV codec
NASA Astrophysics Data System (ADS)
McParland, A. K.; Gilchrist, N. H. C.
1995-01-01
The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec are given.
Overview of Recent Radiation Transport Code Comparisons for Space Applications
NASA Astrophysics Data System (ADS)
Townsend, Lawrence
Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphases on those areas of agreement and disagreement among the various code predictions and published data.
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
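The three top-layer constraints can be written down directly as penalty terms. Below is a minimal sketch, assuming codes are real-valued vectors in roughly [-1, 1] before binarization; the paper's actual losses may be weighted and formulated differently.

```python
def hashing_penalties(codes):
    """Evaluate three DH-style constraints on a batch of real-valued codes.

    codes: list of samples, each a list of K real values in roughly [-1, 1].
    Returns (quantization_loss, balance_penalty, independence_penalty).
    """
    n, k = len(codes), len(codes[0])
    # 1) quantization loss: distance between each code and its binarized (+/-1) version
    quant = sum((c - (1.0 if c >= 0 else -1.0)) ** 2 for row in codes for c in row) / n
    # 2) balance: each bit should fire +1 for about half the samples (per-bit mean near 0)
    means = [sum(row[j] for row in codes) / n for j in range(k)]
    balance = sum(m * m for m in means)
    # 3) independence: off-diagonal entries of the (1/n) C^T C matrix should be near 0
    indep = 0.0
    for a in range(k):
        for b in range(k):
            if a != b:
                dot = sum(row[a] * row[b] for row in codes) / n
                indep += dot * dot
    return quant, balance, indep

# a tiny batch of perfectly binary, balanced, independent 2-bit codes
ideal = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
print(hashing_penalties(ideal))  # -> (0.0, 0.0, 0.0)
```

All three penalties vanish exactly for the ideal batch above and grow for codes that are non-binary, skewed toward one sign, or correlated across bits, which is the behaviour the constraints are meant to enforce.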
A parallel and modular deformable cell Car-Parrinello code
NASA Astrophysics Data System (ADS)
Cavazzoni, Carlo; Chiarotti, Guido L.
1999-12-01
We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90, and makes use of some new programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependencies among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up in a wide range of number of processors independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing for a fixed problem with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ~50 ps.
Li, Han; Liu, Yashu; Gong, Pinghua; Zhang, Changshui; Ye, Jieping
2014-01-01
Identifying patients with Mild Cognitive Impairment (MCI) who are likely to convert to dementia has recently attracted increasing attention in Alzheimer's disease (AD) research. An accurate prediction of conversion from MCI to AD can aid clinicians to initiate treatments at early stage and monitor their effectiveness. However, existing prediction systems based on the original biosignatures are not satisfactory. In this paper, we propose to fit the prediction models using pairwise biosignature interactions, thus capturing higher-order relationship among biosignatures. Specifically, we employ hierarchical constraints and sparsity regularization to prune the high-dimensional input features. Based on the significant biosignatures and underlying interactions identified, we build classifiers to predict the conversion probability based on the selected features. We further analyze the underlying interaction effects of different biosignatures based on the so-called stable expectation scores. We have used 293 MCI subjects from Alzheimer's Disease Neuroimaging Initiative (ADNI) database that have MRI measurements at the baseline to evaluate the effectiveness of the proposed method. Our proposed method achieves better classification performance than state-of-the-art methods. Moreover, we discover several significant interactions predictive of MCI-to-AD conversion. These results shed light on improving the prediction performance using interaction features. PMID:24416143
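The feature construction described above, pairwise biosignature interactions pruned under a hierarchical constraint, can be sketched as follows. The strong-hierarchy rule used here (keep an interaction only if both of its main effects survive) is one common reading of "hierarchical constraints", not necessarily the paper's exact formulation.

```python
from itertools import combinations

def expand_pairwise(x):
    """Augment a feature vector with all pairwise interaction (product) terms."""
    return list(x) + [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]

def hierarchical_prune(main_kept, n):
    """Strong-hierarchy rule: keep interaction (i, j) only if both main effects i and j
    were themselves selected (e.g. by a sparsity-inducing regularizer)."""
    return [(i, j) for i, j in combinations(range(n), 2)
            if i in main_kept and j in main_kept]

x = [2.0, 3.0, 5.0]
print(expand_pairwise(x))            # 3 main effects followed by 3 pairwise products
print(hierarchical_prune({0, 2}, 3)) # only the (0, 2) interaction survives
```

In a full pipeline the expanded vectors would feed a sparse model (e.g. an L1-regularized classifier) whose selected main effects determine `main_kept`, so that the surviving interactions remain interpretable.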
Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L
2004-04-01
The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
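The unweighted pair-group average (UPGMA) linkage that performed best can be sketched as a naive agglomeration over a distance matrix; the toy matrix below stands in for the Mahalanobis distances computed from predicted genotypic values.

```python
def upgma(dist, k):
    """Average-linkage (UPGMA) agglomeration down to k clusters.

    dist: symmetric matrix of pairwise distances between items.
    Returns a list of clusters, each a list of original item indices.
    """
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > k:
        # find the pair of clusters with the smallest average inter-item distance
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = sum(dist[i][j] for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]   # merge the closest pair
        del clusters[b]
    return clusters

# four varieties: two tight pairs that are far from each other
dist = [
    [0, 1, 9, 9],
    [1, 0, 9, 9],
    [9, 9, 0, 1],
    [9, 9, 1, 0],
]
print(sorted(sorted(c) for c in upgma(dist, 2)))  # -> [[0, 1], [2, 3]]
```

A sampling step (random, preferred, or deviation) would then draw representatives from each resulting cluster; this naive O(n^3) loop is for exposition only and would not scale to 992 varieties without the usual linkage-update optimizations.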
Predicting Chemical Toxicity from Proteomics and Computational Chemistry
2008-07-30
similarity spaces, BD Gute and SC Basak, SAR QSAR Environ. Res., 17, 37-51 (2006). Predicting pharmacological and toxicological activity of heterocyclic...affinity of dibenzofurans: a hierarchical QSAR approach, authored jointly by Basak and Mills; Division of Chemical Toxicology iii. Prediction of blood...biodescriptors vis-à-vis chemodescriptors in predictive toxicology e) Development of integrated QSTR models using the combined set of chemodescriptors and
Fatakia, Sarosh N.; Mehta, Ishita S.; Rao, Basuthkar J.
2016-01-01
Forty-six chromosome territories (CTs) are positioned uniquely in human interphase nuclei, wherein each of their positions can range from the centre of the nucleus to its periphery. A non-empirical basis for their non-random arrangement remains unreported. Here, we derive a suprachromosomal basis of that overall arrangement (which we refer to as a CT constellation), and report a hierarchical nature of the same. Using matrix algebra, we unify intrinsic chromosomal parameters (e.g., chromosomal length, gene density, the number of genes per chromosome), to derive an extrinsic effective gene density matrix, the hierarchy of which is dominated largely by extrinsic mathematical coupling of HSA19, followed by HSA17 (human chromosome 19 and 17, both preferentially interior CTs) with all CTs. We corroborate predicted constellations and effective gene density hierarchy with published reports from fluorescent in situ hybridization based microscopy and Hi-C techniques, and delineate analogous hierarchy in disparate vertebrates. Our theory accurately predicts CTs localised to the nuclear interior, which interestingly share conserved synteny with HSA19 and/or HSA17. Finally, the effective gene density hierarchy dictates how permutations among CT position represents the plasticity within its constellations, based on which we suggest that a differential mix of coding with noncoding genome modulates the same. PMID:27845379
Narratives in Two Languages: Storytelling of Bilingual Cantonese-English Preschoolers.
Rezzonico, Stefano; Goldberg, Ahuva; Mak, Katy Ka-Yan; Yap, Stephanie; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi
2016-06-01
The aim of this study was to compare narratives generated by 4-year-old and 5-year-old children who were bilingual in English and Cantonese. The sample included 47 children (23 who were 4 years old and 24 who were 5 years old) living in Toronto, Ontario, Canada, who spoke both Cantonese and English. The participants spoke and heard predominantly Cantonese in the home. Participants generated a story in English and Cantonese by using a wordless picture book; language order was counterbalanced. Data were transcribed and coded for story grammar, morphosyntactic quality, mean length of utterance in words, and the number of different words. Repeated measures analysis of variance revealed higher story grammar scores in English than in Cantonese, but no other significant main effects of language were observed. Analyses also revealed that older children had higher story grammar, mean length of utterance in words, and morphosyntactic quality scores than younger children in both languages. Hierarchical regressions indicated that Cantonese story grammar predicted English story grammar and Cantonese microstructure predicted English microstructure. However, no correlation was observed between Cantonese and English morphosyntactic quality. The results of this study have implications for speech-language pathologists who collect narratives in Cantonese and English from bilingual preschoolers. The results suggest that there is a possible transfer in narrative abilities between the two languages.
Jacobson, Peggy F; Walden, Patrick R
2013-08-01
This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Narrative samples were obtained from 48 bilingual children in both of their languages using the suggested narrative retell protocol and coding conventions as per Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2008) software. An additional lexical diversity measure, VocD, was also calculated. A series of logistic hierarchical regressions explored the utility of the number of different words, the VocD statistic, and word and morpheme omissions in each language for predicting language status. Omission errors turned out to be the best predictors of bilingual language impairment at all ages, and this held true across languages. Although lexical diversity measures did not predict typical or atypical language status, the measures were significantly related to oral language proficiency in English and Spanish. The results underscore the significance of omission errors in bilingual language impairment while simultaneously revealing the limitations of lexical diversity measures as indicators of impairment. The relationship between lexical diversity and oral language proficiency highlights the importance of considering relative language proficiency in bilingual assessment.
Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model
NASA Astrophysics Data System (ADS)
Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna
2017-06-01
Evaluation of software quality is an important aspect of controlling and managing software; such evaluation enables improvements in the software process. Software quality depends significantly on software usability. Many researchers have proposed usability models; each considers a set of usability factors, but none covers all aspects of usability. Practical implementation of these models is still missing, as a precise definition of usability is lacking, and it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, bringing together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system, named the fuzzy hierarchical usability model, can be easily integrated into current software engineering practices. To validate the work, a dataset of six software development life cycle models is created and employed, and these models are ranked according to their predicted usability values. This research also presents a detailed comparison of the proposed model with existing usability models.
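One way such a fuzzy usability score could be computed is as a weighted aggregation of membership degrees. This is only a sketch of the general idea: the factor names, weights, and triangular membership parameters below are hypothetical and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function that peaks at b and is 0 outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def usability_score(factors, weights):
    """Aggregate factor ratings (0-10 scale) into one score via membership in 'high'."""
    high = {f: tri(v, 4, 10, 16) for f, v in factors.items()}  # degree each factor is 'high'
    total = sum(weights.values())
    return sum(weights[f] * high[f] for f in factors) / total

# hypothetical ratings and weights for one SDLC model under evaluation
factors = {"learnability": 8, "efficiency": 6, "satisfaction": 9}
weights = {"learnability": 0.5, "efficiency": 0.3, "satisfaction": 0.2}
print(round(usability_score(factors, weights), 3))  # -> 0.6
```

Scoring each SDLC model this way and sorting by the result would yield the kind of usability ranking the paper reports.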
Ideal Teacher Behaviors: Student Motivation and Self-Efficacy Predict Preferences
ERIC Educational Resources Information Center
Komarraju, Meera
2013-01-01
Differences in students' academic self-efficacy and motivation were examined in predicting preferred teacher traits. Undergraduates (261) completed the Teaching Behavior Checklist, Academic Self-Concept scale, and Academic Motivation scale. Hierarchical regression analyses indicated that academic self-efficacy and extrinsic motivation explained…
Leading virtual teams: hierarchical leadership, structural supports, and shared team leadership.
Hoch, Julia E; Kozlowski, Steve W J
2014-05-01
Using a field sample of 101 virtual teams, this research empirically evaluates the impact of traditional hierarchical leadership, structural supports, and shared team leadership on team performance. Building on Bell and Kozlowski's (2002) work, we expected structural supports and shared team leadership to be more, and hierarchical leadership to be less, strongly related to team performance when teams were more virtual in nature. As predicted, results from moderation analyses indicated that the extent to which teams were more virtual attenuated relations between hierarchical leadership and team performance but strengthened relations for structural supports and team performance. However, shared team leadership was significantly related to team performance regardless of the degree of virtuality. Results are discussed in terms of needed research extensions for understanding leadership processes in virtual teams and practical implications for leading virtual teams. (c) 2014 APA, all rights reserved.
Interactive metagenomic visualization in a Web browser
2011-01-01
Background A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Results Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Conclusions Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license, and available from: http://krona.sourceforge.net. PMID:21961884
Bit selection using field drilling data and mathematical investigation
NASA Astrophysics Data System (ADS)
Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.
2018-03-01
No drilling process is complete without a drill bit, so bit selection is an important task in drilling optimization. Selecting a bit is a key issue in planning and designing a well, simply because the bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed, trained on drilling-bit records from several offset wells. In this project, two ANN models are developed: one to predict the IADC bit code and one to predict the ROP. In Stage 1, the IADC bit code is predicted from all the given field data, yielding the targeted IADC bit code. In Stage 2, the predicted ROP values are obtained using the IADC bit code from Stage 1. In Stage 3, the predicted ROP values are fed back into the data set to obtain the predicted IADC bit code. Thus, in the end, there are two models, one yielding predicted ROP values and the other predicted IADC bit codes.
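The staged prediction flow (field data to IADC code, then IADC code to ROP) can be sketched with simple least-squares lines standing in for the paper's back-propagation networks; all well records below are fabricated toy values for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; a stand-in for a back-propagation ANN."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# hypothetical offset-well records (all values fabricated for illustration)
depths    = [1000.0, 2000.0, 3000.0, 4000.0]   # field-data feature
bit_codes = [117.0, 217.0, 317.0, 417.0]       # stage-1 target: IADC-like bit code
rops      = [30.0, 25.0, 20.0, 15.0]           # stage-2 target: rate of penetration, m/h

a1, b1 = fit_line(depths, bit_codes)           # stage 1: field data -> predicted bit code
pred_codes = [a1 * d + b1 for d in depths]
a2, b2 = fit_line(pred_codes, rops)            # stage 2: predicted bit code -> predicted ROP

new_depth = 2500.0
pred_rop = a2 * (a1 * new_depth + b1) + b2     # chain the two stages for a new section
print(round(pred_rop, 1))  # -> 22.5
```

The point of the sketch is the pipeline shape, where the output of the first model becomes an input feature for the second, rather than the stand-in regressor itself.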
Braverman, Ami; Berger, Andrea; Meiran, Nachshon
2014-07-01
According to "hierarchical" multi-step theories, response selection is preceded by a decision regarding which task rule should be executed. Other theories assume a "flat" single-step architecture in which task information and stimulus information are simultaneously considered. Using task switching, the authors independently manipulated two kinds of conflict: task conflict (with information that potentially triggers the relevant or the competing task rule/identity) and response conflict (with information that potentially triggers the relevant or the competing response code/motor response). Event related potentials indicated that the task conflict effect began before the response conflict effect and carried on in parallel with it. These results are more in line with the hierarchical view. Copyright © 2014 Elsevier Inc. All rights reserved.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
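The identification step, matching the feature vector decoded from fMRI against computed feature vectors for candidate categories, can be sketched as a nearest-correlation search. The category names and feature values below are hypothetical stand-ins for averaged CNN-unit activations.

```python
def correlation(u, v):
    """Pearson correlation between two feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def identify(predicted, category_features):
    """Pick the category whose computed feature vector best matches the decoded one."""
    return max(category_features,
               key=lambda c: correlation(predicted, category_features[c]))

# hypothetical feature templates for three categories (e.g. averaged unit activations)
category_features = {
    "dog":   [0.9, 0.1, 0.4, 0.7],
    "chair": [0.2, 0.8, 0.1, 0.3],
    "car":   [0.5, 0.5, 0.9, 0.1],
}
decoded = [0.8, 0.2, 0.5, 0.6]   # feature vector predicted from an fMRI pattern
print(identify(decoded, category_features))  # -> dog
```

Because the candidate set is just a dictionary of computed features, this matching step extends to categories never seen during decoder training, which is what makes the approach "generic".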
Commitment Predictors: Long-Distance versus Geographically Close Relationships
ERIC Educational Resources Information Center
Pistole, M. Carole; Roberts, Amber; Mosko, Jonathan E.
2010-01-01
In this web-based study, the authors examined long-distance relationships (LDRs) and geographically close relationships (GCRs). Two hierarchical multiple regressions (N = 138) indicated that attachment predicted LDR and GCR commitment in Step 1. Final equations indicated that high satisfaction and investments predicted LDR commitment, whereas low…
Taamneh, Madhar; Taamneh, Salah; Alkheder, Sharaf
2017-09-01
Artificial neural networks (ANNs) have been widely used in predicting the severity of road traffic crashes. Typically, all available information about previously occurred accidents is used to build a single prediction model (i.e., classifier). Too little attention has been paid to the differences between these accidents, leading in most cases to less accurate predictors. Hierarchical clustering is a well-known clustering method that seeks to group data by creating a hierarchy of clusters. Using hierarchical clustering and ANNs, a clustering-based classification approach for predicting the injury severity of road traffic accidents was proposed. About 6000 road accidents that occurred over a six-year period from 2008 to 2013 in Abu Dhabi were used throughout this study. In order to reduce the amount of variation in the data, hierarchical clustering was applied to organize the data set into six different forms, each with a different number of clusters (i.e., from 1 to 6 clusters). Two ANN models were subsequently built for each cluster of accidents in each generated form. The first model was built and validated using all accidents (training set), whereas only 66% of the accidents were used to build the second model, and the remaining 34% were used to test it (percentage split). Finally, the weighted average accuracy was computed for each type of model in each form of data. The results show that when testing the models using the training set, clustering prior to classification achieves 11%-16% more accuracy than without clustering, while under the percentage split it achieves 2%-5% more accuracy. The results also suggest that partitioning the accidents into six clusters achieves the best accuracy if both types of models are taken into account.
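The cluster-then-classify pipeline can be sketched with a nearest-centroid assignment standing in for hierarchical clustering and a majority-vote rule standing in for the per-cluster ANN; the accident records below are fabricated toy values.

```python
def assign_cluster(x, centroids):
    """Nearest-centroid assignment; a stand-in for the paper's hierarchical clustering."""
    return min(range(len(centroids)), key=lambda k: abs(x - centroids[k]))

def majority(labels):
    """Majority-vote 'classifier'; a stand-in for the per-cluster ANN."""
    return max(set(labels), key=labels.count)

# toy accident records: (impact-speed-like feature, severity label), all fabricated
data = [(10, "minor"), (12, "minor"), (14, "minor"),
        (70, "severe"), (75, "severe"), (72, "minor")]
centroids = [12, 72]   # two clusters assumed found in a prior clustering step

correct = total = 0
for k in range(len(centroids)):
    members = [(x, y) for x, y in data if assign_cluster(x, centroids) == k]
    pred = majority([y for _, y in members])        # one classifier per cluster
    correct += sum(1 for _, y in members if y == pred)
    total += len(members)

print(correct / total)   # weighted average accuracy across clusters
```

Reducing within-cluster variation before fitting a model per cluster is the same intuition the study tests at scale; the weighted average at the end mirrors how the paper aggregates per-cluster accuracies.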
Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen
2018-09-01
We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
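The bucketing idea behind LSH can be sketched with random hyperplanes (an illustrative baseline only; the paper's learned siamese hash codes and load-balanced variant are not reproduced here):

```python
# Random-hyperplane LSH sketch: the sign pattern of projections onto a
# few random hyperplanes is the bucket key; near-duplicate vectors
# almost always share a bucket, so lookup only compares bucket members.
import numpy as np

rng = np.random.default_rng(0)
n_bits, dim = 8, 64
planes = rng.normal(size=(n_bits, dim))  # one random hyperplane per bit

def lsh_key(x):
    """Sign pattern of the projections, used as the bucket key."""
    return tuple((planes @ x > 0).astype(int))

index = {}

def insert(name, vec):
    index.setdefault(lsh_key(vec), []).append(name)

def candidates(vec):
    return index.get(lsh_key(vec), [])

x = rng.normal(size=dim)
insert("original", x)
# A tiny perturbation models a near duplicate; it lands in the same bucket.
dup_candidates = candidates(x + 1e-6 * rng.normal(size=dim))
print(dup_candidates)
```

The load-balancing contribution of the paper addresses the skew this naive scheme suffers when many items hash to the same key.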
Zhang, Yinsheng; Zhang, Guoming
2018-01-01
A terminology (or coding system) is a formal set of controlled vocabulary in a specific domain. With a well-defined terminology, each concept in the target domain is assigned with a unique code, which can be identified and processed across different medical systems in an unambiguous way. Though there are lots of well-known biomedical terminologies, there is currently no domain-specific terminology for ROP (retinopathy of prematurity). Based on a collection of historical ROP patients' data in the electronic medical record system, we extracted the most frequent terms in the domain and organized them into a hierarchical coding system-ROP Minimal Standard Terminology, which contains 62 core concepts in 4 categories. This terminology has been successfully used to provide highly structured and semantic-rich clinical data in several ROP-related applications.
NASA Technical Reports Server (NTRS)
Keba, John E.
1996-01-01
Rotordynamic coefficients obtained from testing two different hydrostatic bearings are compared to values predicted by two different computer programs. The first set of test data is from a relatively long (L/D=1) orifice-compensated hydrostatic bearing tested in water by Texas A&M University (TAMU Bearing No. 9). The second bearing is a shorter (L/D=.37) bearing and was tested in a lower viscosity fluid by the Rocketdyne Division of Rockwell (Rocketdyne 'Generic' Bearing) at similar rotating speeds and pressures. Computed predictions of bearing rotordynamic coefficients were obtained from the cylindrical seal code 'ICYL', one of the industrial seal codes developed for NASA-LeRC by Mechanical Technology Inc., and from the hydrodynamic bearing code 'HYDROPAD'. The comparison highlights the effect the bearing has on the accuracy of the predictions. The TAMU Bearing No. 9 test data are closely matched by the predictions obtained from the HYDROPAD code (except for the added mass terms), whereas significant differences exist between the data from the Rocketdyne 'Generic' bearing and the code predictions. The results suggest that some aspects of the fluid behavior in the shorter, higher Reynolds number 'Generic' bearing may not be modeled accurately in the codes. The ICYL code predictions for flowrate and direct stiffness approximately equal those of HYDROPAD. Significant differences in cross-coupled stiffness and the damping terms were obtained relative to HYDROPAD and both sets of test data. Several observations are included concerning application of the ICYL code.
Theory of Mind: A Neural Prediction Problem
Koster-Hale, Jorie; Saxe, Rebecca
2014-01-01
Predictive coding posits that neural systems make forward-looking predictions about incoming information. Neural signals contain information not about the currently perceived stimulus, but about the difference between the observed and the predicted stimulus. We propose to extend the predictive coding framework from high-level sensory processing to the more abstract domain of theory of mind; that is, to inferences about others’ goals, thoughts, and personalities. We review evidence that, across brain regions, neural responses to depictions of human behavior, from biological motion to trait descriptions, exhibit a key signature of predictive coding: reduced activity to predictable stimuli. We discuss how future experiments could distinguish predictive coding from alternative explanations of this response profile. This framework may provide an important new window on the neural computations underlying theory of mind. PMID:24012000
Building hierarchical models of avian distributions for the State of Georgia
Howell, J.E.; Peterson, J.T.; Conroy, M.J.
2008-01-01
To predict the distributions of breeding birds in the state of Georgia, USA, we built hierarchical models consisting of 4 levels of nested mapping units of decreasing area: 90,000 ha, 3,600 ha, 144 ha, and 5.76 ha. We used the Partners in Flight database of point counts to generate presence and absence data at locations across the state of Georgia for 9 avian species: Acadian flycatcher (Empidonax virescens), brown-headed nuthatch (Sitta pusilla), Carolina wren (Thryothorus ludovicianus), indigo bunting (Passerina cyanea), northern cardinal (Cardinalis cardinalis), prairie warbler (Dendroica discolor), yellow-billed cuckoo (Coccyzus americanus), white-eyed vireo (Vireo griseus), and wood thrush (Hylocichla mustelina). At each location, we estimated hierarchical-level-specific habitat measurements using the Georgia GAP Analysis 18-class land cover and other Geographic Information System sources. We created candidate, species-specific occupancy models based on previously reported relationships, and fit these using Markov chain Monte Carlo procedures implemented in OpenBUGS. We then created a confidence model set for each species based on Akaike's Information Criterion. We found hierarchical habitat relationships for all species. Three-fold cross-validation estimates of model accuracy indicated an average overall correct classification rate of 60.5%. Comparisons with existing Georgia GAP Analysis models indicated that our models were more accurate overall. Our results provide guidance to wildlife scientists and managers seeking to predict avian occurrence as a function of local and landscape-level habitat attributes.
Interplay Between Conceptual Expectations and Movement Predictions Underlies Action Understanding.
Ondobaka, Sasha; de Lange, Floris P; Wittmann, Marco; Frith, Chris D; Bekkering, Harold
2015-09-01
Recent accounts of understanding goal-directed action underline the importance of a hierarchical predictive architecture. However, the neural implementation of such an architecture remains elusive. In the present study, we used functional neuroimaging to quantify brain activity associated with predicting physical movements, as they were modulated by conceptual expectations regarding the purpose of the object involved in the action. Participants observed object-related actions preceded by a cue that generated both conceptual goal expectations and movement goal predictions. In 2 tasks, observers judged whether conceptual or movement goals matched or mismatched the cue. At the conceptual level, expected goals specifically recruited the posterior cingulate cortex, irrespective of the task and the perceived movement goal. At the movement level, neural activation of the parieto-frontal circuit, including the inferior frontal gyrus and the inferior parietal lobe, reflected unpredicted movement goals. Crucially, this movement prediction error was only present when the purpose of the involved object was expected. These findings provide neural evidence that prior conceptual expectations influence processing of physical movement goals and thereby support the hierarchical predictive account of action processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Prediction of plant lncRNA by ensemble machine learning classifiers.
Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian
2018-05-02
In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances on discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class, with applications largely focused on animal systems. Our approach uses a training set comprised only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply as computationally predicted hypothetical proteins.
Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
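The stacking ensemble described above can be sketched with scikit-learn (synthetic features stand in for the transcript sequence features; the meta-learner choice and parameters are illustrative):

```python
# Sketch of a stacking meta-learner over stochastic gradient boosting and
# random forest base classifiers, matching the ensemble structure in the
# abstract. Data is synthetic, not validated lncRNA sequence features.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

stack = StackingClassifier(
    estimators=[
        # subsample < 1 makes the boosting stochastic
        ("sgb", GradientBoostingClassifier(subsample=0.8, random_state=1)),
        ("rf", RandomForestClassifier(random_state=1)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold base predictions train the meta-learner
)
stack.fit(X_tr, y_tr)

# Probability scores can rank candidates for functional validation.
ranked = stack.predict_proba(X_te)[:, 1]
acc = stack.score(X_te, y_te)
print(round(acc, 3))
```

Ranking by `predict_proba` mirrors the abstract's goal of prioritizing candidate transcripts rather than only labeling them.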
NASA Astrophysics Data System (ADS)
Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi
2017-01-01
Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, that compensates for temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters add extra bits to the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. WP parameter prediction is therefore crucial to research and applications related to WP. Prior work has further improved WP parameter prediction through implicit prediction based on image characteristics and derivation of the parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
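The weight-and-offset arithmetic behind explicit WP can be sketched in a few lines (toy sample values; the rounding and clipping follow the general integer form used in H.264/AVC and HEVC, not a full codec trace):

```python
# Explicit weighted prediction sketch: each reference sample is scaled
# by w / 2**log_wd, rounded, offset, and clipped to the sample range.
def weighted_pred(ref_sample, w, offset, log_wd, bit_depth=8):
    rounding = 1 << (log_wd - 1) if log_wd > 0 else 0
    p = ((ref_sample * w + rounding) >> log_wd) + offset
    return max(0, min((1 << bit_depth) - 1, p))  # clip to [0, 2^bd - 1]

# Fade-to-black example: the current frame is about half as bright as
# the reference, so w/2**log_wd = 32/64 = 0.5 with no offset.
print(weighted_pred(ref_sample=128, w=32, offset=0, log_wd=6))  # → 64
```

The per-reference-frame (w, offset, log_wd) triple is exactly what the slice header carries, which is why predicting these values saves bits.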
DRA/NASA/ONERA Collaboration on Icing Research. Part 2; Prediction of Airfoil Ice Accretion
NASA Technical Reports Server (NTRS)
Wright, William B.; Gent, R. W.; Guffond, Didier
1997-01-01
This report presents results from a joint study by DRA, NASA, and ONERA for the purpose of comparing, improving, and validating the aircraft icing computer codes developed by each agency. These codes are of three kinds: (1) water droplet trajectory prediction, (2) ice accretion modeling, and (3) transient electrothermal deicer analysis. In this joint study, the agencies compared their code predictions with each other and with experimental results. These comparison exercises were published in three technical reports, each with joint authorship. DRA published and had first authorship of Part 1 - Droplet Trajectory Calculations, NASA of Part 2 - Ice Accretion Prediction, and ONERA of Part 3 - Electrothermal Deicer Analysis. The results cover work done during the period from August 1986 to late 1991. As a result, all of the information in this report is dated. Where necessary, current information is provided to show the direction of current research. In the present report on ice accretion, each agency predicted ice shapes on two-dimensional airfoils under icing conditions for which experimental ice shapes were available. In general, all three codes did a reasonable job of predicting the measured ice shapes. For any given experimental condition, one of the three codes predicted the general ice features (i.e., shape, impingement limits, mass of ice) somewhat better than did the other two. However, no single code consistently did better than the other two over the full range of conditions examined, which included rime, mixed, and glaze ice conditions. In several of the cases, DRA showed that the user's knowledge of icing can significantly improve the accuracy of the code prediction. Rime ice predictions were reasonably accurate and consistent among the codes, because droplets freeze on impact and the freezing model is simple.
Glaze ice predictions were less accurate and less consistent among the codes, because the freezing model is more complex and is critically dependent upon unsubstantiated heat transfer and surface roughness models. Thus, the heat transfer prediction methods used in the codes became the subject of a separate study in this report, comparing predicted heat transfer coefficients with a limited experimental database of heat transfer coefficients for cylinders with simulated glaze and rime ice shapes. The codes did a good job of predicting heat transfer coefficients near the stagnation region of the ice shapes. But in the region of the ice horns, all three codes predicted heat transfer coefficients considerably higher than the measured values. An important conclusion of this study is that further research is needed to understand the finer details of the glaze ice accretion process and to develop improved glaze ice accretion models.
Cohen, Mark E; Dimick, Justin B; Bilimoria, Karl Y; Ko, Clifford Y; Richards, Karen; Hall, Bruce Lee
2009-12-01
Although logistic regression has commonly been used to adjust for risk differences in patient and case mix to permit quality comparisons across hospitals, hierarchical modeling has been advocated as the preferred methodology, because it accounts for clustering of patients within hospitals. It is unclear whether hierarchical models would yield important differences in quality assessments compared with logistic models when applied to American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) data. Our objective was to evaluate differences in logistic versus hierarchical modeling for identifying hospitals with outlying outcomes in the ACS-NSQIP. Data from ACS-NSQIP patients who underwent colorectal operations in 2008 at hospitals that reported at least 100 operations were used to generate logistic and hierarchical prediction models for 30-day morbidity and mortality. Differences in risk-adjusted performance (ratio of observed-to-expected events) and outlier detections from the two models were compared. Logistic and hierarchical models identified the same 25 hospitals as morbidity outliers (14 low and 11 high outliers), but the hierarchical model identified 2 additional high outliers. Both models identified the same eight hospitals as mortality outliers (five low and three high outliers). The values of observed-to-expected events ratios and p values from the two models were highly correlated. Results were similar when data were included from hospitals providing <100 patients. When applied to ACS-NSQIP data, logistic and hierarchical models provided nearly identical results with respect to identification of hospitals' observed-to-expected events ratio outliers. As hierarchical models are prone to implementation problems, logistic regression will remain an accurate and efficient method for performing risk adjustment of hospital quality comparisons.
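The observed-to-expected (O/E) ratio at the center of this comparison can be illustrated with a small synthetic sketch (hypothetical risk factors and hospital labels; a plain logistic model only, not the hierarchical variant):

```python
# O/E sketch: a logistic model fit on all patients gives each patient an
# expected event probability; a hospital's O/E ratio divides its observed
# event count by the sum of its patients' expected probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 3))             # synthetic patient risk factors
hospital = rng.integers(0, 10, size=n)  # 10 synthetic hospitals
logit = -1.0 + X @ np.array([0.8, -0.5, 0.3])
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated 30-day events

model = LogisticRegression().fit(X, y)
expected = model.predict_proba(X)[:, 1]

oe = {h: y[hospital == h].sum() / expected[hospital == h].sum()
      for h in range(10)}
# O/E near 1.0 means outcomes match the case-mix-adjusted expectation;
# outlier detection flags hospitals whose ratio differs significantly.
print({h: round(v, 2) for h, v in oe.items()})
```

The hierarchical alternative the abstract evaluates would shrink each hospital's estimate toward the overall mean instead of using raw sums.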
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Predicting South Korean University Students' Happiness through Social Support and Efficacy Beliefs
ERIC Educational Resources Information Center
Lee, Diane Sookyoung; Padilla, Amado M.
2016-01-01
This study investigated the adversity and coping experiences of 198 South Korean university students and takes a cultural lens in understanding how social and individual factors shape their happiness. Hierarchical linear regression analyses suggest that Korean students' perceptions of social support significantly predicted their happiness,…
Developmental Screening Referrals: Child and Family Factors that Predict Referral Completion
ERIC Educational Resources Information Center
Jennings, Danielle J.; Hanline, Mary Frances
2013-01-01
This study researched the predictive impact of developmental screening results and the effects of child and family characteristics on completion of referrals given for evaluation. Logistic and hierarchical logistic regression analyses were used to determine the significance of 10 independent variables on the outcome variable. The number of…
Psychosocial Factors Predicting First-Year College Student Success
ERIC Educational Resources Information Center
Krumrei-Mancuso, Elizabeth J.; Newton, Fred B.; Kim, Eunhee; Wilcox, Dan
2013-01-01
This study made use of a model of college success that involves students achieving academic goals and life satisfaction. Hierarchical regressions examined the role of six psychosocial factors for college success among 579 first-year college students. Academic self-efficacy and organization and attention to study were predictive of first semester…
Predicting Successful Mathematics Remediation among Latina/o Students
ERIC Educational Resources Information Center
Crisp, Gloria; Reyes, Nicole Alia Salis; Doran, Erin
2017-01-01
This study examines Latina/o students' remedial math needs and outcomes. Data were drawn from a national sample of Latina/o students. Hierarchical generalized linear modeling techniques were used to predict three successful remediation outcomes. Results highlight the importance of providing financial aid and academic support to Latina/o students,…
Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet
NASA Technical Reports Server (NTRS)
Koch, L. Danielle; Bridges, James; Khavaran, Abbas
2002-01-01
To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits under the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video material.
Development and verification of NRC's single-rod fuel performance codes FRAPCON-3 and FRAPTRAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.
1998-03-01
The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models are focusing on providing best-estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (>55 GWd/MTU). Both codes will be assessed against a data base independent of the data base used for code benchmarking, and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent data bases.
Method for rapid high-frequency seismogram calculation
NASA Astrophysics Data System (ADS)
Stabile, Tony Alfredo; De Matteis, Raffaella; Zollo, Aldo
2009-02-01
We present a method for rapid, high-frequency seismogram calculation that makes use of an algorithm to automatically generate an exhaustive set of seismic phases with an appreciable amplitude on the seismogram. The method uses a hierarchical order of ray and seismic-phase generation, taking into account existing constraints on ray paths as well as physical constraints. To compute synthetic seismograms, the COMRAD code (from the Italian: "COdice Multifase per il RAy-tracing Dinamico") uses a dynamic ray-tracing code as its core. To validate the code, we computed synthetic seismograms in a layered medium using both COMRAD and a code that computes the complete wave field by the discrete wavenumber method. The seismograms are compared according to a time-frequency misfit criterion based on the continuous wavelet transform of the signals. Although the number of phases is considerably reduced by the selection criteria, the results show that the loss in amplitude over the whole seismogram is negligible. Moreover, the time required to compute the synthetics using the COMRAD code (truncating the ray series at the 10th generation) is 3- to 4-fold less than that needed for the AXITRA code (up to a frequency of 25 Hz).
Ground-state coding in partially connected neural networks
NASA Technical Reports Server (NTRS)
Baram, Yoram
1989-01-01
Patterns over (-1,0,1) define, by their outer products, partially connected neural networks, consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over (-1,1) define the subcodes stored in the subnetworks, which agree in their common bits. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones, and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail.
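The outer-product construction and the local-stability claim can be checked numerically in a minimal two-word, four-neuron example (a standard Hopfield-style sign-threshold update is assumed for the dynamics):

```python
# Outer-product (Hebbian) storage sketch: two mutually orthogonal ±1
# words define the weight matrix, and each stored word is a fixed point
# of the sign-threshold update, i.e. a locally stable state.
import numpy as np

words = np.array([[1, 1, -1, -1],
                  [1, -1, 1, -1]])          # mutually orthogonal words
W = sum(np.outer(w, w) for w in words)      # sum of outer products
np.fill_diagonal(W, 0)                      # no self-connections

def update(s):
    return np.sign(W @ s).astype(int)       # synchronous threshold update

for w in words:
    assert np.array_equal(update(w), w)     # stored words are stable
print("both stored words are fixed points")
```

Orthogonality is what makes the cross-talk term vanish, which is why the stability condition in the abstract is stated for mutually orthogonal subcode words.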
Influence of flowfield and vehicle parameters on engineering aerothermal methods
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Zoby, E. Vincent; Thompson, Richard A.
1989-01-01
The reliability and flexibility of three engineering codes used in the aerospace industry (AEROHEAT, INCHES, and MINIVER) were investigated by comparing the results of these codes with Reentry F flight data and ground-test heat-transfer data for a range of cone angles, and with the predictions obtained using the detailed VSL3D code; the engineering solutions were also compared with each other. In particular, the impact of several vehicle and flow-field parameters on the heat transfer, and the capability of the engineering codes to predict these results, were determined. It was found that entropy, pressure gradient, nose bluntness, gas chemistry, and angle of attack all affect heating levels. A comparison of the results of the three engineering codes with Reentry F flight data and with the predictions obtained from the VSL3D code showed very good agreement within the codes' regions of applicability. It is emphasized that the parameters used in this study can significantly influence the actual heating levels and the predictive capability of a code.
Crucial nesting habitat for gunnison sage-grouse: A spatially explicit hierarchical approach
Aldridge, Cameron L.; Saher, D.J.; Childers, T.M.; Stahlnecker, K.E.; Bowen, Z.H.
2012-01-01
Gunnison sage-grouse (Centrocercus minimus) is a species of special concern and is currently considered a candidate species under the Endangered Species Act. Careful management is therefore required to ensure that suitable habitat is maintained, particularly because much of the species' current distribution faces exurban development pressures. We assessed hierarchical nest site selection patterns of Gunnison sage-grouse inhabiting the western portion of the Gunnison Basin, Colorado, USA, at multiple spatial scales, using logistic regression-based resource selection functions. Models were selected using Akaike's Information Criterion corrected for small sample sizes (AICc), and predictive surfaces were generated using model-averaged relative probabilities. Landscape-scale factors that had the most influence on nest site selection included the proportion of sagebrush cover >5%, mean productivity, and density of 2-wheel-drive roads. The landscape-scale predictive surface captured 97% of known Gunnison sage-grouse nests within the top 5 of 10 prediction bins, implicating 57% of the basin as crucial nesting habitat. Crucial habitat identified by the landscape model was used to define the extent for patch-scale modeling efforts. Patch-scale variables that had the greatest influence on nest site selection were the proportion of big sagebrush cover >10%, distance to residential development, distance to high-volume paved roads, and mean productivity. This model accurately predicted independent nest locations. The unique hierarchical structure of our models more accurately captures the nested nature of habitat selection, and allowed for increased discrimination within larger landscapes of suitable habitat. We extrapolated the landscape-scale model to the entire Gunnison Basin because of conservation concerns for this species.
We believe this predictive surface is a valuable tool that can be incorporated into land use and conservation planning, as well as the assessment of future land-use scenarios. © 2011 The Wildlife Society.
ERIC Educational Resources Information Center
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
Ringdal, Kjetil G; Skaga, Nils Oddvar; Hestnes, Morten; Steen, Petter Andreas; Røislien, Jo; Rehn, Marius; Røise, Olav; Krüger, Andreas J; Lossius, Hans Morten
2013-05-01
Injury severity is most frequently classified using the Abbreviated Injury Scale (AIS) as a basis for the Injury Severity Score (ISS) and the New Injury Severity Score (NISS), which are used for assessment of overall injury severity in the multiply injured patient and in outcome prediction. European trauma registries recommended the AIS 2008 edition, but the levels of inter-rater agreement and reliability of ISS and NISS, associated with its use, have not been reported. Nineteen Norwegian AIS-certified trauma registry coders were invited to score 50 real, anonymised patient medical records using AIS 2008. Rater agreements for ISS and NISS were analysed using Bland-Altman plots with 95% limits of agreement (LoA). A clinically acceptable LoA range was set at ± 9 units. Reliability was analysed using a two-way mixed model intraclass correlation coefficient (ICC) statistics with corresponding 95% confidence intervals (CI) and hierarchical agglomerative clustering. Ten coders submitted their coding results. Of their AIS codes, 2189 (61.5%) agreed with a reference standard, 1187 (31.1%) real injuries were missed, and 392 non-existing injuries were recorded. All LoAs were wider than the predefined, clinically acceptable limit of ± 9, for both ISS and NISS. The joint ICC (range) between each rater and the reference standard was 0.51 (0.29,0.86) for ISS and 0.51 (0.27,0.78) for NISS. The joint ICC (range) for inter-rater reliability was 0.49 (0.19,0.85) for ISS and 0.49 (0.16,0.82) for NISS. Univariate linear regression analyses indicated a significant relationship between the number of correctly AIS-coded injuries and total number of cases coded during the rater's career, but no significant relationship between the rater-against-reference ISS and NISS ICC values and total number of cases coded during the rater's career. Based on AIS 2008, ISS and NISS were not reliable for summarising anatomic injury severity in this study. 
This result indicates a limitation in their use as benchmarking tools for trauma system performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
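The agreement analysis described above reduces to a few lines of arithmetic. Below is a minimal sketch of a Bland-Altman limits-of-agreement computation against the study's clinically acceptable range of ± 9 units; the ISS scores are invented for illustration, not taken from the study.

```python
import statistics

def bland_altman_loa(rater, reference):
    """Bias and 95% limits of agreement between two raters' scores.

    Returns (bias, lower_loa, upper_loa), where the limits of agreement
    are bias +/- 1.96 * SD of the pairwise differences, as in a standard
    Bland-Altman analysis.
    """
    diffs = [a - b for a, b in zip(rater, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical ISS scores from one rater vs. the reference standard.
rater = [25, 17, 9, 34, 16, 29, 4, 20]
reference = [22, 17, 13, 38, 14, 25, 4, 18]
bias, lo, hi = bland_altman_loa(rater, reference)

# The predefined clinically acceptable LoA range in the study was +/- 9.
acceptable = (hi - lo) / 2 <= 9
```

In the study itself, every rater's LoA range exceeded the ± 9 threshold, which is what drives the conclusion that ISS and NISS were unreliable here.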
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1993-01-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
Understanding phylogenies in biology: the influence of a Gestalt Perceptual Principle.
Novick, Laura R; Catley, Kefyn M
2007-12-01
Cladograms, hierarchical diagrams depicting evolutionary histories among (groups of) species, are commonly drawn in 2 informationally equivalent formats--tree and ladder. The authors hypothesize that these formats are not computationally equivalent because the Gestalt principle of good continuation obscures the hierarchical structure of ladders. Experimental results confirmed that university students (N = 44) prefer to subdivide ladders in accordance with good continuation rather than with the underlying hierarchical structure. Two subsequent experiments (N = 164) investigated cladogram understanding by examining students' ability to translate between formats (e.g., from tree to ladder). As predicted, students had greater difficulty understanding ladders than trees. This effect was larger for students with weaker backgrounds in biology. These results have important implications for evolution education reform.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
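The Lagrangian rate-distortion optimization mentioned above can be sketched as a discrete search over candidate operating points: pick the (source rate, channel code rate) pair minimizing J = D + λR. The candidate points and distortion values below are hypothetical, not taken from the paper.

```python
# Candidate operating points: (source_kbps, channel_code_rate, expected
# distortion in MSE). These numbers are invented for illustration only.
candidates = [
    (256, 1 / 2, 48.0),
    (256, 2 / 3, 55.0),
    (384, 1 / 2, 36.0),
    (384, 2 / 3, 61.0),  # weaker protection: channel errors dominate
    (512, 2 / 3, 70.0),
]

def total_rate(source_kbps, channel_rate):
    """Transmitted rate after the overhead of a rate-R channel code."""
    return source_kbps / channel_rate

def best_operating_point(lam):
    """Minimise the Lagrangian cost J = D + lam * R over all candidates."""
    return min(candidates, key=lambda c: c[2] + lam * total_rate(c[0], c[1]))

point = best_operating_point(lam=0.02)
```

Sweeping λ traces out the convex hull of the rate-distortion points: a small λ favors low distortion (high source rate, strong protection), a large λ favors low transmitted rate.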
Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.
NASA Astrophysics Data System (ADS)
Maragos, Petros
The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)
Statistical Analysis of the AIAA Drag Prediction Workshop CFD Solutions
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third AIAA Drag Prediction Workshop, held in June 2006, focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This report compares the transonic cruise prediction results of the second and third workshops using statistical analysis.
Młynarski, Wiktor
2014-01-01
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
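The linear efficient-coding step described above can be illustrated with a toy, self-contained FastICA run: a symmetric fixed-point iteration with a tanh nonlinearity recovering two independent sources from a two-channel mixture, standing in for the binaural input. All signals and the mixing matrix below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 4000)
s1 = np.sin(2 * np.pi * 3 * t)             # source 1: sinusoid
s2 = np.sign(np.sin(2 * np.pi * 5 * t))    # source 2: square wave
S = np.vstack([s1, s2])                    # (2, n) source matrix
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing matrix ("two ears")
X = A @ S

# Whiten the mixture (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA: W+ = E[g(WX)X^T] - diag(E[g'(WX)]) W, then
# symmetric decorrelation W <- (W W^T)^(-1/2) W via the SVD.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    W = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt

Y = W @ Xw  # recovered components (up to sign and permutation)
```

Each row of Y should correlate almost perfectly with one distinct source; this is the sense in which the learned linear transform extracts the independent structure, here of waveforms rather than the paper's spectrogram features.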
NASA Astrophysics Data System (ADS)
da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.
2014-05-01
Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with single-view and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
Learning Hierarchical Feature Extractors for Image Recognition
2012-09-01
space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional...parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for...pooling (dotted lines) is consistently higher than average pooling (solid lines), but the gap is much less significant with intersection kernel (closed
2010-02-27
investigated in more detail. The intermediate level of fidelity, though more expensive, is then used to refine the analysis, add geometric detail, and...design stage is used to further refine the analysis, narrowing the design to a handful of options. Figure 1. Integrated Hierarchical Framework. In...computational structural and computational fluid modeling. For the structural analysis tool we used McIntosh Structural Dynamics' finite element code CNEVAL
Resource-Constrained Spatial Hot Spot Identification
2011-01-01
into three categories (Cameron and Leitner, 2005): Thematic Mapping. Concentrations of events are color-coded in discrete geographic areas that...of Boston burglary events in 1999 and provided by Cameron and Leitner (2005). The first map reflects burglary rates per 100,000 residents by Census...[Figure: Boston burglary rates, 1999 — thematic mapping, kernel density interpolation, and hierarchical clustering maps; source: Cameron and Leitner, 2005.] For
Transonic Drag Prediction on a DLR-F6 Transport Configuration Using Unstructured Grid Solvers
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Frink, N. T.; Mavriplis, D. J.; Rausch, R. D.; Milholen, W. E.
2004-01-01
A second international AIAA Drag Prediction Workshop (DPW-II) was organized and held in Orlando, Florida on June 21-22, 2003. The primary purpose was to investigate the code-to-code uncertainty, address the sensitivity of the drag prediction to grid size, and quantify the uncertainty in predicting nacelle/pylon drag increments at a transonic cruise condition. This paper presents an in-depth analysis of the DPW-II computational results from three state-of-the-art unstructured grid Navier-Stokes flow solvers exercised on similar families of tetrahedral grids. The flow solvers are USM3D, a tetrahedral cell-centered upwind solver; FUN3D, a tetrahedral node-centered upwind solver; and NSU3D, a general element node-centered central-differenced solver. For the wing/body, the total drag predicted for a constant-lift transonic cruise condition showed a decrease in code-to-code variation with grid refinement as expected. For the same flight condition, the wing/body/nacelle/pylon total drag and the nacelle/pylon drag increment predicted showed an increase in code-to-code variation with grid refinement. Although the range in total drag for the wing/body fine grids was only 5 counts, a code-to-code comparison of surface pressures and surface restricted streamlines indicated that the three solvers were not all converging to the same flow solutions: different shock locations and separation patterns were evident. Similarly, the wing/body/nacelle/pylon solutions did not appear to be converging to the same flow solutions. Overall, grid refinement did not consistently improve the correlation with experimental data for either the wing/body or the wing/body/nacelle/pylon configuration. Although the absolute values of total drag predicted by two of the solvers for the medium and fine grids did not compare well with the experiment, the incremental drag predictions were within plus or minus 3 counts of the experimental data.
The correlation with experimental incremental drag was not significantly changed by specifying transition. Although the sources of code-to-code variation in force and moment predictions for the three unstructured grid codes have not yet been identified, the current study reinforces the necessity of applying multiple codes to the same application to assess uncertainty.
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
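The payoff of sparse matrix algebra like that described above is that cost scales with the number of stored nonzero blocks rather than the full matrix dimension. Below is a minimal block-sparse multiply sketch in that spirit; the dict-of-blocks layout, block size, and matrices are illustrative assumptions only (ONETEP's actual hierarchical scheme is far more elaborate and distributed).

```python
import numpy as np

B = 2  # block edge length (illustrative)

def block_matmul(Ablocks, Bblocks):
    """Multiply two block-sparse matrices stored as dicts mapping
    (block_row, block_col) -> dense B x B block. Only stored (nonzero)
    blocks are ever touched, so work scales with block sparsity."""
    C = {}
    for (i, k), Ab in Ablocks.items():
        for (k2, j), Bb in Bblocks.items():
            if k == k2:
                C[(i, j)] = C.get((i, j), np.zeros((B, B))) + Ab @ Bb
    return C

# Two tiny block-sparse matrices: only the listed blocks are nonzero.
Amat = {(0, 0): np.eye(B), (1, 1): 2 * np.eye(B)}
Bmat = {(0, 0): np.full((B, B), 3.0), (1, 0): np.eye(B)}
C = block_matmul(Amat, Bmat)
```

Only two block products are formed here, regardless of how many zero blocks the logical matrices contain; that locality is what makes linear-scaling approaches possible when the density matrix is sparse.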
An engineer's view on genetic information and biological evolution.
Battail, Gérard
2004-01-01
We develop ideas on genome replication introduced in Battail [Europhys. Lett. 40 (1997) 343]. Starting with the hypothesis that the genome replication process uses error-correcting means, and the auxiliary one that nested codes are used to this end, we first review the concepts of redundancy and error-correcting codes. Then we show that these hypotheses imply that: distinct species exist with a hierarchical taxonomy, there is a trend of evolution towards complexity, and evolution proceeds by discrete jumps. At least the first two features above may be considered as biological facts so, in the absence of direct evidence, they provide an indirect proof in favour of the hypothesized error-correction system. The very high redundancy of genomes makes it possible. In order to explain how it is implemented, we suggest that soft codes and replication decoding, to be briefly described, are plausible candidates. Experimentally proven properties of long-range correlation of the DNA message substantiate this claim.
The Role of Hierarchy in Response Surface Modeling of Wind Tunnel Data
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2010-01-01
This paper is intended as a tutorial introduction to certain aspects of response surface modeling for the experimentalist who has started to explore these methods as a means of improving productivity and quality in wind tunnel testing and other aerospace applications. A brief review of the productivity advantages of response surface modeling in aerospace research is followed by a description of the advantages of a common coding scheme that scales and centers independent variables. The benefits of model term reduction are reviewed. A constraint on model term reduction with coded factors is described in some detail, which requires such models to be well-formulated, or hierarchical. Examples illustrate the consequences of ignoring this constraint. The implication for automated regression model reduction procedures is discussed, and some opinions formed from the author's experience are offered on coding, model reduction, and hierarchy.
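The two ideas above — coded factors and well-formulated (hierarchical) models — are both mechanical enough to sketch. The coding scheme centers each natural variable at its midpoint and scales by its half-range; the hierarchy check verifies that every factor in an interaction term also appears as a main effect. The factor ranges and term lists are invented examples.

```python
def code_factor(x, lo, hi):
    """Map a natural variable on [lo, hi] to the coded interval [-1, +1]:
    center at the midpoint, scale by the half-range, so every factor is
    dimensionless and comparably scaled."""
    center = (lo + hi) / 2
    half_range = (hi - lo) / 2
    return (x - center) / half_range

def is_hierarchical(terms):
    """True if every factor appearing in any term also appears as a main
    effect, i.e. the model is well-formulated. Terms are tuples of factor
    names, e.g. ('aoa',) for a main effect, ('aoa', 'mach') for an
    interaction."""
    mains = {t[0] for t in terms if len(t) == 1}
    return all(set(t) <= mains for t in terms)

# Example: angle of attack swept from -4 to +12 degrees.
assert code_factor(-4, -4, 12) == -1.0
assert code_factor(4, -4, 12) == 0.0
assert code_factor(12, -4, 12) == 1.0
```

With coded factors, dropping the main effect ('mach',) while keeping the interaction ('aoa', 'mach') silently changes what the remaining coefficients mean, which is exactly the failure mode the paper's constraint guards against.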
Siminoff, Laura A; Traino, Heather M; Gordon, Nahida H
2011-10-01
This study explores the effects of tissue requesters' relational, persuasive, and nonverbal communication on families' final donation decisions. One thousand sixteen (N = 1,016) requests for tissue donation were audiotaped and analyzed using the Siminoff Communication Content and Affect Program, a computer application specifically designed to code and assist with the quantitative analysis of communication data. This study supports the important role of communication strategies in health-related decision making. Families were more likely to consent to tissue donation when confirmational messages (e.g., messages that expressed validation or acceptance) or persuasive tactics such as credibility, altruism, or esteem were used during donation discussions. Consent was also more likely when family members exhibited nonverbal immediacy or disclosed private information about themselves or the patient. The results of a hierarchical log-linear regression revealed that the use of relational communication during requests directly predicted family consent. The results provide information about surrogate decision making in end-of-life situations and may be used to guide future practice in obtaining family consent to tissue donation.
The free-energy self: a predictive coding account of self-recognition.
Apps, Matthew A J; Tsakiris, Manos
2014-04-01
Recognising and representing one's self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account one's body is processed in a Bayesian manner as the most likely to be "me". Such probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up "surprise" signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and particularly evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest. Copyright © 2013 Elsevier Ltd. All rights reserved.
A neural model of hierarchical reinforcement learning.
Rasmussen, Daniel; Voelker, Aaron; Eliasmith, Chris
2017-01-01
We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain's general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model's behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.
Multi-scale, Hierarchically Nested Young Stellar Structures in LEGUS Galaxies
NASA Astrophysics Data System (ADS)
Thilker, David A.; LEGUS Team
2017-01-01
The study of star formation in galaxies has predominantly been limited to either young stellar clusters and HII regions, or much larger kpc-scale morphological features such as spiral arms. The HST Legacy ExtraGalactic UV Survey (LEGUS) provides a rare opportunity to link these scales in a diverse sample of nearby galaxies and obtain a more comprehensive understanding of their co-evolution for comparison against model predictions. We have utilized LEGUS stellar photometry to identify young, resolved stellar populations belonging to several age bins and then defined nested hierarchical structures as traced by these subsamples of stars. Analogous hierarchical structures were also defined using LEGUS catalogs of unresolved young stellar clusters. We will present our emerging results concerning the physical properties (e.g. area, star counts, stellar mass, star formation rate, ISM characteristics), occupancy statistics (e.g. clusters per substructure versus age and scale, parent/child demographics) and relation to overall galaxy morphology/mass for these building blocks of hierarchical star-forming structure.
The Utility of the MAPI in Predicting Urban Middle School Competence.
ERIC Educational Resources Information Center
Paulus, John A.; Perosa, Linda M.
A sample of 107 eighth graders from a large urban middle school in the Midwest was administered the Millon Adolescent Personality Inventory (MAPI) to determine its utility in predicting grades earned, attendance, and social competence. The results of hierarchical multiple regression analysis indicated that the MAPI coping patterns significantly…
In general, the accuracy of a predicted toxicity value increases with increase in similarity between the query chemical and the chemicals used to develop a QSAR model. A toxicity estimation methodology employing this finding has been developed. A hierarchical based clustering t...
In this article we describe an approach for predicting average hourly concentrations of ambient PM10 in Vancouver. We know our solution also applies to hourly ozone fields and believe it may be quite generally applicable. We use a hierarchical Bayesian approach. At the primary ...
Falling Off Track: How Teacher-Student Relationships Predict Early High School Failure Rates.
ERIC Educational Resources Information Center
Miller, Shazia Rafiullah
This paper examines the relationship between the climate of teacher-student relations within a school and individual student's likelihood of freshman year success. Using administrative data from the Chicago Public Schools and survey data, researchers used hierarchical linear modeling to determine whether teacher-student climate predicts students'…
NASA Astrophysics Data System (ADS)
Feng, Shou; Fu, Ping; Zheng, Wenbin
2018-03-01
Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposed a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), the hierarchy of which is a directed acyclic graph that is more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
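The hierarchy-constraint adjustment described above can be sketched in a few lines: after per-node classifiers emit preliminary scores, cap each node's score by its ancestors' scores so a child term is never predicted more confidently than its parents, even over a DAG with multiple parents as in the GO. The ontology fragment and scores below are invented for illustration, and this min-capping rule is a generic consistency step, not the paper's full Bayesian-network nodes interaction method.

```python
# Toy GO-like DAG: node -> list of parent nodes.
parents = {
    "GO:b": ["GO:root"],
    "GO:c": ["GO:root"],
    "GO:d": ["GO:b", "GO:c"],  # multiple parents, as in the Gene Ontology
}

def enforce_hierarchy(scores, parents):
    """Return scores adjusted so every node's score is <= the score of
    each of its parents (and hence of all its ancestors)."""
    fixed = dict(scores)
    def resolve(node):
        for p in parents.get(node, []):
            resolve(p)  # fix ancestors first, then cap this node
            fixed[node] = min(fixed[node], fixed[p])
        return fixed[node]
    for n in scores:
        resolve(n)
    return fixed

prelim = {"GO:root": 0.9, "GO:b": 0.6, "GO:c": 0.3, "GO:d": 0.8}
consistent = enforce_hierarchy(prelim, parents)
# "GO:d" is capped by its weakest ancestor: min(0.8, 0.6, 0.3) = 0.3
```

After this pass, thresholding the scores at any level yields a set of predicted terms that is closed under the ancestor relation, which is what the hierarchy constraint requires.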
Predicting U.S. food demand in the 20th century: a new look at system dynamics
NASA Astrophysics Data System (ADS)
Moorthy, Mukund; Cellier, Francois E.; LaFrance, Jeffrey T.
1998-08-01
The paper describes a new methodology for predicting the behavior of macroeconomic variables. The approach is based on System Dynamics and Fuzzy Inductive Reasoning. A four-layer pseudo-hierarchical model is proposed. The bottom layer makes predictions about population dynamics, age distributions among the populace, as well as demographics. The second layer makes predictions about the general state of the economy, including such variables as inflation and unemployment. The third layer makes predictions about the demand for certain goods or services, such as milk products, used cars, mobile telephones, or internet services. The fourth and top layer makes predictions about the supply of such goods and services, in terms of their prices. Each layer can be influenced by control variables whose values are determined only at higher levels. In this sense, the model is not strictly hierarchical. For example, the demand for goods at level three depends on the prices of these goods, which are only determined at level four. Yet, the prices are themselves influenced by the expected demand. The methodology is exemplified by means of a macroeconomic model that makes predictions about US food demand during the 20th century.
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, M. H.; Padula, S. L.
1986-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2018-06-01
The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomenon. 
Cortical oscillatory band activity may act as neurophysiological substrates for auditory prediction. Tinnitus has been modeled as an auditory object which may demonstrate incomplete processing during auditory scene analysis resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.
Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
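The stub-search step described in this record can be illustrated with a toy sketch. This is not the actual pcdsearch/Rpcdsearch implementation; the code dictionary, codes, and stubs below are hypothetical.

```python
# Hypothetical sketch: identify candidate clinical codes from word-stubs
# (matched against code descriptions) and code-stubs (matched against code
# prefixes), as the abstract describes.

code_dictionary = {
    "E10..": "Schizophrenic disorders",
    "E11..": "Affective psychoses",
    "Eu20.": "[X]Schizophrenia",
    "H33..": "Asthma",
}

def search_codes(dictionary, word_stubs=(), code_stubs=()):
    """Return codes whose term matches a word-stub or whose code matches a code-stub."""
    hits = {}
    for code, term in dictionary.items():
        if any(stub.lower() in term.lower() for stub in word_stubs):
            hits[code] = term
        elif any(code.startswith(stub) for stub in code_stubs):
            hits[code] = term
    return hits

candidates = search_codes(code_dictionary,
                          word_stubs=["schizo", "psychos"],
                          code_stubs=["E1"])
```

In practice the returned candidate list is reviewed, and only codes judged relevant to the condition of interest are retained for case identification.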
Jing, X; Cimino, J J
2014-01-01
Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information by applying it to large single and paired data sets, taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication of rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication of pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes.
The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
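A minimal sketch of the node-count thresholding idea follows, assuming a toy ICD9-CM-like hierarchy and made-up usage counts; the published method also applies class-count, ratio, p-value, and percentile thresholds.

```python
# Hypothetical sketch: propagate usage counts up a hierarchical terminology,
# then keep only nodes whose aggregated count meets a node-count threshold.

hierarchy = {            # child -> parent (toy ICD9-CM-like structure)
    "250.0": "250", "250.1": "250", "250": "240-279",
    "401.9": "401", "401": "390-459",
}
direct_counts = {"250.0": 120, "250.1": 5, "401.9": 40}

def aggregate_counts(hierarchy, direct):
    """Propagate each node's direct count up to all of its ancestors."""
    totals = dict(direct)
    for node, n in direct.items():
        parent = hierarchy.get(node)
        while parent is not None:
            totals[parent] = totals.get(parent, 0) + n
            parent = hierarchy.get(parent)
    return totals

def filter_graph(totals, node_count_threshold):
    """Keep only nodes whose aggregated count meets the threshold."""
    return {node for node, n in totals.items() if n >= node_count_threshold}

totals = aggregate_counts(hierarchy, direct_counts)
kept = filter_graph(totals, node_count_threshold=50)
```

Rarely used branches (here the "401" subtree) drop out, leaving a summary view of the heavily used parts of the hierarchy.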
CELFE/NASTRAN Code for the Analysis of Structures Subjected to High Velocity Impact
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1978-01-01
CELFE (Coupled Eulerian Lagrangian Finite Element)/NASTRAN is a three-dimensional finite element code with the capability to analyze structures subjected to high velocity impact. The local response is predicted by CELFE and, for large problems, the far-field impact response is predicted by NASTRAN. The coupling of the CELFE code with NASTRAN (CELFE/NASTRAN code) and the application of the code to selected three-dimensional high velocity impact problems are described.
Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D
2018-04-30
The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Barkin, Yu. V.
2003-04-01
BELT-HIERARCHIC STRUCTURE OF THE RING, SATELLITE AND PLANET SYSTEMS: PREDICTION OF S/2001 U1 AND OTHER OBJECTS IN THE SOLAR SYSTEM. Yu. V. Barkin, Sternberg Astronomical Institute, Moscow, Russia, barkin@sai.msu.ru. Structural regularities of planet and satellite systems have been studied. A statistical analysis was carried out of the distribution of the orbital semi-major axes of the planets, comets, and centaurs of the Solar system; of the satellite and ring systems of Jupiter, Saturn, Neptune, and Uranus; and of the exoplanet systems of the pulsars PSR 1257+12 and PSR 1828-11 and of the main-sequence star Ups And. The following empirical regularities were described [1]: 1) the bodies of these systems are organized into hierarchic groups, the principal groups comprising five companions; 2) within each group, the differences between the semi-major axes of neighboring orbits are constant; 3) between neighboring hierarchic groups, these spacings increase by a factor of six toward the outer group; 4) the filling of the groups, and some present changes in their structure, were caused by past catastrophes in the corresponding systems. A special method was developed for reconstructing the catastrophes that occurred during the life of the Solar system (SS). The suggested method explains in a uniform way the observed semi-major axes and mean eccentricities of the planets. In particular, Pancul's hypothesis of Jupiter's formation from two giant protoplanets (Jupiter I and Jupiter II) was confirmed. A new empirical law was established for the filling of the orbits of the regular groups of planets or satellites (or ring structures) in hierarchically ordered systems of celestial bodies: the total number of bodies is proportional to the catastrophic value of the eccentricity, which is the same for the first, second, ..., and fifth orbits of all groups.
The theoretical numbers of bodies for the indicated orbits practically coincide with the observed numbers in the main groups of all the considered systems of celestial bodies (in the Solar system and also in the exoplanet systems of the pulsars PSR 1257+12, PSR 1828-11, and Ups And). The established regularities of the orbital structures allow new objects to be predicted in the Solar system and in exoplanet systems, and some of these have been predicted in recent years. Thus the new satellite of Uranus (S/2001 U 1) has a semi-major axis of 8 570 000 km (Minor Planet Electronic Circular, issued 2002 Sept. 30); this satellite was predicted earlier as satellite E1 (8 640 000 km) [1]. [1] Yu. V. Barkin (2001) Electronic journal «Studied in Russia», 161, pp. 1821-1830. http://zhurnal.ape.relarn.ru/articles/2001/161.pdf.
Correlation approach to identify coding regions in DNA sequences
NASA Technical Reports Server (NTRS)
Ossadnik, S. M.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.
1994-01-01
Recently, it was observed that noncoding regions of DNA sequences possess long-range power-law correlations, whereas coding regions typically display only short-range correlations. We develop an algorithm based on this finding that enables investigators to perform a statistical analysis on long DNA sequences to locate possible coding regions. The algorithm is particularly successful in predicting the location of lengthy coding regions. For example, for the complete genome of yeast chromosome III (315,344 nucleotides), at least 82% of the predictions correspond to putative coding regions; the algorithm correctly identified all coding regions larger than 3000 nucleotides, 92% of coding regions between 2000 and 3000 nucleotides long, and 79% of coding regions between 1000 and 2000 nucleotides. The predictive ability of this new algorithm supports the claim that there is a fundamental difference in the correlation property between coding and noncoding sequences. This algorithm, which is not species-dependent, can be implemented with other techniques for rapidly and accurately locating relatively long coding regions in genomic sequences.
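The correlation idea behind such an algorithm can be illustrated with a much-simplified sketch: map the sequence to a +/-1 "DNA walk" and estimate a scaling exponent from the growth of the RMS displacement. This illustrates the principle only; it is not the published algorithm.

```python
# Simplified illustration: long-range correlated (noncoding-like) regions give
# a scaling exponent alpha > 0.5, while uncorrelated (coding-like) regions
# give alpha ~ 0.5.
import math

PURINES = {"A", "G"}

def dna_walk(seq):
    """Cumulative sum of +1 for purines, -1 for pyrimidines."""
    y, walk = 0, [0]
    for base in seq:
        y += 1 if base in PURINES else -1
        walk.append(y)
    return walk

def fluctuation(walk, l):
    """RMS displacement over all windows of length l."""
    diffs = [(walk[i + l] - walk[i]) ** 2 for i in range(len(walk) - l)]
    return math.sqrt(sum(diffs) / len(diffs))

def scaling_exponent(seq, l1=4, l2=16):
    """Estimate alpha from F(l) ~ l**alpha at two window sizes."""
    walk = dna_walk(seq)
    return math.log(fluctuation(walk, l2) / fluctuation(walk, l1)) / math.log(l2 / l1)

# A maximally correlated toy sequence walks straight up, so alpha = 1 exactly.
alpha = scaling_exponent("A" * 200)
```

Sliding such an estimator along a genome and flagging windows with alpha near 0.5 is the simplest version of the coding-region heuristic described above.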
A new way of setting the phases for cosmological multiscale Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Jenkins, Adrian
2013-09-01
We describe how to define an extremely large discrete realization of a Gaussian white noise field that has a hierarchical structure and the property that the value of any part of the field can be computed quickly. Tiny subregions of such a field can be used to set the phase information for Gaussian initial conditions for individual cosmological simulations of structure formation. This approach has several attractive features: (i) the hierarchical structure based on an octree is particularly well suited for generating follow-up resimulation or zoom initial conditions; (ii) the phases are defined for all relevant physical scales in advance so that resimulation initial conditions are, by construction, consistent both with their parent simulation and with each other; (iii) the field can easily be made public by releasing a code to compute it - once public, phase information can be shared or published by specifying a spatial location within the realization. In this paper, we describe the principles behind creating such realizations. We define an example called Panphasia and in a companion paper by Jenkins and Booth (2013) make public a code to compute it. With 50 octree levels Panphasia spans a factor of more than 10^15 in linear scale - a range that significantly exceeds the ratio of the current Hubble radius to the putative cold dark matter free-streaming scale. We show how to modify a code used for making cosmological and resimulation initial conditions so that it can take the phase information from Panphasia and, using this code, we demonstrate that it is possible to make good quality resimulation initial conditions. We define a convention for publishing phase information from Panphasia and publish the initial phases for several of the Virgo Consortium's most recent cosmological simulations including the 303 billion particle MXXL simulation.
Finally, for reference, we give the locations and properties of several dark matter haloes that can be resimulated within these volumes.
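The key property described here, that the value of any cell of the hierarchical field is computable on demand, can be sketched with a hash-based toy. This is an assumed interface, not the real Panphasia code, and for simplicity it returns uniform rather than Gaussian values.

```python
# Toy sketch: a deterministic noise field over an octree whose value in any
# cell is computed from the cell's coordinates alone, so any subregion is
# reproducible without generating the whole field.
import hashlib
import struct

def cell_noise(level, ix, iy, iz, seed=0):
    """Deterministic uniform value in [-1, 1) for octree cell (level, ix, iy, iz)."""
    key = struct.pack(">5q", seed, level, ix, iy, iz)
    digest = hashlib.sha256(key).digest()
    # Map the first 8 bytes of the digest to a uniform value in [0, 1).
    u = struct.unpack(">Q", digest[:8])[0] / 2**64
    return 2.0 * u - 1.0

# The same cell always yields the same value, independent of evaluation order,
# which is what makes zoom/resimulation initial conditions self-consistent.
a = cell_noise(10, 3, 5, 7)
b = cell_noise(10, 3, 5, 7)
```

Publishing the octree location of a simulation volume then suffices to share its phases, as the abstract describes.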
NASA Technical Reports Server (NTRS)
Geng, Steven M.
1987-01-01
A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.
Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang
2017-12-01
A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonous and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM (1, 1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020 and 2025 respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
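The refined Laspeyres decomposition this approach builds on can be sketched as follows, with hypothetical figures. Interaction terms are shared equally across the three factors (a Sun-style complete decomposition, one common refinement), so the three effects sum exactly to the total change.

```python
# Sketch: split the change in industrial water use W = Q * s * u
# (output Q, structure share s, technology/water-intensity u) into three
# additive effects. All figures below are hypothetical.

def laspeyres_effects(x0, x1, y0, y1, z0, z1):
    """Complete decomposition of a change in x*y*z into x-, y-, z-effects."""
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    ex = dx * y0 * z0 + dx * (dy * z0 + dz * y0) / 2 + dx * dy * dz / 3
    ey = dy * x0 * z0 + dy * (dx * z0 + dz * x0) / 2 + dx * dy * dz / 3
    ez = dz * x0 * y0 + dz * (dx * y0 + dy * x0) / 2 + dx * dy * dz / 3
    return ex, ey, ez

# Toy sector: output up, structure share down, water intensity down.
Q0, Q1 = 100.0, 140.0      # industrial output
s0, s1 = 0.30, 0.25        # sector share of output
u0, u1 = 2.0, 1.6          # water use per unit output

output_eff, structure_eff, tech_eff = laspeyres_effects(Q0, Q1, s0, s1, u0, u1)
total_change = Q1 * s1 * u1 - Q0 * s0 * u0
```

Because the decomposition is exact, forecasting each hierarchy separately (as the study proposes) recombines into a consistent total demand forecast.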
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.
Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K
2013-03-01
Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
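As a much-simplified stand-in for HACA (which combines kernel k-means with a dynamic time alignment kernel), the following sketch uses only the dynamic-programming ingredient: splitting a 1-D series into contiguous minimum-variance segments.

```python
# Simplified illustration of temporal segmentation, not HACA itself:
# dynamic programming over cut points to partition a series into m
# contiguous segments with minimal total within-segment squared error.

def segment_cost(x, i, j):
    """Sum of squared deviations of x[i:j] from its mean."""
    seg = x[i:j]
    mu = sum(seg) / len(seg)
    return sum((v - mu) ** 2 for v in seg)

def best_segmentation(x, m):
    """Optimal split of x into m contiguous segments (DP over cut points)."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(m + 1)]
    back = [[0] * (n + 1) for _ in range(m + 1)]
    cost[0][0] = 0.0
    for k in range(1, m + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + segment_cost(x, i, j)
                if c < cost[k][j]:
                    cost[k][j], back[k][j] = c, i
    cuts, j = [], n
    for k in range(m, 0, -1):
        i = back[k][j]
        cuts.append((i, j))
        j = i
    return list(reversed(cuts))

series = [0, 0, 0, 5, 5, 5, 0, 0, 0]
segments = best_segmentation(series, 3)
```

HACA additionally aligns segments of differing lengths and clusters them, which this toy omits.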
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time and with the recent possibility of real time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
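The core weighted Haar step of such a region-adaptive hierarchical transform can be sketched as a single butterfly; this is a simplified illustration, not the full codec.

```python
# Sketch: two neighboring occupied cells with point counts w1, w2 merge into
# a low-pass (DC) coefficient carrying weight w1 + w2 plus one high-pass (AC)
# coefficient; empty cells are simply skipped. The 2x2 butterfly is
# orthonormal, so reconstruction is exact.
import math

def haar_merge(c1, w1, c2, w2):
    """Weight-adaptive orthonormal Haar butterfly on two color values."""
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    dc = a * c1 + b * c2
    ac = -b * c1 + a * c2
    return dc, ac, w1 + w2

def haar_split(dc, ac, w1, w2):
    """Inverse of haar_merge: exact reconstruction of the two colors."""
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    return a * dc - b * ac, b * dc + a * ac

dc, ac, w = haar_merge(100.0, 3, 40.0, 1)   # 3 points vs. 1 point in the pair
c1, c2 = haar_split(dc, ac, 3, 1)
```

Applying this step recursively up the octree concentrates energy in a few DC coefficients, which the arithmetic coder then models with per-sub-band Laplace distributions.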
Architecture of the parallel hierarchical network for fast image recognition
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule
2016-09-01
Multistage integration of visual information in the brain allows humans to respond quickly to most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects complexity of image data. Procedures of the temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates a structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
NASA Technical Reports Server (NTRS)
Russo, Vincent; Johnston, Gary; Campbell, Roy
1988-01-01
The programming of the interrupt handling mechanisms, process switching primitives, scheduling mechanism, and synchronization primitives of an operating system for a multiprocessor require both efficient code in order to support the needs of high-performance or real-time applications and careful organization to facilitate maintenance. Although many advantages have been claimed for object-oriented class hierarchical languages and their corresponding design methodologies, the application of these techniques to the design of the primitives within an operating system has not been widely demonstrated. To investigate the role of class hierarchical design in systems programming, the authors have constructed the Choices multiprocessor operating system architecture using the C++ programming language. During the implementation, it was found that many operating system design concerns can be represented advantageously using a class hierarchical approach, including: the separation of mechanism and policy; the organization of an operating system into layers, each of which represents an abstract machine; and the notions of process and exception management. In this paper, we discuss an implementation of the low-level primitives of this system and outline the strategy by which we developed our solution.
NASA Astrophysics Data System (ADS)
Hamers, Adrian S.
2018-05-01
We extend the formalism of a previous paper to include the effects of flybys and instantaneous perturbations such as supernovae on the long-term secular evolution of hierarchical multiple systems with an arbitrary number of bodies and hierarchy, provided that the system is composed of nested binary orbits. To model secular encounters, we expand the Hamiltonian in terms of the ratio of the separation of the perturber with respect to the barycentre of the multiple system, to the separation of the widest orbit. Subsequently, we integrate over the perturber orbit numerically or analytically. We verify our method for secular encounters and illustrate it with an example. Furthermore, we describe a method to compute instantaneous orbital changes to multiple systems, such as asymmetric supernovae and impulsive encounters. The secular code, with implementation of the extensions described in this paper, is publicly available within AMUSE, and we provide a number of simple example scripts to illustrate its usage for secular and impulsive encounters and asymmetric supernovae. The extensions presented in this paper are a next step towards efficiently modelling the evolution of complex multiple systems embedded in star clusters.
An intercomparison of approaches for improving operational seasonal streamflow forecasts
NASA Astrophysics Data System (ADS)
Mendoza, Pablo A.; Wood, Andrew W.; Clark, Elizabeth; Rothwell, Eric; Clark, Martyn P.; Nijssen, Bart; Brekke, Levi D.; Arnold, Jeffrey R.
2017-07-01
For much of the last century, forecasting centers around the world have offered seasonal streamflow predictions to support water management. Recent work suggests that the two major avenues to advance seasonal predictability are improvements in the estimation of initial hydrologic conditions (IHCs) and the incorporation of climate information. This study investigates the marginal benefits of a variety of methods using IHCs and/or climate information, focusing on seasonal water supply forecasts (WSFs) in five case study watersheds located in the US Pacific Northwest region. We specify two benchmark methods that mimic standard operational approaches - statistical regression against IHCs and model-based ensemble streamflow prediction (ESP) - and then systematically intercompare WSFs across a range of lead times. Additional methods include (i) statistical techniques using climate information either from standard indices or from climate reanalysis variables and (ii) several hybrid/hierarchical approaches harnessing both land surface and climate predictability. In basins where atmospheric teleconnection signals are strong, and when watershed predictability is low, climate information alone provides considerable improvements. For those basins showing weak teleconnections, custom predictors from reanalysis fields were more effective in forecast skill than standard climate indices. ESP predictions tended to have high correlation skill but greater bias compared to other methods, and climate predictors failed to substantially improve these deficiencies within a trace weighting framework. Lower complexity techniques were competitive with more complex methods, and the hierarchical expert regression approach introduced here (hierarchical ensemble streamflow prediction - HESP) provided a robust alternative for skillful and reliable water supply forecasts at all initialization times. 
Three key findings from this effort are (1) objective approaches supporting methodologically consistent hindcasts open the door to a broad range of beneficial forecasting strategies; (2) the use of climate predictors can add to the seasonal forecast skill available from IHCs; and (3) sample size limitations must be handled rigorously to avoid over-trained forecast solutions. Overall, the results suggest that despite a rich, long heritage of operational use, there remain a number of compelling opportunities to improve the skill and value of seasonal streamflow predictions.
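The simpler of the two benchmark methods, statistical regression against IHCs, can be sketched with synthetic data; the predictor, coefficients, and values below are made up for illustration.

```python
# Sketch: regress seasonal runoff volume on an initial-hydrologic-condition
# predictor such as snow water equivalent (SWE). Real water supply forecasts
# would add climate predictors (e.g. an ENSO index) as extra regression terms.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic hindcast years: runoff = 50 + 1.8 * SWE (noise-free, for clarity).
swe = [10.0, 25.0, 40.0, 55.0, 70.0]
runoff = [50 + 1.8 * s for s in swe]
a, b = fit_line(swe, runoff)
forecast = a + b * 33.0            # water supply forecast for a year with SWE = 33
```

The study's hybrid/hierarchical methods combine such regressions with model-based ESP traces rather than replacing them.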
Predictive codes of familiarity and context during the perceptual learning of facial identities
NASA Astrophysics Data System (ADS)
Apps, Matthew A. J.; Tsakiris, Manos
2013-11-01
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
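One plausible reading of the task-guided aggregation idea can be sketched as follows; the records, labels, and grouping rule are hypothetical illustrations, not the published PTGHRA procedure.

```python
# Hypothetical sketch: instead of treating each visit independently
# (unsupervised), aggregate each patient's medical codes and group patients
# by the prediction-task label, so codes that co-occur within a label class
# share contexts for a downstream co-occurrence/skip-gram embedding.
from collections import defaultdict

records = [                      # (patient_id, medical_code), toy data
    ("p1", "E11.9"), ("p1", "I10"), ("p2", "I10"),
    ("p2", "N18.3"), ("p3", "J45.9"),
]
labels = {"p1": 1, "p2": 1, "p3": 0}   # e.g. 1 = develops a complication

def build_task_guided_corpus(records, labels):
    """One 'sentence' of codes per label class, aggregated across patients."""
    per_patient = defaultdict(list)
    for pid, code in records:
        per_patient[pid].append(code)
    corpus = defaultdict(list)
    for pid, codes in per_patient.items():
        corpus[labels[pid]].extend(codes)
    return dict(corpus)

corpus = build_task_guided_corpus(records, labels)
```

The resulting corpus is what a representation learning model would then be trained on, which is where the task guidance enters.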
NASA Astrophysics Data System (ADS)
Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Montes, Matthieu
2018-01-01
The Drug Design Data Resource (D3R) Grand Challenges are blind contests organized to assess the state-of-the-art methods accuracy in predicting binding modes and relative binding free energies of experimentally validated ligands for a given target. The second stage of the D3R Grand Challenge 2 (GC2) was focused on ranking 102 compounds according to their predicted affinity for Farnesoid X Receptor. In this task, our workflow was ranked 5th out of the 77 submissions in the structure-based category. Our strategy consisted in (1) a combination of molecular docking using AutoDock 4.2 and manual edition of available structures for binding poses generation using SeeSAR, (2) the use of HYDE scoring for pose selection, and (3) a hierarchical ranking using HYDE and MM/GBSA. In this report, we detail our pose generation and ligands ranking protocols and provide guidelines to be used in a prospective computer aided drug design program.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem
2017-01-01
Three different multiscale models, based on the method of cells (generalized and high fidelity) micromechanics models were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: Concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.
Schiffrin, David J
2015-01-01
Some four years ago, one of the participants in this Discussion (Prof. Nicholas Kotov) predicted that: "within five years we shall see multiple examples of electronic, sensor, optical and other devices utilizing self-assembled superstructures" (N. A. Kotov, J. Mater. Chem., 2011, 21, 16673-16674). Although this prediction came partially to fruition, we have witnessed an unprecedented interest in the properties of materials at the nanoscale. The point highlighted by Kotov, however, was the importance of self-assembly of structures from well characterised building blocks to yield hierarchical structures, hopefully with predictable properties, a concept that is an everyday pursuit of synthetic chemists. This Discussion has brought together researchers from a wide range of disciplines, i.e., colloid science, modelling, nanoparticle synthesis and organisation, magnetic and optical materials, and new imaging methods, within the excellent traditional Faraday Discussion format, to discuss advances in areas relevant to the main theme of the meeting.
Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.
Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung
2007-05-01
This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed by recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and their recurrent properties make them suitable for processing speech patterns with temporal characteristics. In n words recognition, n SRNFNs are created for modeling n words, where each SRNFN receives the current frame feature and predicts the next one of its modeling word. The prediction error of each SRNFN is used as recognition criterion. In filtering, one SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptron (MLP), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with HSRNFN for noisy speech recognition tasks.
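The recognition criterion described above, picking the word whose model best predicts the next frame, can be sketched as follows. This is a minimal illustration, not the SRNFN architecture itself: the per-word predictors are hypothetical stand-ins for trained recurrent fuzzy networks.

```python
import numpy as np

def recognize(frames, predictors):
    """Pick the word whose one-step-ahead predictor fits the utterance best.

    frames: sequence of feature vectors (one per speech frame).
    predictors: dict mapping word -> callable that predicts the next frame
                from the current one (stand-ins for per-word SRNFNs).
    Returns the word with the smallest accumulated prediction error.
    """
    errors = {}
    for word, predict in predictors.items():
        err = 0.0
        for t in range(len(frames) - 1):
            # squared error between predicted and actual next frame
            err += float(np.sum((predict(frames[t]) - frames[t + 1]) ** 2))
        errors[word] = err
    return min(errors, key=errors.get)
```

In the paper, each per-word network would first receive features filtered by the shared SRNFN noise filter; here the predictors operate on the raw frames for brevity.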
Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.
2009-01-01
The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
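The eigenvalue predictor mentioned above can be illustrated in a few lines. For a two-state transition matrix the eigenvalues are 1 and p00 + p11 - 1; the non-unit eigenvalue summarizes regime persistence and can serve as a scalar covariate. This is a sketch under that two-state assumption only; the GARCH dynamics and the Bayesian regression layer are omitted.

```python
import numpy as np

def second_eigenvalue(p00, p11):
    """Non-unit eigenvalue of the two-state transition matrix
    P = [[p00, 1-p00], [1-p11, p11]].

    Eigenvalues are 1 and p00 + p11 - 1; the latter measures regime
    persistence (near 1: long dwell times; near 0: rapid switching).
    """
    P = np.array([[p00, 1.0 - p00],
                  [1.0 - p11, p11]])
    lams = np.linalg.eigvals(P).real
    # return the eigenvalue farthest from the unit eigenvalue
    return float(lams[np.argmax(np.abs(lams - 1.0))])
```

The closed form p00 + p11 - 1 would suffice here; computing it via `eigvals` makes the connection to the transition matrix explicit.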
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. 
AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
Proceedings of Conference on Variable-Resolution Modeling, Washington, DC, 5-6 May 1992
1992-05-01
of powerful new computer architectures for supporting object-oriented computing. Objects, as self-contained data-code packages with orderly...another entity structure. For example, (copy-entstr e:system 'new-system) creates an entity structure named e:new-system that has the same structure...Parry, S. H. (1984): A Self-Contained Hierarchical Model Construct. In: Systems Analysis and Modeling in Defense (R. K. Huber, Ed.), New York
Formal proof of the AVM-1 microprocessor using the concept of generic interpreters
NASA Technical Reports Server (NTRS)
Windley, P.; Levitt, K.; Cohen, G. C.
1991-01-01
A microprocessor designated AVM-1 was designed to demonstrate the use of generic interpreters in verifying hierarchically decomposed microprocessor specifications. This report is intended to document the high-order language (HOL) code verifying AVM-1. The organization of the proof is discussed and some technical details concerning the execution of the proof scripts in HOL are presented. The proof scripts used to verify AVM-1 are also presented.
Rewrite Systems, Pattern Matching, and Code Generation
1988-06-09
Transformations. Quien a buen arbol se arrima, buena sombra le cobija ("he who shelters under a good tree is covered by good shade") [Old Spanish Saying]. Trees are hierarchical mathematical objects. Their...subtrees of a tree may match one or more rewrite rules. Traditional research in term rewrite systems is concerned with determining if a given system...be simulated by sets of rewrite rules. Non-local conditions are described in an awkward way since the only way to transmit information is indirectly
1986-09-01
implement a computer program as a function of the Function Point Total. As shown in Table 9, the software product (referred to as SPQR) establishes the...language being used. Source code statements are defined in SPQR as consisting of executable statements and data definitions. The factors used to calculate...SPQR is a trademark of Software Productivity Research, Inc. TABLE 9: NUMBER OF COMPUTER PROGRAM SOURCE STATEMENTS PER FUNCTION POINT TOTAL
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
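The block-level decision underlying BRP and BDP can be sketched with a simple running-average background model. This is an illustrative stand-in under assumed thresholds; the paper's actual background modeling from original input frames, and the threshold values used here, are assumptions for the sketch.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model (illustrative stand-in)."""
    return (1.0 - alpha) * bg + alpha * frame

def classify_block(block, bg_block, t_bg=2.0, t_fg=10.0):
    """Classify a block by its mean absolute difference from the
    modeled background.

    In BMAP terms: background blocks would use the modeled background
    as a long-term reference (BRP), while foreground-background-hybrid
    blocks would be predicted in the background difference domain (BDP).
    """
    mad = float(np.mean(np.abs(block - bg_block)))
    if mad < t_bg:
        return "background"   # candidate for background reference prediction
    if mad > t_fg:
        return "foreground"   # conventional inter/intra prediction
    return "hybrid"           # candidate for background difference prediction
```

A real encoder would make this decision per coding block and feed the chosen reference into the standard motion-compensation loop.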
Predicting the Performance of an Axial-Flow Compressor
NASA Technical Reports Server (NTRS)
Steinke, R. J.
1986-01-01
A stage-stacking computer code (STGSTK) was developed for predicting the off-design performance of multistage axial-flow compressors. The code uses a meanline stage-stacking method. Stage and cumulative compressor performance are calculated from representative meanline velocity diagrams located at the rotor inlet and outlet meanline radii. Numerous options are available within the code, which was developed so that users can modify the correlations to suit their needs.
Advanced turboprop noise prediction based on recent theoretical results
NASA Technical Reports Server (NTRS)
Farassat, F.; Padula, S. L.; Dunn, M. H.
1987-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
Hilgetag, C C; O'Neill, M A; Young, M P
2000-01-29
Neuroanatomists have described a large number of connections between the various structures of monkey and cat cortical sensory systems. Because of the complexity of the connection data, analysis is required to unravel what principles of organization they imply. To date, analysis of laminar origin and termination connection data to reveal hierarchical relationships between the cortical areas has been the most widely acknowledged approach. We programmed a network processor that searches for optimal hierarchical orderings of cortical areas given known hierarchical constraints and rules for their interpretation. For all cortical systems and all cost functions, the processor found a multitude of equally low-cost hierarchies. Laminar hierarchical constraints that are presently available in the anatomical literature were therefore insufficient to constrain a unique ordering for any of the sensory systems we analysed. Hierarchical orderings of the monkey visual system that have been widely reported, but which were derived by hand, were not among the optimal orderings. All the cortical systems we studied displayed a significant degree of hierarchical organization, and the anatomical constraints from the monkey visual and somato-motor systems were satisfied with very few constraint violations in the optimal hierarchies. The visual and somato-motor systems in that animal were therefore surprisingly strictly hierarchical. Most inconsistencies between the constraints and the hierarchical relationships in the optimal structures for the visual system were related to connections of area FST (fundus of superior temporal sulcus). We found that the hierarchical solutions could be further improved by assuming that FST consists of two areas, which differ in the nature of their projections. 
Indeed, we found that perfect hierarchical arrangements of the primate visual system, without any violation of anatomical constraints, could be obtained under two reasonable conditions, namely the subdivision of FST into two distinct areas, whose connectivity we predict, and the abolition of at least one of the less reliable rule constraints. Our analyses showed that the future collection of the same type of laminar constraints, or the inclusion of new hierarchical constraints from thalamocortical connections, will not resolve the problem of multiple optimal hierarchical representations for the primate visual system. Further data, however, may help to specify the relative ordering of some more areas. This indeterminacy of the visual hierarchy is in part due to the reported absence of some connections between cortical areas. These absences are consistent with limited cross-talk between differentiated processing streams in the system. Hence, hierarchical representation of the visual system is affected by, and must take into account, other organizational features, such as processing streams.
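The combinatorial search described above, minimizing laminar-constraint violations over candidate orderings of cortical areas, can be sketched by brute force for small systems. This is an illustration of why many equally low-cost hierarchies can coexist, not the authors' optimization processor; the area names and constraints below are a toy example.

```python
from itertools import permutations

def violations(order, constraints):
    """Count violated constraints for one candidate hierarchy.

    order: sequence of areas from lowest to highest level.
    constraints: pairs (lo, hi) asserting that `lo` sits below `hi`.
    """
    rank = {area: i for i, area in enumerate(order)}
    return sum(rank[lo] >= rank[hi] for lo, hi in constraints)

def optimal_hierarchies(areas, constraints):
    """Exhaustively find all minimum-cost orderings (feasible only for
    small systems; the paper used a stochastic network processor)."""
    best, opts = None, []
    for perm in permutations(areas):
        cost = violations(perm, constraints)
        if best is None or cost < best:
            best, opts = cost, [perm]
        elif cost == best:
            opts.append(perm)
    return best, opts
```

Even a single unconstrained area multiplies the number of zero-cost hierarchies, mirroring the paper's finding that available constraints do not pin down a unique ordering.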
Aerial surveillance based on hierarchical object classification for ground target detection
NASA Astrophysics Data System (ADS)
Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo
2015-03-01
Unmanned aerial vehicles have become important in surveillance applications owing to their flexibility and ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; in particular, camera sensors are now integrated. Mounted cameras provide the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities. The coding space is a high-dimensional binary space. Properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features that correspond to the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under several conditions.
ERIC Educational Resources Information Center
Brotheridge, Celeste M.; Power, Jacqueline L.
2008-01-01
Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…
The goal of chemical toxicology research is utilizing short term bioassays and/or robust computational methods to predict in vivo toxicity endpoints for chemicals. The ToxCast program established at the US Environmental Protection Agency (EPA) is addressing this goal by using ca....
ERIC Educational Resources Information Center
Kieffer, Kevin M.; Schinka, John A.; Curtiss, Glenn
2004-01-01
This study examined the contributions of the 5-Factor Model (FFM; P. T. Costa & R. R. McCrae, 1992) and RIASEC (J. L. Holland, 1994) constructs of consistency, differentiation, and person-environment congruence in predicting job performance ratings in a large sample (N = 514) of employees. Hierarchical regression analyses conducted separately by…
ERIC Educational Resources Information Center
Darling, Nancy; Cumsille, Patricio; Loreto Martinez, M.
2007-01-01
Adolescents' agreement with parental standards and beliefs about the legitimacy of parental authority and their own obligation to obey were used to predict adolescents' obedience, controlling for parental monitoring, rules, and rule enforcement. Hierarchical linear models were used to predict both between-adolescent and within-adolescent,…
ERIC Educational Resources Information Center
Ilhan, Tahsin
2012-01-01
This study examined the predictive power of sex roles and attachment styles on loneliness. A total of 188 undergraduate students (114 female, and 74 male) from Gazi University completed the Bem Sex Role Inventory, UCLA Loneliness Scale, and Relationship Scales Questionnaire. Hierarchic Multiple Regression analysis and t-test were used to test…
ERIC Educational Resources Information Center
Edens, John F.; Ruiz, Mark A.
2006-01-01
This study examined the effects of defensive responding on the prediction of institutional misconduct among male inmates (N = 349) who completed the Personality Assessment Inventory (L. C. Morey, 1991). Hierarchical logistic regression analyses demonstrated significant main effects for the Antisocial Features (ANT) scale as well as main effects…
Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2
NASA Technical Reports Server (NTRS)
Zieren, Jorg; Kraiss, Karl-Friedrich
2007-01-01
This work presents a hierarchically structured approach to the nonintrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows handling of ambiguities and automatically corrects tracking errors. A biomechanical skeleton model and dynamic motion prediction using Kalman filters represent high-level knowledge. Classification is performed by Hidden Markov Models. 152 signs from German sign language were recognized with an accuracy of 97.6%.
Comparison of Space Shuttle Hot Gas Manifold analysis to air flow data
NASA Technical Reports Server (NTRS)
Mcconnaughey, P. K.
1988-01-01
This paper summarizes several recent analyses of the Space Shuttle Main Engine Hot Gas Manifold and compares predicted flow environments to air flow data. Codes used in these analyses include INS3D, PAGE, PHOENICS, and VAST. Both laminar (Re = 250, M = 0.30) and turbulent (Re = 1.9 million, M = 0.30) results are discussed, with the latter being compared to data for system losses, outer wall static pressures, and manifold exit Mach number profiles. Comparison of predicted results for the turbulent case to air flow data shows that the analysis using INS3D predicted system losses within 1 percent error, while the PHOENICS, PAGE, and VAST codes erred by 31, 35, and 47 percent, respectively. The INS3D, PHOENICS, and PAGE codes did a reasonable job of predicting outer wall static pressure, while the PHOENICS code predicted exit Mach number profiles with acceptable accuracy. INS3D was approximately an order of magnitude more efficient than the other codes in terms of code speed and memory requirements. In general, it is seen that complex internal flows in manifold-like geometries can be predicted with a limited degree of confidence, and further development is necessary to improve both efficiency and accuracy of codes if they are to be used as design tools for complex three-dimensional geometries.
Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading
NASA Astrophysics Data System (ADS)
Schaefer, Joseph Daniel
Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the quickest growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been fraught with inadequate knowledge of the material failure behavior due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed creation and development. Noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state of the art qualification programs endure a 20 year gap between material conceptualization and implementation due to the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress from quasi-static to 1000 s-1 strain rates. 
It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate dependence was applicable to a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loading cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.
Visual feature extraction from voxel-weighted averaging of stimulus images in 2 fMRI studies.
Hart, Corey B; Rose, William J
2013-11-01
Multiple studies have provided evidence for distributed object representation in the brain, with several recent experiments leveraging basis function estimates for partial image reconstruction from fMRI data. Using a novel combination of statistical decomposition, generalized linear models, and stimulus averaging on previously examined image sets and Bayesian regression of recorded fMRI activity during presentation of these data sets, we identify a subset of relevant voxels that appear to code for covarying object features. Using a technique we term "voxel-weighted averaging," we isolate image filters that these voxels appear to implement. The results, though very cursory, appear to have significant implications for hierarchical and deep-learning-type approaches toward the understanding of neural coding and representation.
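The "voxel-weighted averaging" idea, estimating the image filter a voxel implements by averaging the stimulus images weighted by that voxel's responses, can be sketched as follows. The array shapes and the normalization are assumptions for illustration, not the authors' exact pipeline (which also involved statistical decomposition and Bayesian regression).

```python
import numpy as np

def voxel_weighted_average(images, voxel_responses):
    """Estimate a voxel's implied image filter.

    images: array of shape (n_stimuli, H, W), the presented stimuli.
    voxel_responses: array of shape (n_stimuli,), e.g. regression
        weights or fMRI response amplitudes for one voxel.
    Returns an (H, W) image: the response-weighted average stimulus.
    """
    w = voxel_responses / np.sum(np.abs(voxel_responses))
    # contract the stimulus axis: sum_i w[i] * images[i]
    return np.tensordot(w, images, axes=1)
```

With uniform responses this reduces to the plain mean stimulus; structure in the weighted average beyond that mean is what suggests a feature-selective filter.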
Light transport feature for SCINFUL.
Etaati, G R; Ghal-Eh, N
2008-03-01
An extended version of the scintillator response function prediction code SCINFUL has been developed by incorporating PHOTRACK, a Monte Carlo light transport code. Comparisons of calculated and experimental results for organic scintillators exposed to neutrons show that the extended code improves the predictive capability of SCINFUL.
Choi, Jeungok; Jenkins, Melinda L.; Cimino, James J.; White, Thomas M.; Bakken, Suzanne
2005-01-01
Objective: The authors aimed to (1) formally represent OASIS-B1 concepts using the Logical Observation Identifiers, Names, and Codes (LOINC) semantic structure; (2) demonstrate integration of OASIS-B1 concepts into a concept-oriented terminology, the Medical Entities Dictionary (MED); (3) examine potential hierarchical structures within LOINC among OASIS-B1 and other nursing terms; and (4) illustrate a Web-based implementation for OASIS-B1 data entry using Dialogix, a software tool with a set of functions that supports complex data entry. Design and Measurements: Two hundred nine OASIS-B1 items were dissected into the six elements of the LOINC semantic structure and then integrated into the MED hierarchy. Each OASIS-B1 term was matched to LOINC-coded nursing terms, Home Health Care Classification, the Omaha System, and the Sign and Symptom Check-List for Persons with HIV, and the extent of the match was judged based on a scale of 0 (no match) to 4 (exact match). OASIS-B1 terms were implemented as a Web-based survey using Dialogix. Results: Of 209 terms, 204 were successfully dissected into the elements of the LOINC semantics structure and integrated into the MED with minor revisions of MED semantics. One hundred fifty-one OASIS-B1 terms were mapped to one or more of the LOINC-coded nursing terms. Conclusion: The LOINC semantic structure offers a standard way to add home health care data to a comprehensive patient record to facilitate data sharing for monitoring outcomes across sites and to further terminology management, decision support, and accurate information retrieval for evidence-based practice. The cross-mapping results support the possibility of a hierarchical structure of the OASIS-B1 concepts within nursing terminologies in the LOINC database. PMID:15802480
Organizational culture of a private hospital.
Vegro, Thamiris Cavazzani; Rocha, Fernanda Ludmilla Rossi; Camelo, Silvia Helena Henriques; Garcia, Alessandra Bassalobre
2016-06-01
Objective: To assess the values and practices that characterize the organizational culture of a private hospital in the state of São Paulo from the perspective of nursing professionals. Methods: Quantitative, descriptive, cross-sectional study. Data collection was conducted between January and March 2013 using the Brazilian Instrument for Assessing Organizational Culture. Twenty-one nurses and sixty-two nursing aides and technicians participated in the study. The responses of the participants were coded into numerical categories, generating an electronic database that was analyzed with the Statistical Package for the Social Sciences. Results: Scores of cooperative professionalism values (3.24); hierarchical strictness values (2.83); individual professionalism values (2.69); well-being values (2.71); external integration practices (3.73); reward and training practices (2.56); and relationship promotion practices (2.83). Conclusion: In the perception of the workers, despite the existence of hierarchical strictness, there is cooperation at work and the institution pursues customer satisfaction and good interpersonal relationships.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
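The prediction-error signal described above can be sketched as a simple delta rule: the error is the received reward minus the predicted reward, and the prediction is nudged toward the reward by a learning rate. The numbers below are illustrative, not from the paper.

```python
# Rescorla-Wagner-style sketch of a reward prediction error (RPE):
# delta > 0 when reward exceeds prediction, delta < 0 when it falls short,
# and delta -> 0 once the reward is fully predicted (baseline activity).

def rpe_update(prediction, reward, lr=0.1):
    """Return (prediction error, updated prediction)."""
    delta = reward - prediction
    return delta, prediction + lr * delta

prediction = 0.0
history = []
for _ in range(200):                 # repeatedly deliver a reward of 1.0
    delta, prediction = rpe_update(prediction, 1.0)
    history.append(delta)

# Early errors are large and positive; once learned, they decay to ~0.
print(history[0], history[-1])
```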
Trait-based diversification shifts reflect differential extinction among fossil taxa.
Wagner, Peter J; Estabrook, George F
2014-11-18
Evolution provides many cases of apparent shifts in diversification associated with particular anatomical traits. Three general models connect these patterns to anatomical evolution: (i) elevated net extinction of taxa bearing particular traits, (ii) elevated net speciation of taxa bearing particular traits, and (iii) elevated evolvability expanding the range of anatomies available to some species. Trait-based diversification shifts predict elevated hierarchical stratigraphic compatibility (i.e., primitive→derived→highly derived sequences) among pairs of anatomical characters. The three specific models further predict (i) early loss of diversity for taxa retaining primitive conditions (elevated net extinction), (ii) increased diversification among later members of a clade (elevated net speciation), and (iii) increased disparity among later members in a clade (elevated evolvability). Analyses of 319 anatomical and stratigraphic datasets for fossil species and genera show that hierarchical stratigraphic compatibility exceeds the expectations of trait-independent diversification in the vast majority of cases, as expected if trait-based diversification shifts are common. Excess hierarchical stratigraphic compatibility correlates with early loss of diversity for groups retaining primitive conditions rather than delayed bursts of diversity or disparity across entire clades. Cambrian clades (predominantly trilobites) alone fit null expectations well. However, it is not clear whether evolution was unusual among Cambrian taxa or only early trilobites. At least among post-Cambrian taxa, these results implicate models, such as competition and extinction selectivity/resistance, as major drivers of trait-based diversification shifts at the species and genus levels while contradicting the predictions of elevated net speciation and elevated evolvability models.
Birnie, Kathryn A; Chambers, Christine T; Chorney, Jill; Fernandez, Conrad V; McGrath, Patrick J
2016-04-01
When explored separately, child and parent catastrophic thoughts about child pain show robust negative relations with child pain. The objective of this study was to conduct a dyadic analysis to elucidate intrapersonal and interpersonal influences of child and parent pain catastrophizing on aspects of pain communication, including observed behaviours and perceptions of child pain. A community sample of 171 dyads including children aged 8 to 12 years (89 girls) and parents (135 mothers) rated pain catastrophizing (trait and state versions) and child pain intensity and unpleasantness following a cold pressor task. Child pain tolerance was also assessed. Parent-child interactions during the cold pressor task were coded for parent attending, nonattending, and other talk, and child symptom complaints and other talk. Data were analyzed using the actor-partner interdependence model and hierarchical multiple regressions. Children reporting higher state pain catastrophizing had greater symptom complaints regardless of level of parent state pain catastrophizing. Children reporting low state pain catastrophizing had similar high levels of symptom complaints, but only when parents reported high state pain catastrophizing. Higher child and parent state and/or trait pain catastrophizing predicted their own ratings of higher child pain intensity and unpleasantness, with child state pain catastrophizing additionally predicting parent ratings. Higher pain tolerance was predicted by older child age and lower child state pain catastrophizing. These newly identified interpersonal effects highlight the relevance of the social context to children's pain expressions and parent perceptions of child pain. Both child and parent pain catastrophizing warrant consideration when managing child pain.
Schnek: A C++ library for the development of parallel simulation codes on regular grids
NASA Astrophysics Data System (ADS)
Schmitz, Holger
2018-05-01
A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
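The ghost-cell exchange mentioned above can be sketched in a few lines. This is not Schnek's C++ API: it is a language-agnostic illustration, with two subdomains held in one process standing in for the neighbouring MPI ranks that Schnek would actually synchronize.

```python
import numpy as np

# A 1-D domain of 8 cells split into two 4-cell subdomains, each padded
# with one ghost cell per side. The "exchange" copies each neighbour's
# edge value into the other's ghost cell, mimicking an MPI halo exchange.

def exchange_ghost_cells(left, right):
    left[-1] = right[1]    # left's right ghost <- right's first interior cell
    right[0] = left[-2]    # right's left ghost <- left's last interior cell

field = np.arange(8, dtype=float)
left = np.empty(6);  left[1:5] = field[:4]    # interior cells 0..3
right = np.empty(6); right[1:5] = field[4:]   # interior cells 4..7
left[0] = 0.0; right[-1] = 0.0                # physical boundary ghosts

exchange_ghost_cells(left, right)
print(left[-1], right[0])   # each subdomain now sees its neighbour's edge
```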
Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance
Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.
2010-01-01
Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
Characterizing the Properties of a Woven SiC/SiC Composite Using W-CEMCAN Computer Code
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.
1999-01-01
A micromechanics based computer code to predict the thermal and mechanical properties of woven ceramic matrix composites (CMC) is developed. This computer code, W-CEMCAN (Woven CEramic Matrix Composites ANalyzer), predicts the properties of two-dimensional woven CMC at any temperature and takes into account various constituent geometries and volume fractions. This computer code is used to predict the thermal and mechanical properties of an advanced CMC composed of 0/90 five-harness (5 HS) Sylramic fiber which had been chemically vapor infiltrated (CVI) with boron nitride (BN) and SiC interphase coatings and melt-infiltrated (MI) with SiC. The predictions, based on the bulk constituent properties from the literature, are compared with measured experimental data. Based on the comparison, improved or calibrated properties for the constituent materials are then developed for use by material developers/designers. The computer code is then used to predict the properties of a composite with the same constituents but with different fiber volume fractions. The predictions are compared with measured data and a good agreement is achieved.
Predicting individual brain functional connectivity using a Bayesian hierarchical model.
Dai, Tian; Guo, Ying
2017-02-15
Network-oriented analysis of functional magnetic resonance imaging (fMRI), especially resting-state fMRI, has revealed important association between abnormal connectivity and brain disorders such as schizophrenia, major depression and Alzheimer's disease. Imaging-based brain connectivity measures have become a useful tool for investigating the pathophysiology, progression and treatment response of psychiatric disorders and neurodegenerative diseases. Recent studies have started to explore the possibility of using functional neuroimaging to help predict disease progression and guide treatment selection for individual patients. These studies provide the impetus to develop statistical methodology that would help provide predictive information on disease progression-related or treatment-related changes in neural connectivity. To this end, we propose a prediction method based on Bayesian hierarchical model that uses individual's baseline fMRI scans, coupled with relevant subject characteristics, to predict the individual's future functional connectivity. A key advantage of the proposed method is that it can improve the accuracy of individualized prediction of connectivity by combining information from both group-level connectivity patterns that are common to subjects with similar characteristics as well as individual-level connectivity features that are particular to the specific subject. Furthermore, our method also offers statistical inference tools such as predictive intervals that help quantify the uncertainty or variability of the predicted outcomes. The proposed prediction method could be a useful approach to predict the changes in individual patient's brain connectivity with the progression of a disease. It can also be used to predict a patient's post-treatment brain connectivity after a specified treatment regimen. 
Another utility of the proposed method is that it can be applied to test-retest imaging data to develop a more reliable estimator for individual functional connectivity. We show there exists a close connection between our proposed estimator and a recently developed shrinkage estimator of connectivity measures in the neuroimaging community. We develop an expectation-maximization (EM) algorithm for estimation of the proposed Bayesian hierarchical model. Simulation studies are performed to evaluate the accuracy of our proposed prediction methods. We illustrate the application of the methods with two data examples: the longitudinal resting-state fMRI from ADNI2 study and the test-retest fMRI data from Kirby21 study. In both the simulation studies and the fMRI data applications, we demonstrate that the proposed methods provide more accurate prediction and more reliable estimation of individual functional connectivity as compared with alternative methods. Copyright © 2017 Elsevier Inc. All rights reserved.
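The shrinkage idea referenced above can be sketched very simply (this is not the paper's exact model, and the variances below are assumed): an individual's noisy connectivity estimate is pulled toward the group mean, with a weight determined by the between-subject and within-subject (scan noise) variances.

```python
# Shrinkage of an individual connectivity estimate toward the group mean.
# w close to 1 when measurements are reliable (low scan noise), close to 0
# when they are noisy, so noisier estimates borrow more from the group.

def shrink(individual_est, group_mean, var_between, var_within):
    w = var_between / (var_between + var_within)   # reliability weight in [0, 1]
    return w * individual_est + (1.0 - w) * group_mean

group_mean = 0.30        # group-level connectivity between two regions
individual_est = 0.80    # one subject's noisy single-scan estimate
est = shrink(individual_est, group_mean, var_between=0.04, var_within=0.04)
print(est)               # equal variances -> halfway between the two
```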
Mahajan, Reena; Moorman, Anne C; Liu, Stephen J; Rupp, Loralee; Klevens, R Monina
2013-05-01
With increasing use of electronic health records (EHR) in the USA, we examined the predictive values of the International Classification of Diseases, 9th revision (ICD-9) coding system for surveillance of chronic hepatitis B virus (HBV) infection. The chronic HBV cohort from the Chronic Hepatitis Cohort Study was created from the EHRs of adult patients who accessed services from 2006 to 2008 in four healthcare systems in the USA. Using the gold standard of abstractor review to confirm HBV cases, we calculated the sensitivity, specificity, positive and negative predictive values using one qualifying ICD-9 code versus using two qualifying ICD-9 codes separated by 6 months or greater. Of 1 652 055 adult patients, 2202 (0.1%) were confirmed as having chronic HBV. Use of one ICD-9 code had a sensitivity of 83.9%, positive predictive value of 61.0%, and specificity and negative predictive values greater than 99%. Use of two hepatitis B-specific ICD-9 codes resulted in a sensitivity of 58.4% and a positive predictive value of 89.9%. Use of one or two hepatitis B ICD-9 codes can identify cases with chronic HBV infection with varying sensitivity and positive predictive values. As the USA increases the use of EHR, surveillance using ICD-9 codes may be reliable to determine the burden of chronic HBV infection and would be useful to improve reporting by state and local health departments.
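The surveillance metrics used above are all ratios of cells in a 2x2 confusion table. A quick sketch with hypothetical counts (these are not the study's numbers):

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion table:
# tp = flagged true cases, fp = flagged non-cases,
# fn = missed true cases, tn = correctly unflagged non-cases.

def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true cases that were flagged
        "specificity": tn / (tn + fp),   # non-cases correctly not flagged
        "ppv": tp / (tp + fp),           # flagged records that are true cases
        "npv": tn / (tn + fn),
    }

m = diagnostics(tp=840, fp=560, fn=160, tn=98440)
print(m)
```

Note how a rare condition drives the pattern seen in the study: even with high sensitivity and specificity, the PPV stays modest because false positives are drawn from the huge pool of non-cases.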
Clinical code set engineering for reusing EHR data for research: A review.
Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels
2017-06-01
The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets which needs to be addressed to ensure the reliability and credibility of studies which use code sets. To review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management was reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Olier, Ivan; Springate, David A.; Ashcroft, Darren M.; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
Background The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness as an example. Methods We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. Results We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. Conclusion We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists. PMID:26918439
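The stub-search step described above can be sketched as a filter over a code dictionary. This is only an illustration of the idea, not the pcdsearch/Rpcdsearch commands themselves, and the code entries below are hypothetical.

```python
# Match clinician-supplied word-stubs (against code descriptions) and
# code-stubs (against code prefixes, exploiting the hierarchy of Read-like
# codes) to build a candidate code-list for review.

code_dictionary = {                 # hypothetical Read-code-like entries
    "E11..": "schizophrenic disorders",
    "E110.": "simple schizophrenia",
    "E13..": "paranoid states",
    "F00..": "dementia",
}

def stub_search(codes, word_stubs=(), code_stubs=()):
    hits = {}
    for code, term in codes.items():
        if any(stub.lower() in term.lower() for stub in word_stubs) or \
           any(code.startswith(stub) for stub in code_stubs):
            hits[code] = term
    return hits

hits = stub_search(code_dictionary, word_stubs=["schizo"], code_stubs=["E13"])
print(sorted(hits))
```

The returned candidates would then be reviewed by clinicians, with irrelevant codes removed before the final code-list is used to identify patients.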
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
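Late data integration as described above can be sketched minimally: each source yields its own per-record probability that a code applies, and a meta-level rule combines them. The paper's meta-learner is more sophisticated; here it is reduced to accuracy-weighted averaging on held-out data, with fully synthetic inputs.

```python
import numpy as np

# Two "sources" (structured data and text) each emit a probability that an
# ICD-9-CM code applies; a simple meta-rule weights them by held-out accuracy.
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=200)                    # held-out labels
p_structured = np.clip(y_val + rng.normal(0, 0.45, 200), 0, 1)  # weaker source
p_text = np.clip(y_val + rng.normal(0, 0.25, 200), 0, 1)        # stronger source

def source_weight(p, y):
    return ((p > 0.5) == y).mean()                      # validation accuracy

w = np.array([source_weight(p_structured, y_val),
              source_weight(p_text, y_val)])
w = w / w.sum()                                         # normalized weights

combined = w[0] * p_structured + w[1] * p_text          # late integration
acc = ((combined > 0.5) == y_val).mean()
print(w, acc)
```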
Wong, Ling L C; Villafranca, Pedro M Baiz; Menner, Angelika; Bismarck, Alexander
2013-05-21
In building construction, structural elements, such as lattice girders, are positioned specifically to support the mainframe of a building. This arrangement provides additional structural hierarchy, facilitating the transfer of load to its foundation while keeping the building weight down. We applied the same concept when synthesizing hierarchical open-celled macroporous polymers from high internal phase emulsion (HIPE) templates stabilized by varying concentrations of a polymeric non-ionic surfactant from 0.75 to 20 w/vol %. These hierarchical poly(merized)HIPEs have multimodally distributed pores, which are efficiently arranged to enhance the load transfer mechanism in the polymer foam. As a result, hierarchical polyHIPEs produced from HIPEs stabilized by 5 vol % surfactant showed a 93% improvement in Young's moduli compared to conventional polyHIPEs produced from HIPEs stabilized by 20 vol % of surfactant with the same porosity of 84%. The finite element method (FEM) was used to determine the effect of pore hierarchy on the mechanical performance of porous polymers under small periodic compressions. Results from the FEM showed a clear improvement in Young's moduli for simulated hierarchical porous geometries. This methodology could be further adapted as a predictive tool to determine the influence of hierarchy on the mechanical properties of a range of porous materials.
Huang, Chuen-Der; Lin, Chin-Teng; Pal, Nikhil Ranjan
2003-12-01
The structure classification of proteins plays a very important role in bioinformatics, since the relationships and characteristics among those known proteins can be exploited to predict the structure of new proteins. The success of a classification system depends heavily on two things: the tools being used and the features considered. For the bioinformatics applications, the role of appropriate features has not been paid adequate importance. In this investigation we use three novel ideas for multiclass protein fold classification. First, we use the gating neural network, where each input node is associated with a gate. This network can select important features in an online manner when the learning goes on. At the beginning of the training, all gates are almost closed, i.e., no feature is allowed to enter the network. Through the training, gates corresponding to good features are completely opened while gates corresponding to bad features are closed more tightly, and some gates may be partially open. The second novel idea is to use a hierarchical learning architecture (HLA). The classifier in the first level of HLA classifies the protein features into four major classes: all alpha, all beta, alpha + beta, and alpha/beta. And in the next level we have another set of classifiers, which further classifies the protein features into 27 folds. The third novel idea is to induce the indirect coding features from the amino-acid composition sequence of proteins based on the N-gram concept. This provides us with more representative and discriminative new local features of protein sequences for multiclass protein fold classification. The proposed HLA with new indirect coding features increases the protein fold classification accuracy by about 12%. Moreover, the gating neural network is found to reduce the number of features drastically. 
Using only half of the original features, as selected by the gating neural network, achieves test accuracy comparable to that obtained using all of the original features. The gating mechanism also helps us gain better insight into the folding process of proteins. For example, by tracking the evolution of different gates we can find which characteristics (features) of the data are more important for the folding process. And, of course, it also reduces the computation time.
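The input-gating mechanism described above can be reduced to a toy sketch (this is not the paper's network): each feature x_i is multiplied by a learnable gate sigmoid(g_i) before entering the model, all gates start almost closed, and training opens only the gate of the informative feature.

```python
import numpy as np

# Toy gated linear model: y_hat = sum_i sigmoid(g_i) * x_i.
# Feature 0 carries the signal, feature 1 is pure noise, so gradient
# descent opens gate 0 and keeps gate 1 shut.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(256, 2))
y = X[:, 0]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

g = np.full(2, -2.0)                  # gate logits: both gates nearly closed
lr = 0.2
for _ in range(2000):                 # gradient descent on mean squared error
    s = sigmoid(g)
    err = (X * s).sum(axis=1) - y
    grad_g = 2 * (X.T @ err) / len(y) * s * (1 - s)
    g -= lr * grad_g

gates = sigmoid(g)
print(gates)                          # gate 0 open, gate 1 still closed
```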
Low-Density Parity-Check (LDPC) Codes Constructed from Protographs
NASA Astrophysics Data System (ADS)
Thorpe, J.
2003-08-01
We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
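The copy-and-permute construction behind protograph codes can be sketched directly: each edge of the small base matrix is replaced by an N x N permutation matrix, yielding a larger parity-check matrix that inherits the protograph's degree profile. The base matrix and lift size below are illustrative choices, and this simplified 0/1 base matrix ignores the parallel edges a general protograph may have.

```python
import numpy as np

# Lift a protograph base matrix B into a full LDPC parity-check matrix H
# by replacing each edge with a random N x N permutation matrix.
rng = np.random.default_rng(7)
B = np.array([[1, 1, 1, 0],           # rows = check nodes,
              [0, 1, 1, 1]])          # columns = variable nodes
N = 8                                 # lift (expansion) factor

def lift(base, n):
    rows, cols = base.shape
    H = np.zeros((rows * n, cols * n), dtype=int)
    eye = np.eye(n, dtype=int)
    for r in range(rows):
        for c in range(cols):
            if base[r, c]:
                H[r*n:(r+1)*n, c*n:(c+1)*n] = eye[rng.permutation(n)]
    return H

H = lift(B, N)
print(H.shape)   # row/column weights of H match the protograph degrees
```

Because the lifted code's local structure is that of the protograph, density evolution run on the protograph predicts the behaviour of the whole lifted family.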
Graphs for information security control in software defined networks
NASA Astrophysics Data System (ADS)
Grusho, Alexander A.; Abaev, Pavel O.; Shorgin, Sergey Ya.; Timonina, Elena E.
2017-07-01
Information security control in software-defined networks (SDN) involves enforcing security policy rules that regulate information access and protect against the distribution of malicious code and other harmful influences. The paper proposes representing a security policy as a hierarchical structure which, when resources are allocated to tasks, defines graphs of admissible interactions in a network. These graphs define the commutation tables of switches via the SDN controller.
NASA Astrophysics Data System (ADS)
Javad Azarhoosh, Mohammad; Halladj, Rouein; Askari, Sima
2017-10-01
In this study, a new kinetic model for methanol to light olefins (MTO) reactions over a hierarchical SAPO-34 catalyst using the Langmuir-Hinshelwood-Hougen-Watson (LHHW) mechanism was presented and the kinetic parameters were obtained using a genetic algorithm (GA) and genetic programming (GP). Several kinetic models for the MTO reactions have been presented. However, due to the complexity of the reactions, most models treat the reactions as lumped and elementary, which cannot be deemed a completely accurate kinetic model of the process. Therefore, in this study, the LHHW mechanism is presented as the kinetic model of MTO reactions. Because of the non-linearity of the kinetic models and the existence of many local optima, evolutionary algorithms (GA and GP) are used in this study to estimate the kinetic parameters in the rate equations. By coupling the reactor-modelling code with the GA and GP codes in the MATLAB R2013a software, the kinetic model parameters were optimized so as to minimize the difference between the kinetic model results and the experimental results, yielding the best kinetic parameters for the MTO process reactions. A comparison of the model results with experimental results showed that the present model possesses good accuracy.
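The GA-based parameter estimation described above can be sketched in miniature (this is not the paper's code): fit the two parameters of a toy LHHW-type rate law, r = k·C / (1 + K·C), by minimizing the squared error against synthetic "experimental" rates. The true values, bounds, and GA settings are all assumptions.

```python
import numpy as np

# Minimal GA with elitism, blend crossover, and Gaussian mutation,
# minimizing the sum of squared errors of a toy LHHW rate law.
rng = np.random.default_rng(3)
C = np.linspace(0.1, 2.0, 20)                    # concentrations
r_obs = 1.5 * C / (1 + 0.8 * C)                  # data from k=1.5, K=0.8

def sse(params):
    k, K = params
    return np.sum((k * C / (1 + K * C) - r_obs) ** 2)

pop = rng.uniform(0.0, 3.0, size=(40, 2))        # (k, K) candidates in [0, 3]^2
initial_best = min(sse(p) for p in pop)

for _ in range(100):
    fitness = np.array([sse(p) for p in pop])
    elite = pop[np.argsort(fitness)[:10]]        # elitism: keep the 10 best
    parents = elite[rng.integers(0, 10, size=(30, 2))]
    alpha = rng.random((30, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
    children += rng.normal(0, 0.05, children.shape)                 # mutation
    pop = np.vstack([elite, np.clip(children, 0.0, 3.0)])

best = pop[np.argmin([sse(p) for p in pop])]
print(best, sse(best))
```

Elitism guarantees the best fitness never degrades between generations, which is why evolutionary methods are attractive for the multimodal objective surfaces such non-linear rate laws produce.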
Iris Image Classification Based on Hierarchical Visual Codebook.
Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang
2014-06-01
Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.
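The bag-of-words coding step that VT and LLC both refine can be sketched in its barest form (the actual HVC coding is far more elaborate): assign each local texture descriptor to its nearest codeword and histogram the assignments. The codebook and descriptors below are toy values.

```python
import numpy as np

# Hard vector quantization: nearest-codeword assignment + histogram.
codebook = np.array([[0.0, 0.0],
                     [1.0, 1.0],
                     [0.0, 1.0]])              # toy "visual words"

def encode(descriptors, codebook):
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = d.argmin(axis=1)             # nearest codeword per descriptor
    return np.bincount(assignments, minlength=len(codebook))

descriptors = np.array([[0.1, 0.0], [0.9, 1.1], [0.2, 0.1], [0.1, 0.9]])
hist = encode(descriptors, codebook)
print(hist)
```

VT replaces the flat nearest-neighbour search with a hierarchical (tree) search, while LLC replaces the hard assignment with a sparse locality-constrained linear combination of nearby codewords.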
A neural model of hierarchical reinforcement learning
Rasmussen, Daniel; Eliasmith, Chris
2017-01-01
We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain’s general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model’s behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions. PMID:28683111
ERIC Educational Resources Information Center
Blickle, Gerhard; Witzki, Alexander H.; Schneider, Paula B.
2009-01-01
Career success of early employees was analyzed from a power perspective and a developmental network perspective. In a predictive field study with 112 employees mentoring support and mentors' power were assessed in the first wave, employees' networking was assessed after two years, and career success (i.e. income and hierarchical position) and…
ERIC Educational Resources Information Center
Wendt, Jillian L.; Nisbet, Deanna L.
2017-01-01
This study examined the predictive relationship among international students' sense of community, perceived learning, and end-of-course grades in computer-mediated, U.S. graduate-level courses. The community of inquiry (CoI) framework served as the theoretical foundation for the study. Step-wise hierarchical multiple regression showed no…
High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual
NASA Technical Reports Server (NTRS)
Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.
2004-01-01
This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.
A hierarchical spatial model for well yield in complex aquifers
NASA Astrophysics Data System (ADS)
Montgomery, J.; O'sullivan, F.
2017-12-01
Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
Analysis of view synthesis prediction architectures in modern coding standards
NASA Astrophysics Data System (ADS)
Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang
2013-09-01
Depth-based 3D formats are currently being developed as extensions to both the AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with complexity comparable to traditional motion/disparity compensation.
Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.
System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating its system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.
Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with TRAM Test Data
NASA Technical Reports Server (NTRS)
Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan
1999-01-01
A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.
2009-01-01
PARMA: PHITS-based Analytical Radiation Model in the Atmosphere. PCAIRE: Predictive Code for Aircrew Radiation Exposure. PHITS: Particle and Heavy Ion Transport code System. The radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA) use the same dose equivalent coefficients, from the ICRP-60 regulations.
Numerical predictions of EML (electromagnetic launcher) system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.
1987-01-01
The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and the coupling of these codes to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are compared with experimental data and presented for the LTS prototype tests.
Bayesian decision support for coding occupational injury data.
Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R
2016-06-01
Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
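The screening logic described in this abstract (two Naïve Bayes models, agreement between them, and a prediction-strength threshold to route cases to manual review) can be sketched with a toy example. The injury narratives, codes, and threshold below are invented for illustration; they are not the SOII data or the paper's actual models.

```python
from collections import Counter, defaultdict
import math

# Toy illustration: autocode only when the single-word (SW) and
# two-word-sequence (TW) Naive Bayes models agree and both clear a
# prediction-strength threshold; otherwise flag for manual review.

def train_nb(records, feats):
    """records: list of (text, code); feats maps text -> feature list."""
    priors, counts = Counter(), defaultdict(Counter)
    for text, code in records:
        priors[code] += 1
        counts[code].update(feats(text))
    return priors, counts

def predict_nb(text, priors, counts, feats):
    """Return (best_code, normalized prediction strength in [0, 1])."""
    total = sum(priors.values())
    vocab = len({f for c in counts.values() for f in c})
    logp = {}
    for code, prior in priors.items():
        n = sum(counts[code].values())
        s = math.log(prior / total)
        for f in feats(text):
            s += math.log((counts[code][f] + 1) / (n + vocab))  # Laplace smoothing
        logp[code] = s
    m = max(logp.values())
    probs = {c: math.exp(s - m) for c, s in logp.items()}
    best = max(probs, key=probs.get)
    return best, probs[best] / sum(probs.values())

unigrams = lambda t: t.lower().split()
bigrams = lambda t: [a + " " + b for a, b in zip(t.lower().split(), t.lower().split()[1:])]

# invented training narratives with invented codes
train = [("fell from ladder", "FALL"), ("fell off ladder", "FALL"),
         ("cut by saw blade", "CUT"), ("cut with knife", "CUT")]
sw = train_nb(train, unigrams)
tw = train_nb(train, bigrams)

def route(text, threshold=0.6):
    """Autocode on agreement + strength; otherwise send to manual review."""
    code_sw, s_sw = predict_nb(text, *sw, unigrams)
    code_tw, s_tw = predict_nb(text, *tw, bigrams)
    if code_sw == code_tw and min(s_sw, s_tw) >= threshold:
        return "AUTO", code_sw
    return "REVIEW", code_sw

print(route("fell from a ladder"))
```

When the two models disagree, or either strength falls below the threshold, the case is filtered for human review, which mirrors the paper's finding that reviewing only the flagged subset preserves overall accuracy.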
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance for HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
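A minimal sketch of the adaptive weighting idea: instead of plainly averaging two predictor blocks, search a small candidate set of weights and keep the pair that minimizes distortion against the block being coded. The blocks, noise levels, and candidate weight set below are invented; the paper's method operates inside an HEVC codec, not on raw arrays.

```python
import numpy as np

def best_weighted_prediction(target, p0, p1, weights=(0.25, 0.5, 0.75)):
    """Pick the weight w for pred = w*p0 + (1-w)*p1 minimizing squared error."""
    best_w, best_pred, best_sse = None, None, float("inf")
    for w in weights:
        pred = w * p0 + (1.0 - w) * p1
        sse = float(np.sum((target - pred) ** 2))  # sum of squared errors
        if sse < best_sse:
            best_w, best_pred, best_sse = w, pred, sse
    return best_w, best_pred, best_sse

rng = np.random.default_rng(0)
target = rng.integers(0, 256, (8, 8)).astype(float)  # 8x8 block being coded
p0 = target + rng.normal(0, 2, (8, 8))    # accurate predictor block
p1 = target + rng.normal(0, 20, (8, 8))   # noisy predictor block
w, pred, sse = best_weighted_prediction(target, p0, p1)

# plain bi-prediction corresponds to w = 0.5
sse_avg = float(np.sum((target - 0.5 * (p0 + p1)) ** 2))
print(w, sse <= sse_avg)
```

Because w = 0.5 is itself a candidate, the adaptive choice can never be worse than plain averaging in this distortion measure; in a real codec the cost of signaling the chosen weight also enters the rate-distortion trade-off.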
Similarity relations in visual search predict rapid visual categorization
Mohan, Krithika; Arun, S. P.
2012-01-01
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
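The posterior-as-prior chaining across unit-problem levels can be illustrated with a deliberately simplified conjugate model. The Beta-Binomial setup and the per-level data below are invented stand-ins, not the MFIX CFD model parameters; with conjugacy, carrying a posterior forward as the next prior reduces to parameter arithmetic.

```python
# Sketch of hierarchical calibration: the posterior from one unit-problem
# level becomes the prior for the next, progressively tightening the
# parameter estimate as the problems grow more complex.

def beta_update(alpha, beta, successes, trials):
    """Posterior of a Beta(alpha, beta) prior after binomial data."""
    return alpha + successes, beta + (trials - successes)

# weak prior at the simplest unit problem
alpha, beta = 1.0, 1.0
# invented (successes, trials) for three increasingly complex unit problems
unit_problem_data = [(7, 10), (18, 25), (40, 60)]

for s, n in unit_problem_data:
    alpha, beta = beta_update(alpha, beta, s, n)  # posterior -> next prior

mean = alpha / (alpha + beta)
print(alpha, beta, round(mean, 3))
```

In the paper's setting the updates are not closed-form, so Markov Chain Monte Carlo plays the role that the conjugate formula plays here, but the data flow between tiers is the same.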
TFaNS Tone Fan Noise Design/Prediction System. Volume 2; User's Manual; 1.4
NASA Technical Reports Server (NTRS)
Topol, David A.; Eversman, Walter
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. CUP3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides information on code input and file structure essential for potential users of TFANS. This report is divided into three volumes: Volume 1. System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2. User's Manual, TFANS Vers. 1.4; Volume 3. Evaluation of System Codes.
TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. CUP3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report evaluates TFANS versus full-scale and ADP 22" rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.
Use of statistical and neural net approaches in predicting toxicity of chemicals.
Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D
2000-01-01
Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.
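The tiered-descriptor idea (fit increasingly rich descriptor sets and compare explanatory power) can be sketched with ordinary least squares on synthetic data. The descriptor matrices and response values below are invented, not the benzene-derivative dataset; the sketch only shows the graduated model-building pattern.

```python
import numpy as np

# Synthetic stand-ins for the three descriptor tiers. The response depends
# mostly on the topochemical and quantum tiers, mimicking the finding that
# topostructural indices alone model the data poorly.
rng = np.random.default_rng(1)
n = 69  # same count as the study's benzene derivatives, values invented
topostructural = rng.normal(size=(n, 2))
topochemical = rng.normal(size=(n, 2))
quantum = rng.normal(size=(n, 1))
y = (0.2 * topostructural[:, 0] + 1.5 * topochemical[:, 0]
     + 1.0 * quantum[:, 0] + rng.normal(0, 0.3, n))

def r_squared(X, y):
    """In-sample R^2 of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

tiers = [topostructural,
         np.hstack([topostructural, topochemical]),
         np.hstack([topostructural, topochemical, quantum])]
scores = [r_squared(X, y) for X in tiers]
print([round(s, 3) for s in scores])
```

In-sample R^2 can only grow as nested descriptor tiers are added, so in practice the hierarchical approach pairs this with cross-validation or complexity penalties before concluding that a richer tier genuinely helps.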
Hanks, E.M.; Hooten, M.B.; Baker, F.A.
2011-01-01
Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
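One way to see how two surveys of very different sensitivity can be reconciled under a single latent infestation rate is a small grid-approximation sketch. The counts and detection sensitivities below are invented for illustration, and the paper's actual model is a far richer Bayesian hierarchical spatial model; the sketch only shows the borrow-strength mechanism.

```python
import math

def binom_pmf(k, n, q):
    """Binomial probability mass function."""
    return math.comb(n, k) * q**k * (1.0 - q)**(n - k)

# survey 1: large operational inventory, assumed low detection sensitivity
k1, n1, d1 = 110, 1000, 0.25
# survey 2: small specific-pest survey, assumed high detection sensitivity
k2, n2, d2 = 28, 50, 0.95

# flat prior on the latent infestation probability p; each survey observes
# mistletoe at rate p * (its detection sensitivity)
grid = [i / 1000 for i in range(1, 1000)]
post = [binom_pmf(k1, n1, p * d1) * binom_pmf(k2, n2, p * d2) for p in grid]
z = sum(post)
post = [w / z for w in post]  # normalize to a posterior over p

mean = sum(p * w for p, w in zip(grid, post))
print(round(mean, 3))
```

Neither survey's raw rate (11% or 56%) is taken at face value: the posterior over the latent rate sits between the two sensitivity-corrected estimates, weighted by how informative each survey is.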
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
A flavor symmetry model for bilarge leptonic mixing and the lepton masses
NASA Astrophysics Data System (ADS)
Ohlsson, Tommy; Seidl, Gerhart
2002-11-01
We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.
Role of N-Methyl-D-Aspartate Receptors in Action-Based Predictive Coding Deficits in Schizophrenia.
Kort, Naomi S; Ford, Judith M; Roach, Brian J; Gunduz-Bruce, Handan; Krystal, John H; Jaeger, Judith; Reinhart, Robert M G; Mathalon, Daniel H
2017-03-15
Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist, ketamine, on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia. In two separate studies, the N1 component of the event-related potential elicited by speech sounds during vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before infusion and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared. N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen's d = 1.14) and schizophrenia (Cohen's d = .85). Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Shell stability analysis in a computer aided engineering (CAE) environment
NASA Technical Reports Server (NTRS)
Arbocz, J.; Hol, J. M. A. M.
1993-01-01
The development of 'DISDECO', the Delft Interactive Shell DEsign COde is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling sensitive structures. With this open ended, hierarchical, interactive computer code the user can access from his workstation successively programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation 2-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.
Semantic and visual memory codes in learning disabled readers.
Swanson, H L
1984-02-01
Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.
Neural networks for data compression and invariant image recognition
NASA Technical Reports Server (NTRS)
Gardner, Sheldon
1989-01-01
An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
Decoding the neural mechanisms of human tool use
Gallivan, Jason P; McLean, D Adam; Valyear, Kenneth F; Culham, Jody C
2013-01-01
Sophisticated tool use is a defining characteristic of the primate species but how is it supported by the brain, particularly the human brain? Here we show, using functional MRI and pattern classification methods, that tool use is subserved by multiple distributed action-centred neural representations that are both shared with and distinct from those of the hand. In areas of frontoparietal cortex we found a common representation for planned hand- and tool-related actions. In contrast, in parietal and occipitotemporal regions implicated in hand actions and body perception we found that coding remained selectively linked to upcoming actions of the hand whereas in parietal and occipitotemporal regions implicated in tool-related processing the coding remained selectively linked to upcoming actions of the tool. The highly specialized and hierarchical nature of this coding suggests that hand- and tool-related actions are represented separately at earlier levels of sensorimotor processing before becoming integrated in frontoparietal cortex. DOI: http://dx.doi.org/10.7554/eLife.00425.001 PMID:23741616
ACSYNT - A standards-based system for parametric, computer aided conceptual design of aircraft
NASA Technical Reports Server (NTRS)
Jayaram, S.; Myklebust, A.; Gelhausen, P.
1992-01-01
A group of eight US aerospace companies, together with several NASA and Navy centers led by the NASA Ames Systems Analysis Branch and Virginia Tech's CAD Laboratory, agreed in 1990, through the assistance of the American Technology Initiative, to form the ACSYNT (Aircraft Synthesis) Institute. The Institute is supported by a Joint Sponsored Research Agreement to continue the research and development in computer-aided conceptual design of aircraft initiated by NASA Ames Research Center and Virginia Tech's CAD Laboratory. The result of this collaboration, a feature-based, parametric computer-aided aircraft conceptual design code called ACSYNT, is described. The code is based on analysis routines begun at NASA Ames in the early 1970s. ACSYNT's CAD system is based entirely on the ISO-standard Programmer's Hierarchical Interactive Graphics System and is graphics-device independent. The code includes a highly interactive graphical user interface, automatically generated Hermite and B-spline surface models, and shaded image displays. Numerous features to enhance aircraft conceptual design are described.
Calis, G; Leeuwenberg, E
1981-12-01
Coding rules can be formulated in which the shortest description of a figure-ground pattern exhibits a hierarchical structure, with the ground playing a primary and the figure a secondary role. We hypothesized that the process of perception involves an assimilation phase followed by a test phase in which the ground is tested before the figure. Experiments are described in which pairs of consecutive, superimposed patterns are presented in rapid succession, resulting in a subjective impression of seeing one pattern only. In these presentations, the second pattern introduces some deliberate distortion of the figure or ground displayed in the first pattern. Maximal distortions of the ground occur at shorter stimulus onset asynchronies than maximal distortions of the figure, suggesting that ground codes are processed before figure codes. Moreover, patterns presenting the ground first are more likely to be perceived as ground, regardless of the distortions, than patterns presenting the figure first. This quasi-masking or microgenetic approach might be relevant to theories on mediations of immediate, or "direct," perception.
Open source clustering software.
de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S
2004-06-12
We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
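As a hedged illustration of the first algorithm the library above implements, here is a minimal pure-Python k-means sketch. It shows the alternating assignment/update iteration only and is not the C Clustering Library's or Pycluster's actual API.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means on 2-D points: alternating assignment and
    centroid-update steps. A sketch of the algorithm, not the library."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centers[i] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return centers, clusters
```

The library's other methods (hierarchical clustering, self-organizing maps) follow the same data layout but different update rules.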
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; Cup3D, the fan noise coupling code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/measured-wake postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. The report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.
Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2016-01-01
Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skins. The underlying relations between the J-shaped stress-strain curves and their microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., horseshoe microstructure), with the analyses of equilibrium and deformation compatibility in the periodical lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiment. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relative high stretchability, given the same porosity and arc angle of horseshoe microstructure. 
Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for reproducing the desired stress-strain curves of human skins. This study provides theoretical guidelines for future designs of soft bio-mimetic materials with hierarchical lattice constructions. PMID:27087704
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
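The 10-fold cross-validation used above to assess predictive performance can be sketched as a simple reproducible index split. This is an illustrative stand-in, not the authors' code; the function name and seeding convention are assumptions.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Return k (train, test) index pairs: each observation appears in
    exactly one test fold. Sketch of a standard k-fold CV split."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)        # random but reproducible order
    folds = [idx[i::k] for i in range(k)]   # k disjoint test folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]
```

For each fold, the model is refit on the training indices and scored on the held-out test indices, which is what exposes the over-fitting the abstract describes.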
Peripheral infrastructure vectors and an extended set of plant parts for the Modular Cloning system
Kretschmer, Carola; Gruetzner, Ramona; Löfke, Christian; Dagdas, Yasin; Bürstenbinder, Katharina; Marillonnet, Sylvestre
2018-01-01
Standardized DNA assembly strategies facilitate the generation of multigene constructs from collections of building blocks in plant synthetic biology. A common syntax for hierarchical DNA assembly following the Golden Gate principle employing Type IIs restriction endonucleases was recently developed, and underlies the Modular Cloning and GoldenBraid systems. In these systems, transcriptional units and/or multigene constructs are assembled from libraries of standardized building blocks, also referred to as phytobricks, in several hierarchical levels and by iterative Golden Gate reactions. Here, a toolkit containing further modules for the novel DNA assembly standards was developed. Intended for use with Modular Cloning, most modules are also compatible with GoldenBraid. Firstly, a collection of approximately 80 additional phytobricks is provided, comprising e.g. modules for inducible expression systems, promoters or epitope tags. Furthermore, DNA modules were developed for connecting Modular Cloning and Gateway cloning, either for toggling between systems or for standardized Gateway destination vector assembly. Finally, first instances of a “peripheral infrastructure” around Modular Cloning are presented: While available toolkits are designed for the assembly of plant transformation constructs, vectors were created to also use coding sequence-containing phytobricks directly in yeast two hybrid interaction or bacterial infection assays. The presented material will further enhance versatility of hierarchical DNA assembly strategies. PMID:29847550
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Canhai
This report is prepared for the demonstration of hierarchical prediction of carbon capture efficiency of a solvent-based absorption column. A computational fluid dynamics (CFD) model is first developed to simulate the core phenomena of solvent-based carbon capture, i.e., CO2 physical absorption and chemical reaction, on a simplified geometry of a wetted wall column (WWC) at bench scale. Aqueous solutions of ethanolamine (MEA), a highly CO2-soluble and reactive solvent, are commonly selected as the scrubbing liquid for the CO2 stream; CO2 is captured by both physical and chemical absorption during the scrubbing process. In order to provide confidence bounds on the computational predictions of this complex engineering system, a hierarchical calibration and validation framework is proposed. The overall goal of this effort is to provide a mechanism-based predictive framework, with confidence bounds on the overall mass transfer coefficient of the WWC, supported by statistical analyses of the corresponding WWC experiments of increasing physical complexity.
Reconstruction of late Holocene climate based on tree growth and mechanistic hierarchical models
Tipton, John; Hooten, Mevin B.; Pederson, Neil; Tingley, Martin; Bishop, Daniel
2016-01-01
Reconstruction of pre-instrumental, late Holocene climate is important for understanding how climate has changed in the past and how climate might change in the future. Statistical prediction of paleoclimate from tree ring widths is challenging because tree ring widths are a one-dimensional summary of annual growth that represents a multi-dimensional set of climatic and biotic influences. We develop a Bayesian hierarchical framework using a nonlinear, biologically motivated tree ring growth model to jointly reconstruct temperature and precipitation in the Hudson Valley, New York. Using a common growth function to describe the response of a tree to climate, we allow for species-specific parameterizations of the growth response. To enable predictive backcasts, we model the climate variables with a vector autoregressive process on an annual timescale coupled with a multivariate conditional autoregressive process that accounts for temporal correlation and cross-correlation between temperature and precipitation on a monthly scale. Our multi-scale temporal model allows for flexibility in the climate response through time at different temporal scales and predicts reasonable climate scenarios given tree ring width data.
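The annual-scale vector autoregressive climate process described above can be sketched with a toy two-variable VAR(1) simulator. This is a hedged illustration of the process class only, not the authors' model; the transition matrix and noise scale are placeholders.

```python
import random

def simulate_var1(A, n, sigma=1.0, seed=0):
    """Simulate a 2-variable VAR(1) process x_t = A x_{t-1} + noise,
    a toy version of the annual temperature/precipitation dynamics."""
    rng = random.Random(seed)
    x = [0.0, 0.0]
    out = []
    for _ in range(n):
        noise = [rng.gauss(0, sigma), rng.gauss(0, sigma)]
        x = [A[0][0] * x[0] + A[0][1] * x[1] + noise[0],
             A[1][0] * x[0] + A[1][1] * x[1] + noise[1]]
        out.append(x)
    return out
```

The off-diagonal entries of `A` carry the cross-correlation between the two climate variables that the full model also captures at the monthly scale.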
Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian
2016-01-01
With the continuous expansion of the cloud computing platform scale and the rapid growth of users and applications, how to efficiently use system resources to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregative load indicator for the virtual machines in the cloud, which serves as the input to the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm: it uses the periodically predicted load values of the nodes, obtained from the AHPGD and the HHGA-optimized RBFNN, to calculate the corresponding node weights and update them continually. The scheme thereby keeps the advantages of the static weighted round-robin algorithm while avoiding its shortcomings.
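The static weighted round-robin baseline that the paper's dynamic algorithm extends can be sketched in a few lines. This shows only the static scheme with fixed integer weights; the paper's contribution is recomputing these weights from predicted load, which is not reproduced here.

```python
from itertools import islice

def weighted_round_robin(weights):
    """Static weighted round-robin: yields each node in proportion to its
    integer weight. Sketch of the baseline the dynamic scheme extends."""
    while True:
        for node, w in weights.items():
            for _ in range(w):
                yield node
```

For example, `list(islice(weighted_round_robin({'a': 2, 'b': 1}), 6))` dispatches twice as many requests to `a` as to `b`.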
User's manual for the ALS base heating prediction code, volume 2
NASA Technical Reports Server (NTRS)
Reardon, John E.; Fulton, Michael S.
1992-01-01
The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.
2010-01-01
I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework incorporates coherently multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.
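The Gibbs-sampling strategy underlying BayeSN can be illustrated on a much smaller hierarchical model. The sketch below is a toy normal-means sampler with known variances, assuming a flat prior on the population mean; it shows the alternating conditional draws only and is in no way the BayeSN code.

```python
import random
import statistics

def gibbs_normal_means(y, s2=1.0, tau2=1.0, iters=2000, seed=0):
    """Toy Gibbs sampler for y_i ~ N(theta_i, s2), theta_i ~ N(mu, tau2),
    flat prior on mu. Returns the chain of mu draws."""
    rng = random.Random(seed)
    n = len(y)
    mu = 0.0
    mus = []
    for _ in range(iters):
        # theta_i | mu, y_i: precision-weighted compromise of datum and mu.
        prec = 1.0 / s2 + 1.0 / tau2
        thetas = [rng.gauss((yi / s2 + mu / tau2) / prec, (1.0 / prec) ** 0.5)
                  for yi in y]
        # mu | thetas ~ N(mean(thetas), tau2 / n) under the flat prior.
        mu = rng.gauss(statistics.fmean(thetas), (tau2 / n) ** 0.5)
        mus.append(mu)
    return mus
```

Averaging the chain after burn-in approximates the posterior mean of the population-level parameter, which is the same mechanism the full model uses for the supernova and dust populations.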
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2017-03-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso logistic regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
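The AUC metric used above to compare the two models can be computed without tracing the ROC curve, via the rank-sum identity. The sketch below is a generic illustration, not the authors' evaluation code.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For instance, `auc([1, 0, 1, 0], [0.9, 0.1, 0.4, 0.6])` returns 0.75, since three of the four positive-negative pairs are correctly ordered.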
Topological dimension tunes activity patterns in hierarchical modular networks
NASA Astrophysics Data System (ADS)
Safari, Ali; Moretti, Paolo; Muñoz, Miguel A.
2017-11-01
Connectivity patterns of relevance in neuroscience and systems biology can be encoded in hierarchical modular networks (HMNs). Recent studies highlight the role of hierarchical modular organization in shaping brain activity patterns, providing an excellent substrate to promote both segregation and integration of neural information. Here, we propose an extensive analysis of the critical spreading rate (or ‘epidemic’ threshold)—separating a phase with endemic persistent activity from one in which activity ceases—on diverse HMNs. By employing analytical and computational techniques we determine the nature of such a threshold and scrutinize how it depends on general structural features of the underlying HMN. We critically discuss the extent to which current graph-spectral methods can be applied to predict the onset of spreading in HMNs and, most importantly, we elucidate the role played by the network topological dimension as a relevant and unifying structural parameter, controlling the epidemic threshold.
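One of the graph-spectral methods the abstract discusses estimates the epidemic threshold from the leading adjacency eigenvalue (under the quenched mean-field approximation, the threshold is its inverse). A minimal power-iteration sketch, illustrative and not the authors' code:

```python
def largest_eigenvalue(adj, iters=500):
    """Power iteration for the leading eigenvalue of a nonnegative
    adjacency matrix; 1 / eigenvalue is the mean-field epidemic threshold."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w] if lam else v
    return lam
```

For the complete graph on four nodes the leading eigenvalue is 3, giving a mean-field threshold of 1/3; the paper's point is that on HMNs such spectral estimates must be interpreted with care, with the topological dimension playing the unifying role.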
Hierarchical Bayes approach for subgroup analysis.
Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C
2017-01-01
In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.
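The kernel of the hierarchical (normal-normal) model for subgroup effects admits a closed form when the variance components are treated as known. The sketch below shows that shrinkage step only, with invented names; the article's full Bayes treatment puts priors on the variances instead.

```python
def shrink_subgroup_effects(estimates, se2, tau2):
    """Shrink subgroup treatment-effect estimates toward the precision-
    weighted overall effect. se2: squared standard errors per subgroup;
    tau2: between-subgroup variance (assumed known in this sketch)."""
    w = [1.0 / (s + tau2) for s in se2]                  # precision weights
    overall = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    b = [tau2 / (tau2 + s) for s in se2]                 # shrinkage factors
    return overall, [overall + bi * (e - overall)
                     for e, bi in zip(estimates, b)]
```

Noisier subgroups (larger `se2`) are pulled more strongly toward the overall effect, which is the behavior the consistency assessment then examines.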
PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions
NASA Technical Reports Server (NTRS)
Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark
1990-01-01
Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect gas versions of the code were upgraded and tested against benchmark wind tunnel cases of a hemisphere-cylinder, the three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat-transfer predictions due to grid density, boundary conditions (such as the singular stagnation-line axis), and artificial dissipation terms are presented, together with the subsequent improvements made to the code. The experience gained with the perfect gas code is currently being utilized in applications of an equilibrium air real gas PARC version developed at REMTECH.
Computer programs to predict induced effects of jets exhausting into a crossflow
NASA Technical Reports Server (NTRS)
Perkins, S. C., Jr.; Mendenhall, M. R.
1984-01-01
This user's manual describes two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface, with empirical corrections for the viscous or nonpotential effects. The manual contains a description of the use of both programs, instructions for preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.
Implicit Coupling Approach for Simulation of Charring Carbon Ablators
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Gokcen, Tahir
2013-01-01
This study demonstrates that coupling of a material thermal response code and a flow solver with nonequilibrium gas/surface interaction for simulation of charring carbon ablators can be performed using an implicit approach. The material thermal response code used in this study is the three-dimensional version of the Fully Implicit Ablation and Thermal response program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation method. Coupling between the material response and flow codes is performed by solving the surface mass balance in the flow solver and the surface energy balance in the material response code. Thus, the material surface recession is predicted in the flow code, and the surface temperature and pyrolysis gas injection rate are computed in the material response code. It is demonstrated that the time-lagged explicit approach is sufficient for simulations at low surface heating conditions, in which the surface ablation rate is not a strong function of the surface temperature. At elevated surface heating conditions, the implicit approach has to be taken, because the carbon ablation rate becomes a stiff function of the surface temperature, and the explicit approach appears to be inappropriate, resulting in severe numerical oscillations of the predicted surface temperature. Implicit coupling for simulation of arc-jet models is performed, and the predictions are compared with measured data. Implicit coupling for trajectory-based simulation of the Stardust fore-body heat shield is also conducted. The predicted stagnation point total recession is compared with that predicted using the chemical equilibrium surface assumption.
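The stiffness argument above can be sketched numerically. The toy surface energy balance below uses illustrative constants (it is not the FIAT/DPLR model): an Arrhenius-like ablation sink makes the balance a stiff function of surface temperature, and the coupled balance is solved implicitly by Newton iteration rather than by explicit time-lagging.

```python
import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def energy_residual(tw, q_in, eps=0.9, a=5.0, b=2.0e4, h_abl=1.0e7):
    """Surface energy balance residual: incoming heating minus
    reradiation minus an ablation heat sink. The Arrhenius-like term
    a*exp(-b/Tw) is a stiff function of Tw (all constants illustrative)."""
    return q_in - eps * SIGMA * tw**4 - a * math.exp(-b / tw) * h_abl

def solve_surface_temperature(q_in, t0=1000.0, tol=1e-8, max_iter=100):
    """Implicit (Newton) solve of the coupled surface energy balance,
    mimicking the implicit flow/material coupling described above."""
    tw = t0
    for _ in range(max_iter):
        f = energy_residual(tw, q_in)
        dt = 1e-3 * tw                          # numerical derivative df/dTw
        dfdt = (energy_residual(tw + dt, q_in) - f) / dt
        step = f / dfdt
        tw -= step
        if abs(step) < tol * tw:
            break
    return tw
```

The implicit solve lands on a consistent surface temperature in a handful of iterations, whereas lagging the ablation term explicitly at high heating would oscillate, as the abstract describes.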
Models for Multidimensional Tests and Hierarchically Structured Training Materials.
1985-05-01
A mean field neural network for hierarchical module placement
NASA Technical Reports Server (NTRS)
Unaltuna, M. Kemal; Pitchumani, Vijay
1992-01-01
This paper proposes a mean field neural network for the two-dimensional module placement problem. An efficient coding scheme with only O(N log N) neurons is employed where N is the number of modules. The neurons are evolved in groups of N in log N iteration steps such that the circuit is recursively partitioned in alternating vertical and horizontal directions. In our simulations, the network was able to find optimal solutions to all test problems with up to 128 modules.
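The O(N log N) coding scheme rests on recursive partitioning in alternating directions. The sketch below shows only that deterministic partitioning step, assuming the module ordering is already given (in the paper, finding a good ordering is what the mean field network does); it assumes a power-of-four module count for a square grid.

```python
def recursive_place(modules, x0=0, y0=0, w=None, h=None, vertical=True):
    """Assign each module a cell in a w x h grid by recursively splitting
    the module list in half, alternating vertical and horizontal cuts --
    the alternating-direction recursive partitioning the O(N log N)
    neuron coding emulates. Assumes len(modules) is a power of four."""
    n = len(modules)
    if w is None:
        side = int(n ** 0.5)                    # square grid assumed
        w = h = side
    if n == 1:
        return {modules[0]: (x0, y0)}
    half = n // 2
    place = {}
    if vertical:                                # cut left/right
        place.update(recursive_place(modules[:half], x0, y0, w // 2, h, False))
        place.update(recursive_place(modules[half:], x0 + w // 2, y0,
                                     w - w // 2, h, False))
    else:                                       # cut top/bottom
        place.update(recursive_place(modules[:half], x0, y0, w, h // 2, True))
        place.update(recursive_place(modules[half:], x0, y0 + h // 2,
                                     w, h - h // 2, True))
    return place
```

Each module ends up in a distinct grid cell after log N alternating splits, which is why O(N log N) binary decisions suffice to encode a placement.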
2016-12-08
mesoscopic models of interfaces and interphases, and microstructure-resolved representative volume element simulations. Atomic simulations were… careful prediction of the pressure-volume-temperature equation of state, pressure- and temperature-dependent crystal and liquid thermal and transport…
ERIC Educational Resources Information Center
Ying, Yu-Wen; Han, Meekyung
2008-01-01
The study examined variation in the prediction of adjustment in Taiwanese students by ethnic density. A total of 155 Taiwanese students were assessed via survey pre-departure and three times post-arrival in the United States. Hierarchical regression analysis showed students on campuses with fewer other Taiwanese peers formed more friendships with…
Analysis and recognition of 5′ UTR intron splice sites in human pre-mRNA
Eden, E.; Brunak, S.
2004-01-01
Prediction of splice sites in non-coding regions of genes is one of the most challenging aspects of gene structure recognition. We perform a rigorous analysis of such splice sites embedded in human 5′ untranslated regions (UTRs), and investigate correlations between this class of splice sites and other features found in the adjacent exons and introns. By restricting the training of neural network algorithms to ‘pure’ UTRs (not extending partially into protein coding regions), we for the first time investigate the predictive power of the splicing signal proper, in contrast to conventional splice site prediction, which typically relies on the change in sequence at the transition from protein coding to non-coding. By doing so, the algorithms were able to pick up subtler splicing signals that were otherwise masked by ‘coding’ noise, thus enhancing significantly the prediction of 5′ UTR splice sites. For example, the non-coding splice site predicting networks pick up compositional and positional bias in the 3′ ends of non-coding exons and 5′ non-coding intron ends, where cytosine and guanine are over-represented. This compositional bias at the true UTR donor sites is also visible in the synaptic weights of the neural networks trained to identify UTR donor sites. Conventional splice site prediction methods perform poorly in UTRs because the reading frame pattern is absent. The NetUTR method presented here performs 2–3-fold better compared with NetGene2 and GenScan in 5′ UTRs. We also tested the 5′ UTR trained method on protein coding regions, and discovered, surprisingly, that it works quite well (although it cannot compete with NetGene2). This indicates that the local splicing pattern in UTRs and coding regions is largely the same. The NetUTR method is made publicly available at www.cbs.dtu.dk/services/NetUTR. PMID:14960723
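As a much simpler stand-in for the neural networks used here, a position weight matrix over aligned donor-site windows captures the same kind of compositional and positional bias the abstract describes; the example sites, window length, and pseudocount below are illustrative.

```python
import math

BASES = "ACGT"

def pwm_from_sites(sites, pseudo=1.0):
    """Build a log-odds position weight matrix from aligned donor-site
    windows, assuming a uniform background -- a simplified stand-in for
    the splice-site neural networks described above."""
    length = len(sites[0])
    pwm = []
    for pos in range(length):
        counts = {b: pseudo for b in BASES}
        for s in sites:
            counts[s[pos]] += 1
        total = sum(counts.values())
        pwm.append({b: math.log((counts[b] / total) / 0.25) for b in BASES})
    return pwm

def score(pwm, window):
    """Sum of per-position log-odds for a candidate window."""
    return sum(col[b] for col, b in zip(pwm, window))
```

A window matching the aligned sites scores far above an unrelated sequence, which is the signal a trained network would pick up (the real method also exploits correlations between positions, which a PWM cannot).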
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Trait-based diversification shifts reflect differential extinction among fossil taxa
Wagner, Peter J.; Estabrook, George F.
2014-01-01
Evolution provides many cases of apparent shifts in diversification associated with particular anatomical traits. Three general models connect these patterns to anatomical evolution: (i) elevated net extinction of taxa bearing particular traits, (ii) elevated net speciation of taxa bearing particular traits, and (iii) elevated evolvability expanding the range of anatomies available to some species. Trait-based diversification shifts predict elevated hierarchical stratigraphic compatibility (i.e., primitive→derived→highly derived sequences) among pairs of anatomical characters. The three specific models further predict (i) early loss of diversity for taxa retaining primitive conditions (elevated net extinction), (ii) increased diversification among later members of a clade (elevated net speciation), and (iii) increased disparity among later members in a clade (elevated evolvability). Analyses of 319 anatomical and stratigraphic datasets for fossil species and genera show that hierarchical stratigraphic compatibility exceeds the expectations of trait-independent diversification in the vast majority of cases, which was expected if trait-dependent diversification shifts are common. Excess hierarchical stratigraphic compatibility correlates with early loss of diversity for groups retaining primitive conditions rather than delayed bursts of diversity or disparity across entire clades. Cambrian clades (predominantly trilobites) alone fit null expectations well. However, it is not clear whether evolution was unusual among Cambrian taxa or only early trilobites. At least among post-Cambrian taxa, these results implicate models, such as competition and extinction selectivity/resistance, as major drivers of trait-based diversification shifts at the species and genus levels while contradicting the predictions of elevated net speciation and elevated evolvability models. PMID:25331898
Brain networks for confidence weighting and hierarchical inference during probabilistic learning.
Meyniel, Florent; Dehaene, Stanislas
2017-05-09
Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
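A minimal sketch of the confidence-weighting idea (not the authors' ideal observer): a leaky Beta estimate of a transition probability whose effective learning rate falls as confidence accumulates, while the leak discounts old evidence so the estimate can still track the unpredicted changes the abstract mentions. The leak value is an illustrative assumption.

```python
def confidence_weighted_learning(observations, leak=0.02):
    """Estimate a Bernoulli (transition) probability with a leaky Beta
    posterior. The effective learning rate 1/(a+b) shrinks as confidence
    (a+b) grows, and the leak slowly forgets old evidence so that the
    estimate remains sensitive to change points."""
    a = b = 1.0                                 # uniform prior
    trace = []
    for x in observations:
        a, b = (1 - leak) * a, (1 - leak) * b   # discount old evidence
        if x:
            a += 1.0
        else:
            b += 1.0
        mean = a / (a + b)                      # current probability estimate
        confidence = a + b                      # precision-like confidence
        trace.append((mean, confidence))
    return trace
```

Confidence rises while the environment is stable and the estimate swings quickly after a change, qualitatively matching the confidence-weighted updating described above.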
NASA Astrophysics Data System (ADS)
Ghiringhelli, Luca M.; Carbogno, Christian; Levchenko, Sergey; Mohamed, Fawzi; Huhs, Georg; Lüders, Martin; Oliveira, Micael; Scheffler, Matthias
2017-11-01
With big-data driven materials research, the new paradigm of materials science, sharing and wide accessibility of data are becoming crucial aspects. Obviously, a prerequisite for data exchange and big-data analytics is standardization, which means using consistent and unique conventions for, e.g., units, zero baselines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes and thus develops "converters" for the input and output files of all important codes. These converters then translate the data of each code into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files, directly into the same code-independent format. In this perspective paper, we present both strategies and argue that they can and should be regarded as complementary, if not even synergetic. The proposed format and conventions were agreed upon by two teams, the Electronic Structure Library (ESL) of the European Center for Atomic and Molecular Computations (CECAM) and the NOvel MAterials Discovery (NOMAD) Laboratory, a European Centre of Excellence (CoE). A key element of this work is the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
Community Alcohol Outlet Density and Underage Drinking
Chen, Meng-Jinn; Grube, Joel W.; Gruenewald, Paul J.
2009-01-01
Aim This study examined how community alcohol outlet density may be associated with drinking among youths. Methods Longitudinal data were collected from 1091 adolescents (aged 14–16 at baseline) recruited from 50 zip codes in California with varying levels of alcohol outlet density and median household income. Hierarchical linear models were used to examine the associations between zip code alcohol outlet density and frequency rates of general alcohol use and excessive drinking, taking into account zip code median household income and individual-level variables (age, gender, race/ethnicity, personal income, mobility, and perceived drinking by parents and peers). Findings When all other factors were controlled, higher initial levels of drinking and excessive drinking were observed among youths residing in zip codes with higher alcohol outlet densities. Growth in drinking and excessive drinking was on average more rapid in zip codes with lower alcohol outlet densities. The relation of zip code alcohol outlet density with drinking appeared to be mitigated by having friends with access to a car. Conclusion Alcohol outlet density may play a significant role in initiation of underage drinking during early teen ages, especially when youths have limited mobility. Youth who reside in areas with low alcohol outlet density may overcome geographic constraints through social networks that increase their mobility and the ability to seek alcohol and drinking opportunities beyond the local community. PMID:20078485
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
NASA Astrophysics Data System (ADS)
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.
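A toy version of the bi-region idea (not the proposed codec's actual segmentation rule or syntax): threshold a depth block at its mean to split it into two regions, then predict each region with its own DC value, so a sharp depth edge is not smeared by a single predictor.

```python
def segment_and_predict(block):
    """Split a depth block into two regions by thresholding at the block
    mean, then predict each region with its own DC (mean) value -- a toy
    illustration of plane segmentation-based intraprediction."""
    flat = [v for row in block for v in row]
    thr = sum(flat) / len(flat)
    region = [[1 if v >= thr else 0 for v in row] for row in block]
    means = {}
    for r in (0, 1):
        vals = [v for row, mrow in zip(block, region)
                for v, m in zip(row, mrow) if m == r]
        means[r] = sum(vals) / len(vals) if vals else thr
    pred = [[means[m] for m in mrow] for mrow in region]
    return region, pred
```

On a block containing a depth discontinuity, the per-region predictors reproduce both sides exactly, whereas a single DC predictor would blur the edge.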
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code, and meta compilers are available that compile more than one language. We have developed a meta framework that combines two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations are used between the translation of the source program and machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is organised into an output class file that can be used to produce an interpreted output. The results, especially in providing C++ concepts in Java, give insight into the potentially strong features of the resultant meta-language.
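A minimal stack-machine interpreter illustrates what a machine-independent stack-based byte-code looks like in practice; the opcodes below are invented for illustration and are not the framework's actual instruction set.

```python
# toy opcodes for a stack-based byte-code
PUSH, ADD, MUL, PRINT = "PUSH", "ADD", "MUL", "PRINT"

def run(bytecode):
    """Interpret a tiny machine-independent stack byte-code: operands are
    pushed on a stack, operators pop their arguments and push the result,
    and PRINT pops a value into the output list."""
    stack, output = [], []
    for op, *args in bytecode:
        if op == PUSH:
            stack.append(args[0])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:
            output.append(stack.pop())
    return output

# the expression (2 + 3) * 4, lowered from an AST in post-order:
program = [(PUSH, 2), (PUSH, 3), (ADD,), (PUSH, 4), (MUL,), (PRINT,)]
```

Lowering an AST to such a byte-code is a post-order walk of the tree, which is why the hierarchical high-level representation and the flat low-level one pair naturally.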
Numerical simulation of experiments in the Giant Planet Facility
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.
1979-01-01
A series of existing computer codes is used to numerically simulate ablation experiments in the Giant Planet Facility. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.
A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins
Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predic...
Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C
2014-04-01
To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and non-clinical coder. Assessment of intercoder agreement. Community Australian football. 1082 standardised injury surveillance records. Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist). Adjudication by a third coder (biomechanist). The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-character OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred for the third character; 85% were because one coder assigned a non-specific 'X' code. The sports physician's code was deemed correct in 53% of cases and the epidemiologist's in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Sports injury research requires accurate identification and classification of specific injuries and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of the disagreements occurred for the third OSICS character highlights the fact that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report form details need to reflect this level of complexity and data management teams need to include a broad range of expertise.
A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals
NASA Technical Reports Server (NTRS)
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1994-01-01
Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions
Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.
2014-01-01
Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
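The hybrid relational/hierarchical split can be sketched with standard-library pieces: a relational table for queryable simulation metadata and a nested structure standing in for the HDF results store. The class, table, and column names below are invented for illustration, not WholeCellSimDB's actual schema.

```python
import sqlite3

class SimStore:
    """Minimal hybrid store in the spirit of WholeCellSimDB: simulation
    metadata lives in a relational table (fast to search), while bulk
    results live in a hierarchical structure (a nested dict here,
    standing in for HDF groups and datasets)."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE sims (id INTEGER PRIMARY KEY, model TEXT, length_s REAL)")
        self.results = {}           # sim id -> {group: {dataset: values}}

    def deposit(self, model, length_s, results):
        cur = self.db.execute(
            "INSERT INTO sims (model, length_s) VALUES (?, ?)", (model, length_s))
        self.results[cur.lastrowid] = results
        return cur.lastrowid

    def search(self, model):
        rows = self.db.execute(
            "SELECT id FROM sims WHERE model = ?", (model,)).fetchall()
        return [r[0] for r in rows]

    def slice(self, sim_id, group, dataset):
        return self.results[sim_id][group][dataset]
```

Searching metadata never touches the bulk results, and slicing results never scans metadata, which is the point of the hybrid architecture.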
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R^2. Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
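A stripped-down sketch of the evaluation loop described above: k-fold cross-validated R^2 for a toy risk-adjustment formula on synthetic claims. The data, the risk-cell model, and all names are illustrative assumptions, not the paper's MarketScan setup or super learner.

```python
import random

def kfold_r2(xs, ys, fit_factory, k=5, seed=0):
    """Cross-validated R^2: fit on k-1 folds, score on the held-out fold,
    pooling squared errors across all folds."""
    idx = list(range(len(ys)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sse = sst = 0.0
    for fold in folds:
        held = set(fold)
        train = [(xs[i], ys[i]) for i in idx if i not in held]
        predict = fit_factory(train)
        ybar = sum(y for _, y in train) / len(train)
        for i in fold:
            sse += (ys[i] - predict(xs[i])) ** 2
            sst += (ys[i] - ybar) ** 2
    return 1.0 - sse / sst

def cell_mean_model(train):
    """Toy risk-adjustment formula: predict the mean cost of a risk cell,
    falling back to the overall mean for unseen cells."""
    sums, counts = {}, {}
    for x, y in train:
        sums[x] = sums.get(x, 0.0) + y
        counts[x] = counts.get(x, 0) + 1
    overall = sum(sums.values()) / sum(counts.values())
    return lambda x: sums[x] / counts[x] if x in counts else overall

# synthetic claims: annual cost driven by a risk cell (e.g., an HCC group)
rng = random.Random(1)
cells = [rng.randrange(4) for _ in range(400)]
costs = [1000.0 * (c + 1) + rng.gauss(0.0, 100.0) for c in cells]
```

Comparing candidate formulas by held-out R^2 rather than in-sample fit is what guards against rewarding formulas that merely memorize coding patterns.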
Lemche, Erwin; Joraschky, Peter; Klann-Delius, Gisela
2013-12-01
In a longitudinal natural language development study in Germany, the acquisition of verbal symbols for present persons, absent persons, inanimate things and the mother-toddler dyad was investigated. Following the notion that verbal referent use is more developed in ostensive contexts, symbolic play situations were coded for verbal person reference by means of noun and pronoun use. Depending on attachment classifications at twelve months of age, effects of attachment classification and maternal language input were studied up to 36 months at four time points. Hierarchical regression analyses revealed that, except for mother absence, maternal verbal referent input rates at 17 and 36 months were stronger predictors for all referent types than any of the attachment organizations, or any other social or biological predictor variable. Attachment effects accounted for up to 9.8% of unique variance proportions in the person reference variables. Perinatal and familial measures predicted person references dependent on reference type. The results of this investigation indicate that mother-reference, self-reference and thing-reference develop in similar quantities measured from the 17-month time point, but are dependent on attachment quality. Copyright © 2013 Elsevier Inc. All rights reserved.
Nonlinear Response and Residual Strength of Damaged Stiffened Shells Subjected to Combined Loads
NASA Technical Reports Server (NTRS)
Starnes, James H., Jr.; Britt, Vicki O.; Rose, Cheryl A.; Rankin, Charles C.
1996-01-01
The results of an analytical study of the nonlinear response of stiffened fuselage shells with long cracks are presented. The shells are modeled with a hierarchical modeling strategy and analyzed with a nonlinear shell analysis code that maintains the shell in a nonlinear equilibrium state while the crack is grown. The analysis accurately accounts for global and local structural response phenomena. Fuselage skins, frames, stringers and failsafe straps are included in the models. Results are presented for various combinations of internal pressure and mechanical bending, vertical shear and torsion loads, and the effects of crack orientation and location on the shell response are described. These results indicate that the nonlinear interaction between the in-plane stress resultants and the out-of-plane displacements near a crack can significantly affect the structural response of the shell, and the stress-intensity factors associated with a crack that are used to predict residual strength. The effects of representative combined loading conditions on the stress-intensity factors associated with a crack are presented. The effects of varying structural parameters on the stress-intensity factors associated with a crack, and on self-similar and non-self-similar crack growth are also presented.
TAS: A Transonic Aircraft/Store flow field prediction code
NASA Technical Reports Server (NTRS)
Thompson, D. S.
1983-01-01
A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.
Modification of codes NUALGAM and BREMRAD, Volume 1
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Huang, R.; Firstenberg, H.
1971-01-01
The NUGAM2 code predicts forward and backward angular energy differential and integrated distributions for gamma photons and fluorescent radiation emerging from finite laminar transport media. It determines buildup and albedo data for scientific research and engineering purposes; it also predicts the emission characteristics of finite radioisotope sources. The results are shown to be in very good agreement with available published data. The code predicts data for many situations in which no published data is available in the energy range up to 5 MeV. The NUGAM3 code predicts the pulse height response of inorganic (NaI and CsI) scintillation detectors to gamma photons. Because it allows the scintillator to be clad and mounted on a photomultiplier as in the experimental or industrial application, it is a more practical and thus useful code than others previously reported. Results are in excellent agreement with published Monte Carlo and experimental data in the energy range up to 4.5 MeV.
Dream to Predict? REM Dreaming as Prospective Coding
Llewellyn, Sue
2016-01-01
The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies—through expectations derived from discerning patterns in associated past experiences—would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus—without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning—within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate—another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. 
This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a predictive code. A particular dream is analyzed as an illustration. PMID:26779078
Advanced propeller noise prediction in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, M. H.; Spence, P. L.
1992-01-01
The time domain code ASSPIN gives acousticians a powerful technique for advanced propeller noise prediction. Except for nonlinear effects, the code uses exact solutions of the Ffowcs Williams-Hawkings equation with exact blade geometry and kinematics. The inclusion of nonaxial inflow, periodic loading noise, and adaptive time steps to accelerate computer execution completes the development of this code.
Quantifying the Hierarchical Order in Self-Aligned Carbon Nanotubes from Atomic to Micrometer Scale.
Meshot, Eric R; Zwissler, Darwin W; Bui, Ngoc; Kuykendall, Tevye R; Wang, Cheng; Hexemer, Alexander; Wu, Kuang Jen J; Fornasiero, Francesco
2017-06-27
Fundamental understanding of structure-property relationships in hierarchically organized nanostructures is crucial for the development of new functionality, yet quantifying structure across multiple length scales is challenging. In this work, we used nondestructive X-ray scattering to quantitatively map the multiscale structure of hierarchically self-organized carbon nanotube (CNT) "forests" across 4 orders of magnitude in length scale, from 2.0 Å to 1.5 μm. Fully resolved structural features include the graphitic honeycomb lattice and interlayer walls (atomic), CNT diameter (nano), as well as the greater CNT ensemble (meso) and large corrugations (micro). Correlating orientational order across hierarchical levels revealed a cascading decrease as we probed finer structural feature sizes with enhanced sensitivity to small-scale disorder. Furthermore, we established qualitative relationships for single-, few-, and multiwall CNT forest characteristics, showing that multiscale orientational order is directly correlated with number density spanning 10⁹-10¹² cm⁻², yet order is inversely proportional to CNT diameter, number of walls, and atomic defects. Lastly, we captured and quantified ultralow-q meridional scattering features and built a phenomenological model of the large-scale CNT forest morphology, which predicted and confirmed that these features arise due to microscale corrugations along the vertical forest direction. Providing detailed structural information at multiple length scales is important for design and synthesis of CNT materials as well as other hierarchically organized nanostructures.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
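The variance decomposition behind an HBMA tree follows the law of total variance: the total model variance is the probability-weighted mean of the within-model variances plus the between-model variance of the leaf predictions. A toy sketch with two uncertain components; all probabilities, means, and variances below are illustrative, not from the cited study:

```python
import numpy as np

# Two uncertain components: geological architecture (A1, A2) crossed with
# boundary condition (B1, B2). Each leaf model carries a posterior
# probability, a prediction mean, and a within-model variance.
leaves = {
    ("A1", "B1"): dict(p=0.4, mean=10.0, var=1.0),
    ("A1", "B2"): dict(p=0.2, mean=12.0, var=1.5),
    ("A2", "B1"): dict(p=0.3, mean=9.0,  var=0.8),
    ("A2", "B2"): dict(p=0.1, mean=14.0, var=2.0),
}

p = np.array([m["p"] for m in leaves.values()])
mu = np.array([m["mean"] for m in leaves.values()])
var = np.array([m["var"] for m in leaves.values()])

bma_mean = p @ mu                       # model-averaged prediction
within = p @ var                        # expected within-model variance
between = p @ (mu - bma_mean) ** 2      # between-model variance
total = within + between                # total model variance
print(bma_mean, within, between, total)
```

Segregating `between` by component (summing leaf probabilities within each architecture branch first, then across branches) is what lets the HBMA tree attribute uncertainty to individual model components.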
Hsu, Arthur L; Tang, Sen-Lin; Halgamuge, Saman K
2003-11-01
Current Self-Organizing Map (SOM) approaches to gene expression pattern clustering require the user to predefine the number of clusters likely to be expected. Hierarchical clustering methods used in this area do not provide unique partitioning of data. We describe an unsupervised dynamic hierarchical self-organizing approach, which suggests an appropriate number of clusters, to perform class discovery and marker gene identification in microarray data. In the process of class discovery, the proposed algorithm identifies corresponding sets of predictor genes that best distinguish one class from other classes. The approach integrates merits of hierarchical clustering with robustness against noise known from self-organizing approaches. The proposed algorithm applied to DNA microarray data sets of two types of cancers has demonstrated its ability to produce the most suitable number of clusters. Further, the corresponding marker genes identified through the unsupervised algorithm also have a strong biological relationship to the specific cancer class. The algorithm tested on leukemia microarray data, which contains three leukemia types, was able to determine three major and one minor cluster. Prediction models built for the four clusters indicate that the prediction strength for the smaller cluster is generally low; it is therefore labelled as an uncertain cluster. Further analysis shows that the uncertain cluster can be subdivided further, and the subdivisions are related to two of the original clusters. Another test performed using colon cancer microarray data has automatically derived two clusters, which is consistent with the number of classes in the data (cancerous and normal). Java software of the dynamic SOM tree algorithm is available upon request for academic use. A comparison of rectangular and hexagonal topologies for GSOM is available from http://www.mame.mu.oz.au/mechatronics/journalinfo/Hsu2003supp.pdf
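The competitive learning rule underlying any SOM is simple to sketch; the dynamic hierarchical variant described above grows and splits the grid, but the core update is the same. A minimal 1-D SOM on toy two-cluster data (grid size, schedules, and data are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two tight synthetic clusters, a toy stand-in for expression profiles.
data = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(1, 0.1, (20, 3))])

nodes = rng.normal(0.5, 0.3, (4, 3))   # 1-D SOM with 4 nodes
positions = np.arange(4)

for t in range(200):
    lr = 0.5 * (1 - t / 200)              # decaying learning rate
    sigma = 2.0 * (1 - t / 200) + 0.1     # decaying neighborhood width
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))   # best-matching unit
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
    nodes += lr * h[:, None] * (x - nodes)            # pull BMU + neighbors

# Each sample maps to its nearest node, giving a cluster label.
labels = np.array([np.argmin(((nodes - x) ** 2).sum(axis=1)) for x in data])
print(labels)
```

A dynamic SOM tree would monitor per-node quantization error and spawn child maps under overloaded nodes, which is how the method suggests a cluster count instead of requiring one upfront.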
Multiscale solute transport upscaling for a three-dimensional hierarchical porous medium
NASA Astrophysics Data System (ADS)
Zhang, Mingkan; Zhang, Ye
2015-03-01
A laboratory-generated hierarchical, fully heterogeneous aquifer model (FHM) provides a reference for developing and testing an upscaling approach that integrates large-scale connectivity mapping with flow and transport modeling. Based on the FHM, three hydrostratigraphic models (HSMs) that capture lithological (static) connectivity at different resolutions are created, each corresponding to a sedimentary hierarchy. Under increasing system lnK variances (0.1, 1.0, 4.5), flow upscaling is first conducted to calculate an equivalent hydraulic conductivity for each connectivity (or unit) of the HSMs. Given the computed flow fields, an instantaneous, conservative tracer test is simulated by all models. For the HSMs, two upscaling formulations are tested based on the advection-dispersion equation (ADE), implementing space- versus time-dependent macrodispersivity. Comparing flow and transport predictions of the HSMs against those of the reference model, HSMs capturing connectivity at increasing resolutions are more accurate, although upscaling errors increase with system variance. Results suggest: (1) by explicitly modeling connectivity, an enhanced degree of freedom in representing dispersion can improve the ADE-based upscaled models by capturing non-Fickian transport of the FHM; (2) when connectivity is sufficiently resolved, the type of data conditioning used to model transport becomes less critical. Data conditioning, however, is influenced by the prediction goal; (3) when the aquifer is weakly to moderately heterogeneous, the upscaled models adequately capture the transport simulation of the FHM, despite the existence of hierarchical heterogeneity at smaller scales. When the aquifer is strongly heterogeneous, the upscaled models become less accurate because lithological connectivity cannot adequately capture preferential flows; (4) three-dimensional transport connectivities of the hierarchical aquifer differ quantitatively from those analyzed for two-dimensional systems.
This article was corrected on 7 MAY 2015. See the end of the full text for details.
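Flow upscaling of the kind used to assign an equivalent hydraulic conductivity to each unit can be illustrated with the classical layered-medium bounds: the thickness-weighted arithmetic mean for flow parallel to layering and the harmonic mean for flow across it. The layer values below are illustrative, not from the study:

```python
import numpy as np

# Layer conductivities (m/d) and thicknesses (m) of one hypothetical unit.
K = np.array([1.0, 10.0, 0.1, 5.0])
thickness = np.array([2.0, 1.0, 3.0, 2.0])

# Equivalent-K bounds for a perfectly layered medium:
K_parallel = np.average(K, weights=thickness)              # flow along layers
K_perpendicular = thickness.sum() / (thickness / K).sum()  # flow across layers

print(K_parallel, K_perpendicular)
```

The harmonic mean never exceeds the arithmetic mean, so the two values bracket the equivalent conductivity of any intermediate flow direction; low-K layers dominate cross-layer flow, which is why preferential-flow effects are hard to capture once heterogeneity is strong.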
NASA Astrophysics Data System (ADS)
Western, A. W.; Lintern, A.; Liu, S.; Ryu, D.; Webb, J. A.; Leahy, P.; Wilson, P.; Waters, D.; Bende-Michl, U.; Watson, M.
2016-12-01
Many streams, lakes and estuaries are experiencing increasing concentrations and loads of nutrients and sediments. Models that can predict the spatial and temporal variability in water quality of aquatic systems are required to help guide the management and restoration of polluted aquatic systems. We propose that a Bayesian hierarchical modelling framework could be used to predict water quality responses over varying spatial and temporal scales. Stream water quality data and spatial data of catchment characteristics collected throughout Victoria and Queensland (in Australia) over two decades will be used to develop this Bayesian hierarchical model. In this paper, we present the preliminary exploratory data analysis required for the development of the Bayesian hierarchical model. Specifically, we present the results of exploratory data analysis of Total Nitrogen (TN) concentrations in rivers in Victoria (in South-East Australia) to illustrate the catchment characteristics that appear to be influencing spatial variability in (1) mean concentrations of TN; and (2) the relationship between discharge and TN throughout the state. These important catchment characteristics were identified using: (1) monthly TN concentrations measured at 28 water quality gauging stations and (2) climate, land use, topographic and geologic characteristics of the catchments of these 28 sites. Spatial variability in TN concentrations was positively correlated with fertiliser use in the catchment and average temperature. There were negative correlations between TN concentrations and catchment forest cover, annual runoff, runoff perenniality, soil erosivity and catchment slope. The relationship between discharge and TN concentrations showed spatial variability, possibly resulting from climatic and topographic differences between the sites. The results of this study will feed into the hierarchical Bayesian model of river water quality.
NASA Tech Briefs, September 2006
NASA Technical Reports Server (NTRS)
2006-01-01
Topics covered include: Improving Thermomechanical Properties of SiC/SiC Composites; Aerogel/Particle Composites for Thermoelectric Devices; Patches for Repairing Ceramics and Ceramic- Matrix Composites; Lower-Conductivity Ceramic Materials for Thermal-Barrier Coatings; An Alternative for Emergency Preemption of Traffic Lights; Vehicle Transponder for Preemption of Traffic Lights; Automated Announcements of Approaching Emergency Vehicles; Intersection Monitor for Traffic-Light-Preemption System; Full-Duplex Digital Communication on a Single Laser Beam; Stabilizing Microwave Frequency of a Photonic Oscillator; Microwave Oscillators Based on Nonlinear WGM Resonators; Pointing Reference Scheme for Free-Space Optical Communications Systems; High-Level Performance Modeling of SAR Systems; Spectral Analysis Tool 6.2 for Windows; Multi-Platform Avionics Simulator; Silicon-Based Optical Modulator with Ferroelectric Layer; Multiplexing Transducers Based on Tunnel-Diode Oscillators; Scheduling with Automated Resolution of Conflicts; Symbolic Constraint Maintenance Grid; Discerning Trends in Performance Across Multiple Events; Magnetic Field Solver; Computing for Aiming a Spaceborne Bistatic- Radar Transmitter; 4-Vinyl-1,3-Dioxolane-2-One as an Additive for Li-Ion Cells; Probabilistic Prediction of Lifetimes of Ceramic Parts; STRANAL-PMC Version 2.0; Micromechanics and Piezo Enhancements of HyperSizer; Single-Phase Rare-Earth Oxide/Aluminum Oxide Glasses; Tilt/Tip/Piston Manipulator with Base-Mounted Actuators; Measurement of Model Noise in a Hard-Wall Wind Tunnel; Loci-STREAM Version 0.9; The Synergistic Engineering Environment; Reconfigurable Software for Controlling Formation Flying; More About the Tetrahedral Unstructured Software System; Computing Flows Using Chimera and Unstructured Grids; Avoiding Obstructions in Aiming a High-Gain Antenna; Analyzing Aeroelastic Stability of a Tilt-Rotor Aircraft; Tracking Positions and Attitudes of Mars Rovers; Stochastic Evolutionary 
Algorithms for Planning Robot Paths; Compressible Flow Toolbox; Rapid Aeroelastic Analysis of Blade Flutter in Turbomachines; General Flow-Solver Code for Turbomachinery Applications; Code for Multiblock CFD and Heat-Transfer Computations; Rotating-Pump Design Code; Covering a Crucible with Metal Containing Channels; Repairing Fractured Bones by Use of Bioabsorbable Composites; Kalman Filter for Calibrating a Telescope Focal Plane; Electronic Absolute Cartesian Autocollimator; Fiber-Optic Gratings for Lidar Measurements of Water Vapor; Simulating Responses of Gravitational-Wave Instrumentation; SOFTC: A Software Correlator for VLBI; Progress in Computational Simulation of Earthquakes; Database of Properties of Meteors; Computing Spacecraft Solar-Cell Damage by Charged Particles; Thermal Model of a Current-Carrying Wire in a Vacuum; Program for Analyzing Flows in a Complex Network; Program Predicts Performance of Optical Parametric Oscillators; Processing TES Level-1B Data; Automated Camera Calibration; Tracking the Martian CO2 Polar Ice Caps in Infrared Images; Processing TES Level-2 Data; SmaggIce Version 1.8; Solving the Swath Segment Selection Problem; The Spatial Standard Observer; Less-Complex Method of Classifying MPSK; Improvement in Recursive Hierarchical Segmentation of Data; Using Heaps in Recursive Hierarchical Segmentation of Data; Tool for Statistical Analysis and Display of Landing Sites; Automated Assignment of Proposals to Reviewers; Array-Pattern-Match Compiler for Opportunistic Data Analysis; Pre-Processor for Compression of Multispectral Image Data; Compressing Image Data While Limiting the Effects of Data Losses; Flight Operations Analysis Tool; Improvement in Visual Target Tracking for a Mobile Robot; Software for Simulating Air Traffic; Automated Vectorization of Decision-Based Algorithms; Grayscale Optical Correlator Workbench; "One-Stop Shopping" for Ocean Remote-Sensing and Model Data; State Analysis Database Tool; Generating CAHV and 
CAHVOR Images with Shadows in ROAMS; Improving UDP/IP Transmission Without Increasing Congestion; FORTRAN Versions of Reformulated HFGMC Codes; Program for Editing Spacecraft Command Sequences; Flight-Tested Prototype of BEAM Software; Mission Scenario Development Workbench; Marsviewer; Tool for Analysis and Reduction of Scientific Data; ASPEN Version 3.0; Secure Display of Space-Exploration Images; Digital Front End for Wide-Band VLBI Science Receiver; Multifunctional Tanks for Spacecraft; Lightweight, Segmented, Mostly Silicon Telescope Mirror; Assistant for Analyzing Tropical-Rain-Mapping Radar Data; and Anion-Intercalating Cathodes for High-Energy-Density Cells.
The value of prior knowledge in machine learning of complex network systems.
Ferranti, Dana; Krane, David; Craft, David
2017-11-15
Our overall goal is to develop machine-learning approaches based on genomics and other relevant accessible information for use in predicting how a patient will respond to a given proposed drug or treatment. Given the complexity of this problem, we begin by developing, testing and analyzing learning methods using data from simulated systems, which allows us access to a known ground truth. We examine the benefits of using prior system knowledge and investigate how learning accuracy depends on various system parameters as well as the amount of training data available. The simulations are based on Boolean networks (directed graphs with 0/1 node states and logical node update rules), which are the simplest computational systems that can mimic the dynamic behavior of cellular systems. Boolean networks can be generated and simulated at scale, have complex yet cyclical dynamics, and as such provide a useful framework for developing machine-learning algorithms for modular and hierarchical networks such as biological systems in general and cancer in particular. We demonstrate that utilizing prior knowledge (in the form of network connectivity information), without detailed state equations, greatly increases the power of machine-learning algorithms to predict network steady-state node values ('phenotypes') and perturbation responses ('drug effects'). Links to codes and datasets here: https://gray.mgh.harvard.edu/people-directory/71-david-craft-phd. dcraft@broadinstitute.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
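A random Boolean network of the kind the study simulates is compact to write down: each node reads a fixed set of parents through a random truth table, and synchronous updates over a finite state space must eventually revisit a state, yielding a fixed point or cycle ("phenotype"). A minimal sketch; the network size and in-degree are arbitrary illustrative choices:

```python
import random

random.seed(0)
N = 8   # number of nodes ("genes")
K = 2   # inputs per node

# Random wiring (the "connectivity prior") plus a random rule per node.
parents = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronous update: every node applies its rule to its parents."""
    return tuple(
        tables[i][sum(state[p] << b for b, p in enumerate(parents[i]))]
        for i in range(N)
    )

def attractor(state, max_steps=300):
    """Iterate until a previously seen state recurs; with 2^8 = 256
    possible states, a repeat is guaranteed within max_steps."""
    seen = set()
    for _ in range(max_steps):
        if state in seen:
            return state   # first repeated state, which lies on the cycle
        seen.add(state)
        state = step(state)
    return state

start = tuple(random.randint(0, 1) for _ in range(N))
print(attractor(start))
```

Training a learner to map perturbed wirings or initial states to these attractor values is the simulated analogue of predicting drug response, and exposing `parents` to the learner is the "prior knowledge" condition.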
Data Quality Screening Service
NASA Technical Reports Server (NTRS)
Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan
2013-01-01
A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves much research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations of the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of the Boolean expression with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.
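The screening operation the DQSS performs can be sketched as a Boolean expression over quality operands applied to a data field, with failing values masked out. A toy stand-in with NumPy; the field layout and threshold are hypothetical, not the actual HDF product or ontology:

```python
import numpy as np

# Hypothetical retrieval values and a per-pixel quality flag
# (0 = best ... 3 = do not use), mimicking a remote sensing product.
data = np.array([23.1, 24.7, -9999.0, 22.5, 25.3])
quality = np.array([0, 1, 3, 0, 2])

# Screening criterion: a simple conditional expression as operand,
# e.g. the data expert's recommendation "keep quality <= 1".
mask = quality <= 1
screened = np.where(mask, data, np.nan)   # failing pixels -> NaN
print(screened)
```

Compound criteria are built the same way by combining such masks with `&` and `|`, which mirrors how the ontology composes reusable conditional expressions into one Boolean screening rule.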
Single stock dynamics on high-frequency data: from a compressed coding perspective.
Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey
2014-01-01
High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to correspondingly vary as the global market transits in and out of contraction-expansion cycles. These results not only describe the stock dynamics of interest more fully, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.
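The base-8 coupling is easy to picture: one binary extreme-event indicator per series (return, volume, transaction number) packed into a single octal digit per time point, so that digit 7 marks system-wise synchrony. A toy sketch with simulated series; the data-generating process and the upper-quartile cut are illustrative stand-ins, not the HFS algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
ret = rng.standard_t(df=3, size=T)         # heavy-tailed toy "returns"
vol = np.abs(ret) * rng.lognormal(size=T)  # volume coupled to |return|
ntx = np.abs(ret) * rng.lognormal(size=T)  # transaction count, likewise

def extreme(x):
    """1 = extreme event, here via an upper-quartile cut."""
    return (x > np.quantile(x, 0.75)).astype(int)

# Pack the three indicators into one base-8 digit per time point.
digits = extreme(np.abs(ret)) * 4 + extreme(vol) * 2 + extreme(ntx)
print(np.bincount(digits, minlength=8))    # frequency of each joint state
```

Compressing runs of identical digits into state transitions would then give the shortened sequence the abstract describes; in this toy, extreme returns visibly raise the odds of simultaneous volume and transaction extremes.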
The NASA-LeRC wind turbine sound prediction code
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1981-01-01
Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by the Mod-1 wind turbine. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.
Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne
2003-01-01
A vortex lattice code, CAMRAD II, and a Reynolds-Averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal axis wind turbine. All computations were compared with experimental data that was collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial as well as yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.
Transfer Student Success: Educationally Purposeful Activities Predictive of Undergraduate GPA
ERIC Educational Resources Information Center
Fauria, Renee M.; Fuller, Matthew B.
2015-01-01
Researchers evaluated the effects of Educationally Purposeful Activities (EPAs) on transfer and nontransfer students' cumulative GPAs. Hierarchical, linear, and multiple regression models yielded seven statistically significant educationally purposeful items that influenced undergraduate student GPAs. Statistically significant positive EPAs for…
Internal Accountability and District Achievement: How Superintendents Affect Student Learning
ERIC Educational Resources Information Center
Hough, Kimberly L.
2014-01-01
This quantitative survey study was designed to determine whether superintendent accountability behaviors or agreement about accountability behaviors between superintendents and their subordinate central office administrators predicted district student achievement. Hierarchical multiple regression and analyses of covariance were employed,…
Practical Assessment, Research & Evaluation, 2000-2001.
ERIC Educational Resources Information Center
Rudner, Lawrence M., Ed.; Schafer, William D., Ed.
2001-01-01
This document consists of papers published in the electronic journal "Practical Assessment, Research & Evaluation" during 2000-2001: (1) "Advantages of Hierarchical Linear Modeling" (Jason W. Osborne); (2) "Prediction in Multiple Regression" (Jason W. Osborne); (3) "Scoring Rubrics: What, When, and How?"…
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
Image Search Reranking With Hierarchical Topic Awareness.
Tian, Xinmei; Yang, Linjun; Lu, Yijuan; Tian, Qi; Tao, Dacheng
2015-10-01
With much attention from both academia and industrial communities, visual search reranking has recently been proposed to refine image search results obtained from text-based image search engines. Most traditional reranking methods cannot capture both the relevance and the diversity of the search results at the same time, or they ignore the hierarchical topic structure of the search results and treat each topic equally and independently. However, in real applications, images returned for certain queries are naturally in a hierarchical organization rather than a simple parallel relation. In this paper, a new reranking method, topic-aware reranking (TARerank), is proposed. TARerank describes the hierarchical topic structure of search results in one model and seamlessly captures both the relevance and the diversity of the image search results. Through a structured learning framework, relevance and diversity are modeled in TARerank by a set of carefully designed features, and the model is then learned from human-labeled training samples. The learned model is expected to predict reranking results with high relevance and diversity for testing queries. To verify the effectiveness of the proposed method, we collect an image search dataset and conduct comparison experiments on it. The experimental results demonstrate that the proposed TARerank outperforms existing relevance-based and diversified reranking methods.
An analytics approach to designing patient centered medical homes.
Ajorlou, Saeede; Shams, Issac; Yang, Kai
2015-03-01
Recently the patient centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current practices of medical homes, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such balance in clinical supply and demand, issues such as excessive under- and over-utilization of physicians, long waiting times for receiving the appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this paper, by using the hierarchical generalized linear model with multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework. The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that using a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the predictive power of the model.
Gore, K L; Carter, M M; Parker, S
2002-06-01
Trait anxiety is believed to be a hierarchical construct composed of several lower-order factors (Adv. Behav. Res. Therapy, 15 (1993) 147; J. Anxiety Disorders, 9 (1995) 163). Assessment devices such as the Social Interaction Anxiety Scale, the Social Phobia Scale (SIAS and SPS; Behav. Res. Therapy, 36 (4) (1998) 455), and the Anxiety Sensitivity Index (ASI; Behav. Res. Therapy, 24 (1986) 1) are good measures of the presumably separate lower-order factors. This study compared the effectiveness of the SIAS, SPS, ASI-physical scale and STAI-T (State-Trait Anxiety Inventory. Palo Alto, CA: Consulting Psychologists Press (1970)) as predictors of anxious response to a social challenge (asking an aloof confederate out on a date). Consistent with the hierarchical model of anxiety, the measures of trait anxiety were moderately correlated with each other and each was a significant predictor of anxious response. The specific measures of trait social anxiety were slightly better predictors of anxious response to the social challenge than was either the ASI-physical scale or the STAI-T. The results provide evidence of the predictive validity of these social trait measures and some support for their specificity in the prediction of anxious response to a social challenge.
Irwin, Jessica L; Beeghly, Marjorie; Rosenblum, Katherine L; Muzik, Maria
2016-12-01
The postpartum period brings a host of biopsychosocial, familial, and economic changes, which may be challenging for new mothers, especially those with trauma histories. Trauma-exposed women are at heightened risk for psychiatric symptomatology and reduced quality of life. The current study sought to evaluate whether a set of hypothesized promotive factors assessed during the first 18 months postpartum (positive parenting, family cohesion, and maternal resilience) are associated with life satisfaction in this population, after controlling for income and postpartum psychiatric symptoms. Analyses were based on data collected for 266 mother-infant dyads from a longitudinal cohort study, Maternal Anxiety during the Childbearing Years (MACY), of women oversampled for childhood maltreatment history. Hierarchical linear regression was used to evaluate the study hypotheses. Consistent with prior work, greater postpartum psychiatric symptoms and lower income predicted poorer perceptions of life quality. In hierarchical regressions controlling for income and psychiatric symptoms, positive parenting and family cohesion predicted unique variance in mothers' positive perceptions of life quality, and resilience was predictive beyond all other factors. Factors from multiple levels of analysis (maternal, dyadic, and familial) may serve as promotive factors predicting positive perceptions of life quality among women with childhood trauma histories, even those struggling with high levels of psychiatric or economic distress.
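Hierarchical (blockwise) linear regression, as used in this study, enters covariates first and the predictors of interest second, then reads the change in R-squared as the unique variance the new block explains. The sketch below reproduces only that mechanics on simulated stand-in data; the variable names, effect sizes, and noise are assumptions, not the MACY data.

```python
import numpy as np

# Sketch of hierarchical (blockwise) regression: covariates (income,
# psychiatric symptoms) enter in step 1, a promotive factor (positive
# parenting) in step 2, and the R^2 increase is the unique variance the
# new block explains.  All data below are simulated stand-ins.

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 266                                                 # dyads, as in the study
income = rng.normal(size=n)
symptoms = rng.normal(size=n)
parenting = rng.normal(size=n)
life_quality = 0.4 * income - 0.5 * symptoms + 0.3 * parenting + rng.normal(size=n)

step1 = r_squared(np.column_stack([income, symptoms]), life_quality)
step2 = r_squared(np.column_stack([income, symptoms, parenting]), life_quality)
delta_r2 = step2 - step1                                # unique variance of block 2
```

A positive `delta_r2` is the evidence pattern the study reports: the promotive factor explains variance in life quality beyond income and psychiatric symptoms.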
Predicted seafloor facies of Central Santa Monica Bay, California
Dartnell, Peter; Gardner, James V.
2004-01-01
Summary -- Mapping surficial seafloor facies (sand, silt, muddy sand, rock, etc.) should be the first step in marine geological studies and is crucial when modeling sediment processes and pollution transport, deciphering tectonics, and defining benthic habitats. This report outlines an empirical technique that predicts the distribution of seafloor facies for a large area offshore of Los Angeles, California, using high-resolution bathymetry and co-registered, calibrated backscatter from multibeam echosounders (MBES) correlated to ground-truth sediment samples. The technique uses a series of procedures involving supervised classification and a hierarchical decision-tree classification that are now available in advanced image-analysis software packages. Derivative variance images of both bathymetry and acoustic backscatter are calculated from the MBES data and then used in a hierarchical decision-tree framework to classify the MBES data into areas of rock, gravelly muddy sand, muddy sand, and mud. A quantitative accuracy assessment of the classification results is performed using ground-truth sediment samples. The predicted facies map is also ground-truthed using seafloor photographs and high-resolution sub-bottom seismic-reflection profiles. This Open-File Report contains the predicted seafloor facies map as a georeferenced TIFF image, along with the multibeam bathymetry and acoustic backscatter data used in the study and an explanation of the empirical classification process.
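The report's hierarchical decision tree splits pixels on backscatter strength and local variance (roughness) derived from the MBES data. As a hedged toy version of that per-pixel rule cascade, the sketch below classifies a pixel into the report's four facies classes; the threshold values are invented for illustration, whereas the report derives its rules from ground-truth sediment samples.

```python
# Toy hierarchical decision tree in the spirit of the report: each seafloor
# pixel is classified from its backscatter strength (dB) and local
# bathymetric variance (a roughness proxy).  Thresholds are illustrative
# assumptions only; the report calibrates its rules against sediment samples.

def classify_pixel(backscatter_db, bathy_variance):
    if bathy_variance > 5.0:            # very rough seafloor -> rock
        return "rock"
    if backscatter_db > -20.0:          # strong return, smoother -> coarse mix
        return "gravelly muddy sand"
    if backscatter_db > -30.0:          # intermediate return
        return "muddy sand"
    return "mud"                        # weak return, smooth seafloor
```

Applying such a rule cascade to every pixel of the derivative images yields the facies raster, which is then checked against sediment samples, seafloor photographs, and seismic-reflection profiles as the report describes.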
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Tao, W.-K.; Lau, W. K.; Atlas, R.
2010-01-01
Very severe cyclonic storm Nargis devastated Burma (Myanmar) in May 2008, caused tremendous damage and numerous fatalities, and became one of the 10 deadliest tropical cyclones (TCs) of all time. To increase the warning time in order to save lives and reduce economic damage, it is important to extend the lead time in the prediction of TCs like Nargis. As recent advances in high-resolution global models and supercomputing technology have shown the potential for improving TC track and intensity forecasts, the ability of a global mesoscale model to predict TC genesis in the Indian Ocean is examined in this study with the aim of improving simulations of TC climate. High-resolution global simulations with real data show that the initial formation and intensity variations of TC Nargis can be realistically predicted up to 5 days in advance. Preliminary analysis suggests that improved representations of the following environmental conditions and their hierarchical multiscale interactions were the key to achieving this lead time: (1) a westerly wind burst and equatorial trough, (2) an enhanced monsoon circulation with a zero wind shear line, (3) good upper-level outflow with anti-cyclonic wind shear between 200 and 850 hPa, and (4) low-level moisture convergence.