Science.gov

Sample records for multi-dimensional scaling analysis

  1. The multi-dimensional model of Māori identity and cultural engagement: item response theory analysis of scale properties.

    PubMed

    Sibley, Chris G; Houkamau, Carla A

    2013-01-01

    We argue that there is a need for culture-specific measures of identity that delineate the factors that most make sense for specific cultural groups. One such measure, recently developed specifically for Māori peoples, is the Multi-Dimensional Model of Māori Identity and Cultural Engagement (MMM-ICE). Māori are the indigenous peoples of New Zealand. The MMM-ICE is a 6-factor measure that assesses the following aspects of identity and cultural engagement as Māori: (a) group membership evaluation, (b) socio-political consciousness, (c) cultural efficacy and active identity engagement, (d) spirituality, (e) interdependent self-concept, and (f) authenticity beliefs. This article examines the scale properties of the MMM-ICE using item response theory (IRT) analysis in a sample of 492 Māori. The MMM-ICE subscales showed reasonably even levels of measurement precision across the latent trait range. Analysis of age (cohort) effects further indicated that most aspects of Māori identification tended to be higher among older Māori, and these cohort effects were similar for both men and women. This study provides novel support for the reliability and measurement precision of the MMM-ICE. The study also provides a first step in exploring change and stability in Māori identity across the life span. A copy of the scale, along with recommendations for scale scoring, is included. PMID:23356361
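The IRT machinery behind such a scale analysis can be illustrated with a two-parameter logistic (2PL) model. The sketch below is a generic illustration, not the MMM-ICE analysis itself; all parameter values are hypothetical.

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability of endorsing an item,
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item; measurement precision
    peaks near the item's difficulty b."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)
```

Summing `item_information` over a subscale's items across a range of theta values shows where measurement is precise; the "reasonably even measurement precision across the latent trait range" reported above corresponds to a relatively flat summed information curve.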

  2. Multi-Dimensional Shallow Landslide Stability Analysis Suitable for Application at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Milledge, D.; Bellugi, D.; McKean, J. A.; Dietrich, W.

    2012-12-01

    The infinite slope model is the basis for almost all watershed scale slope stability models. However, it assumes that a potential landslide is infinitely long and wide. As a result, it cannot represent resistance at the margins of a potential landslide (e.g. from lateral roots), and is unable to predict the size of a potential landslide. Existing three-dimensional models generally require computationally expensive numerical solutions and have previously been applied only at the hillslope scale. Here we derive an alternative analytical treatment that accounts for lateral resistance by representing the forces acting on each margin of an unstable block. We apply 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure using a log-spiral method on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion is an exponential function of soil depth. We benchmark this treatment against other more complete approaches (Finite Element (FE) and closed form solutions) and find that our model: 1) converges on the infinite slope predictions as length / depth and width / depth ratios become large; 2) agrees with the predictions from state-of-the-art FE models to within +/- 30% error, for the specific cases in which these can be applied. We then test our model's ability to predict failure of an actual (mapped) landslide where the relevant parameters are relatively well constrained. We find that our model predicts failure at the observed location with a nearly identical shape and predicts that larger or smaller shapes conformal to the observed shape are indeed more stable. Finally, we perform a sensitivity analysis using our model to show that lateral reinforcement sets a minimum landslide size, while the additional strength at the downslope boundary means that the optimum shape for a given size is longer in a downslope direction. 
However, reinforcement effects cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of spatial patterns of key parameters (e.g. pore water pressure) and motivating the model's watershed scale application. This watershed scale application requires an efficient method to find the least stable shapes among an almost infinite set. However, when applied in this context, it allows a more complete examination of the controls on landslide size, shape and location.
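For reference, the infinite slope model that the abstract takes as its starting point reduces to a one-line factor-of-safety expression. The sketch below implements only that textbook form (the lateral-resistance and earth-pressure terms of the proposed model are not reproduced), with illustrative parameter values.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, theta_deg, u=0.0):
    """Factor of safety for the classic infinite slope model.

    c: soil cohesion [Pa]; phi_deg: friction angle [deg];
    gamma: soil unit weight [N/m^3]; z: vertical soil depth [m];
    theta_deg: slope angle [deg]; u: pore-water pressure [Pa].
    FS < 1 indicates predicted failure."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(theta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(theta) * math.cos(theta)
    return resisting / driving
```

Because the expression contains no length or width terms, the predicted stability is independent of landslide size, which is exactly the limitation the abstract addresses by adding resistance at the margins.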

  3. Multi-Scale Multi-Dimensional Ion Battery Performance Model

    Energy Science and Technology Software Center (ESTSC)

    2007-05-07

    The Multi-Scale Multi-Dimensional (MSMD) Lithium Ion Battery Model allows for computer prediction and engineering optimization of thermal, electrical, and electrochemical performance of lithium ion cells with realistic geometries. The model introduces separate simulation domains for different scale physics, achieving much higher computational efficiency compared to the single domain approach. It solves a one dimensional electrochemistry model in a micro sub-grid system, and captures the impacts of macro-scale battery design factors on cell performance and material usage by solving cell-level electron and heat transports in a macro grid system.

  4. Exploring perceptually similar cases with multi-dimensional scaling

    NASA Astrophysics Data System (ADS)

    Wang, Juan; Yang, Yongyi; Wernick, Miles N.; Nishikawa, Robert M.

    2014-03-01

    Retrieving a set of known lesions similar to the one being evaluated might be of value for assisting radiologists to distinguish between benign and malignant clustered microcalcifications (MCs) in mammograms. In this work, we investigate how perceptually similar cases with clustered MCs may relate to one another in terms of their underlying characteristics (from disease condition to image features). We first conduct an observer study to collect similarity scores from a group of readers (five radiologists and five non-radiologists) on a set of 2,000 image pairs, which were selected from 222 cases based on their image features. We then explore the potential relationship among the different cases as revealed by their similarity ratings. We apply the multi-dimensional scaling (MDS) technique to embed all the cases in a 2-D plot, in which perceptually similar cases are placed in close vicinity of one another based on their level of similarity. Our results show that cases having different characteristics in their clustered MCs are accordingly placed in different regions in the plot. Moreover, cases of the same pathology tend to be clustered together locally, and neighboring cases (which are more similar) also tend to be similar in their clustered MCs (e.g., cluster size and shape). These results indicate that subjective similarity ratings from the readers are well correlated with the image features of the underlying MCs of the cases, and that perceptually similar cases could be of diagnostic value for discriminating between malignant and benign cases.
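The MDS embedding step can be sketched with classical (Torgerson) scaling, shown here on an invented 4-case similarity matrix standing in for the readers' ratings; the study's actual MDS variant and data are not reproduced.

```python
import numpy as np

# Hypothetical similarity ratings (0-10 scale) for four cases:
# cases 0 and 1 are rated very similar, as are cases 2 and 3.
sim = np.array([
    [10.0, 8.0, 2.0, 1.0],
    [ 8.0, 10.0, 3.0, 2.0],
    [ 2.0, 3.0, 10.0, 7.0],
    [ 1.0, 2.0, 7.0, 10.0],
])
D = sim.max() - sim          # high similarity -> small dissimilarity
n = D.shape[0]

# Classical MDS: double-center the squared dissimilarities, then take
# the top-2 eigenvectors scaled by the square roots of the eigenvalues.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0.0))
```

In the resulting 2-D plot, perceptually similar pairs land near each other, which is the property the observer study exploits to relate perceptual similarity to image features.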

  5. Development of a Multi-Dimensional Scale for PDD and ADHD

    ERIC Educational Resources Information Center

    Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya

    2011-01-01

    A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…

  7. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    SciTech Connect

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat,; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-06-08

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  8. Coupling visualization and data analysis for knowledge discovery from multi-dimensional scientific data

    PubMed Central

    Rübel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keränen, Soile V. E.; Knowles, David W.; Hendriks, Cris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2013-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach. PMID:23762211

  9. The Use of Semantic Differential Scaling to Define the Multi-Dimensional Representation of Odors.

    PubMed

    Dalton, Pamela; Maute, Christopher; Oshida, Akiko; Hikichi, Satoshi; Izumi, Yu

    2008-01-01

    The mental representation elicited by smelling an odor often consists of multiple sensory and affective dimensions, yet the richness of this elaboration is difficult to capture using methods that rate the intensity of these factors in isolation. Attempts to use language descriptors for olfactory experience have also been shown to be rather limited; among non-specialists, there is no universally accepted system for describing odors, leading to greater reliance on specific item associations. In this study we explored the utility of semantic differential scaling for illustrating the various dimensions of olfactory experience. Three hundred volunteers rated 30 distinct odorants using 50 semantic differential adjectives. Three factors emerged from the analysis (based on 17 adjective pairs), accounting for 53% of the variance and corresponding to the evaluation, potency, and activity dimensions identified for other stimulus types. SD scaling appears to be a viable method for identifying the multiple dimensions of mental representation evoked when smelling an odorant and may prove a useful tool for consumer and basic research alike. PRACTICAL APPLICATIONS: Although numerous methods of classifying odors have been developed, little agreement has been achieved on the dimensions that are useful to both basic and consumer research. The identification of a set of semantic differential adjectives relevant to olfactory experience can become a useful tool for classifying the qualitative and affective bases on which odorants differ. In particular, the degree to which odorants evoke multi-dimensional representations from other sensory modalities (visual, auditory, somatosensory, or gustatory) can be usefully applied in the arena of product development both within and across cultures. PMID:19122880
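The factor-extraction step described above can be sketched with a principal component analysis on a ratings matrix. The data below are synthetic (two invented latent factors driving six adjective pairs), used only to show how variance-explained figures like the 53% reported here arise; PCA here is a simple stand-in for the study's factor analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
# Hypothetical semantic-differential ratings: two latent factors, each
# driving three adjective pairs, plus rating noise (all values invented).
f1 = rng.standard_normal((n, 1))
f2 = rng.standard_normal((n, 1))
ratings = np.hstack([np.repeat(f1, 3, axis=1),
                     np.repeat(f2, 3, axis=1)]) \
          + 0.2 * rng.standard_normal((n, 6))

X = ratings - ratings.mean(axis=0)        # center each adjective pair
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)       # variance explained per component
```

With strong latent structure, the first few components absorb most of the variance; the rows of `Vt` play the role of factor loadings over the adjective pairs.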

  10. How Fitch-Margoliash Algorithm can Benefit from Multi Dimensional Scaling

    PubMed Central

    Lespinats, Sylvain; Grando, Delphine; Marchal, Eric; Hakimi, Mohamed-Ali; Tenaillon, Olivier; Bastien, Olivier

    2011-01-01

    Whatever the phylogenetic method, genetic sequences are often described as strings of characters; thus molecular sequences can be viewed as elements of a multi-dimensional space. As a consequence, studying motion in this space (i.e., the evolutionary process) must deal with the surprising features of high-dimensional spaces, such as the concentration of measure phenomenon. To study how these features might influence phylogeny reconstructions, we examined a particularly popular method: the Fitch-Margoliash algorithm, which belongs to the family of Least Squares methods. We show that the Least Squares methods are closely related to Multi Dimensional Scaling. Indeed, the criteria for Fitch-Margoliash and Sammon's mapping are somewhat similar. However, prolific research in Multi Dimensional Scaling has produced methods that clearly outclass Sammon's mapping, and Least Squares methods for tree reconstruction can now take advantage of these improvements. However, false neighborhoods and tears are the two main risks in the dimensionality-reduction field: a false neighborhood corresponds to widely separated data in the original space that are found close in the representation space, while neighboring data displayed in remote positions constitute a tear. To address this problem, we took advantage of the concepts of continuity and trustworthiness in the tree reconstruction field, which limit the risk of false neighborhoods and tears. We also point out the concentration of measure phenomenon as a source of error and introduce new criteria to build phylogenies with improved preservation of distances and robustness. The authors and the Evolutionary Bioinformatics journal dedicate this article to the memory of Professor W.M. Fitch (1929-2011). PMID:21697992
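The kinship between the two criteria mentioned above can be made concrete: both compare original distances D with embedded (or tree) distances d via squared residuals, but Sammon's criterion divides each residual by the original distance, so errors on small distances (local neighborhoods) cost more. The sketch below uses a toy distance matrix, not phylogenetic data.

```python
import numpy as np

def raw_ls_stress(D, d):
    """Unweighted least-squares criterion (Fitch-Margoliash with
    equal weights): every residual counts the same."""
    mask = ~np.eye(D.shape[0], dtype=bool)
    return float(np.sum((D[mask] - d[mask]) ** 2))

def sammon_stress(D, d):
    """Sammon's criterion: residuals are down-weighted by the original
    distance, emphasising preservation of local neighborhoods."""
    mask = ~np.eye(D.shape[0], dtype=bool)
    return float(np.sum((D[mask] - d[mask]) ** 2 / D[mask]) / np.sum(D[mask]))

# Toy distances: points 0 and 1 are close, point 2 is far from both.
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
d_local_err  = D.copy(); d_local_err[0, 1]  = d_local_err[1, 0]  = 2.0
d_global_err = D.copy(); d_global_err[0, 2] = d_global_err[2, 0] = 5.0
```

The two distorted matrices commit the same absolute error (1.0), so the raw least-squares criterion cannot tell them apart, while Sammon's criterion penalizes the error on the small (local) distance much more heavily.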

  11. A Multi-Dimensional Cognitive Analysis of Undergraduate Physics Students' Understanding of Heat Conduction

    ERIC Educational Resources Information Center

    Chiou, Guo-Li; Anderson, O. Roger

    2010-01-01

    This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…

  13. DYVIPAC: an integrated analysis and visualisation framework to probe multi-dimensional biological networks

    PubMed Central

    Nguyen, Lan K.; Degasperi, Andrea; Cotter, Philip; Kholodenko, Boris N.

    2015-01-01

    Biochemical networks are dynamic and multi-dimensional systems, consisting of tens or hundreds of molecular components. Diseases such as cancer commonly arise due to changes in the dynamics of signalling and gene regulatory networks caused by genetic alterations. Elucidating the network dynamics in health and disease is crucial to better understand the disease mechanisms and derive effective therapeutic strategies. However, current approaches to analyse and visualise systems dynamics often provide only low-dimensional projections of the network dynamics, which do not present the multi-dimensional picture of the system behaviour. More efficient and reliable methods for multi-dimensional systems analysis and visualisation are thus required. To address this issue, we here present an integrated analysis and visualisation framework for high-dimensional network behaviour which exploits the advantages provided by parallel coordinates graphs. We demonstrate the applicability of the framework, named “Dynamics Visualisation based on Parallel Coordinates” (DYVIPAC), to a variety of signalling networks ranging in topological wirings and dynamic properties. The framework proved useful in acquiring an integrated understanding of systems behaviour. PMID:26220783

  14. Method of multi-dimensional moment analysis for the characterization of signal peaks

    DOEpatents

    Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A

    2012-10-23

    A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
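The moment analysis underlying this kind of peak characterization can be sketched in a few lines: the zeroth moment gives the peak area, the first its centroid, and the second central moment its broadening; a Peclet-style score built from them rises for sharper peaks. The Gaussian test signal and the particular score below are illustrative, not the patented two-dimensional procedure.

```python
import numpy as np

def peak_moments(t, y):
    """Zeroth, first, and second central statistical moments of a
    sampled signal peak y(t) on a uniform grid."""
    m0 = y.sum()                            # total signal (area, up to dt)
    mu = (t * y).sum() / m0                 # centroid: mean arrival time
    var = ((t - mu) ** 2 * y).sum() / m0    # variance: peak broadening
    return m0, mu, var

# Synthetic Gaussian peak standing in for, e.g., an ion-mobility
# arrival-time signal (center 4.0, width 0.3; values invented).
t = np.linspace(0.0, 10.0, 2001)
y = np.exp(-0.5 * ((t - 4.0) / 0.3) ** 2)
m0, mu, var = peak_moments(t, y)
peclet = mu ** 2 / var    # Peclet-style sharpness score: higher is sharper
```

Scoring many such peaks automatically, across a grid of operating parameters, is what allows the optimum operating point of the analytical system to be located.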

  15. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    SciTech Connect

    Nguyen, Hoa T.; Stone, Daithi; Bethel, E. Wes

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that a statistical measure preserves the variation signal inherent in data more faithfully, across both multi-dimensional projections and multi-resolution representations, than a methodology based upon averaging.
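The core contrast here, averaging versus a variation-preserving statistic, can be sketched on a one-dimensional synthetic signal. Carrying a per-block spread statistic is one of several possible choices, used here only to illustrate the idea; it is not the paper's exact metric.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "field": a smooth signal plus noise-like fine-scale variation.
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 4096)) + 0.3 * rng.standard_normal(4096)

def reduce_by_mean(data, block):
    """Conventional multi-resolution reduction: per-block average.
    Fine-scale variation is smoothed away."""
    return data.reshape(-1, block).mean(axis=1)

def reduce_keep_variation(data, block):
    """Variation-aware reduction: keep a per-block spread statistic
    alongside the average, so fine-scale variation survives."""
    blocks = data.reshape(-1, block)
    return blocks.mean(axis=1), blocks.std(axis=1)

means, spreads = reduce_keep_variation(x, 64)
```

After plain averaging, the reduced series has visibly less variation than the original; the retained per-block spread records roughly the noise level that averaging discarded.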

  16. Overview of Computer-Aided Engineering of Batteries and Introduction to Multi-Scale, Multi-Dimensional Modeling of Li-Ion Batteries (Presentation)

    SciTech Connect

    Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

    2012-05-01

    This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

  17. Multi-dimensional PARAFAC2 component analysis of multi-channel EEG data including temporal tracking.

    PubMed

    Weis, Martin; Jannek, Dunja; Roemer, Florian; Guenther, Thomas; Haardt, Martin; Husar, Peter

    2010-01-01

    The identification of signal components in electroencephalographic (EEG) data originating from neural activities is a long-standing problem in neuroscience. This area has regained attention due to the possibilities of multi-dimensional signal processing. In this work we analyze measured visual-evoked potentials on the basis of the time-varying spectrum for each channel. Recently, parallel factor (PARAFAC) analysis has been used to identify the signal components in the space-time-frequency domain. However, the PARAFAC decomposition is not able to cope with components appearing time-shifted over the different channels. Furthermore, it is not possible to track PARAFAC components over time. In this contribution we show how to overcome these problems by using the PARAFAC2 model, which makes it an attractive approach for processing EEG data with highly dynamic (moving) sources. PMID:21096263

  18. MD-SeeGH: a platform for integrative analysis of multi-dimensional genomic data

    PubMed Central

    Chi, Bryan; deLeeuw, Ronald J; Coe, Bradley P; Ng, Raymond T; MacAulay, Calum; Lam, Wan L

    2008-01-01

    Background Recent advances in global genomic profiling methodologies have enabled multi-dimensional characterization of biological systems. Complete analysis of these genomic profiles requires an in-depth look at parallel profiles of segmental DNA copy number status, DNA methylation state, single nucleotide polymorphisms, as well as gene expression profiles. Due to the differences in data types, it is difficult to conduct parallel analysis of multiple datasets from diverse platforms. Results To address this issue, we have developed an integrative genomic analysis platform, MD-SeeGH, a software tool that allows users to rapidly and directly analyze genomic datasets spanning multiple genomic experiments. With MD-SeeGH, users have the flexibility to easily update datasets in accordance with new genomic builds, make a quality assessment of data using the filtering features, and identify genetic alterations within single or across multiple experiments. Multiple sample analysis in MD-SeeGH allows users to compare profiles from many experiments alongside tracks containing detailed localized gene information, microRNA, CpG islands, and copy number variations. Conclusion MD-SeeGH is a new platform for the integrative analysis of diverse microarray data, facilitating multiple profile analyses and group comparisons. PMID:18492270

  19. Effective use of metadata in the integration and analysis of multi-dimensional optical data

    NASA Astrophysics Data System (ADS)

    Pastorello, G. Z.; Gamon, J. A.

    2012-12-01

    Data discovery and integration rely on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange.
The DAX (Data Acquisition to eXchange) Web-based tool uses a flexible metadata representation model and takes advantage of multi-dimensional data structures to translate metadata types into data dimensions, effectively reshaping data sets according to available metadata. With that, metadata is tightly integrated into the acquisition-to-exchange cycle, allowing for more focused exploration of data sets while also increasing the value of, and incentives for, keeping good metadata. The tool is being developed and tested with optical data collected in different settings, including laboratory, field, airborne, and satellite platforms.

  20. Bio-inspired hierarchical self-assembly of nanotubes into multi-dimensional and multi-scale structures.

    PubMed

    Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue

    2012-01-01

    Inspired by nature's strategy for preparing collagen, herein we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from the building blocks of pristine titanate nanotubes (TNTs) around 10 nm. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets, and 3D core-shell hybrid crystals. The amylose acts as a molecular glue to drive and direct the hierarchical self-assembly process extending from the microscopic to the macroscopic scale. The whole self-assembly process, as well as the self-assembled structures, was carefully characterized by a combination of 1H NMR, CD, HR-SEM, AFM, HR-TEM, SAED, and EDX measurements. A hierarchical self-assembly mechanism is also proposed. PMID:22075963

  1. Bio-inspired hierarchical self-assembly of nanotubes into multi-dimensional and multi-scale structures

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue

    2011-12-01

    Inspired by nature's strategy for preparing collagen, herein we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from the building blocks of pristine titanate nanotubes (TNTs) around 10 nm. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets, and 3D core-shell hybrid crystals. The amylose acts as a molecular glue to drive and direct the hierarchical self-assembly process extending from the microscopic to the macroscopic scale. The whole self-assembly process, as well as the self-assembled structures, was carefully characterized by a combination of 1H NMR, CD, HR-SEM, AFM, HR-TEM, SAED, and EDX measurements. A hierarchical self-assembly mechanism is also proposed. Electronic supplementary information (ESI) available: Characterization of the A/TNTs and TNT crystals. See DOI: 10.1039/c1nr11151e

  2. Evaluating the use of HILIC in large-scale, multi dimensional proteomics: Horses for courses?

    PubMed Central

    Bensaddek, Dalila; Nicolas, Armel; Lamond, Angus I.

    2015-01-01

    Despite many recent advances in instrumentation, the sheer complexity of biological samples remains a major challenge in large-scale proteomics experiments, reflecting both the large number of protein isoforms and the wide dynamic range of their expression levels. However, while the dynamic range of expression levels for different components of the proteome is estimated to be ~10^7-10^8, the equivalent dynamic range of LC-MS is currently limited to ~10^6. Sample pre-fractionation has therefore become routinely used in large-scale proteomics to reduce sample complexity during MS analysis and thus alleviate the problem of ion suppression and undersampling. There is currently a wide range of chromatographic techniques that can be applied as a first dimension separation. Here, we systematically evaluated the use of hydrophilic interaction liquid chromatography (HILIC), in comparison with hSAX, as a first dimension for peptide fractionation in a bottom-up proteomics workflow. The data indicate that in addition to its role as a useful pre-enrichment method for PTM analysis, HILIC can provide a robust, orthogonal and high-resolution method for increasing the depth of proteome coverage in large-scale proteomics experiments. The data also indicate that the choice of using either HILIC, hSAX, or other methods, is best made taking into account the specific types of biological analyses being performed. PMID:26869852

  3. The use of multi-dimensional flow and morphodynamic models for restoration design analysis

    NASA Astrophysics Data System (ADS)

    McDonald, R.; Nelson, J. M.

    2013-12-01

    River restoration projects with the goal of restoring a wide range of morphologic and ecologic channel processes and functions have become common. The complex interactions between flow and sediment-transport make it challenging to design river channels that are both self-sustaining and improve ecosystem function. The relative immaturity of the field of river restoration and shortcomings in existing methodologies for evaluating channel designs contribute to this problem, often leading to project failures. The call for increased monitoring of constructed channels to evaluate which restoration techniques do and do not work is ubiquitous and may lead to improved channel restoration projects. However, an alternative approach is to detect project flaws before the channels are built by using numerical models to simulate hydraulic and sediment-transport processes and habitat in the proposed channel (Restoration Design Analysis). Multi-dimensional models provide spatially distributed quantities throughout the project domain that may be used to quantitatively evaluate restoration designs for such important metrics as (1) the change in water-surface elevation which can affect the extent and duration of floodplain reconnection, (2) sediment-transport and morphologic change which can affect the channel stability and long-term maintenance of the design; and (3) habitat changes. These models also provide an efficient way to evaluate such quantities over a range of appropriate discharges including low-probability events which often prove the greatest risk to the long-term stability of restored channels. Currently there are many free and open-source modeling frameworks available for such analysis including iRIC, Delft3D, and TELEMAC. In this presentation we give examples of Restoration Design Analysis for each of the metrics above from projects on the Russian River, CA and the Kootenai River, ID. 
These examples demonstrate how detailed Restoration Design Analysis can be used to guide design elements and how this method can point out potential stability problems or other risks before designs proceed to the construction phase.

  4. A Structure-Based Distance Metric for High-Dimensional Space Exploration with Multi-Dimensional Scaling

    SciTech Connect

    Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus

    2014-03-01

Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS), where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates, which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit visual relationships similar to those of the method of parallel coordinates, which is often used alongside them to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far distances from near distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
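As a rough illustration of the idea (not the authors' actual metric, which the abstract does not fully specify), one can define a toy "structural" dissimilarity that compares the shape of each data point's parallel-coordinates polyline while ignoring its overall offset, and embed it with classical MDS:

```python
import numpy as np

def structure_dissimilarity(X):
    """Toy 'structural' dissimilarity: compare the shape of each point's
    profile across the dimension axes (as in parallel coordinates),
    ignoring overall offset. A hypothetical stand-in for the paper's metric."""
    # remove each row's mean so only the polyline *pattern* remains
    P = X - X.mean(axis=1, keepdims=True)
    # Euclidean distance between the centred profiles
    return np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=2))

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed a dissimilarity matrix in k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # keep top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# two clusters whose profiles have the same shape but different offsets:
# a plain Euclidean metric separates them, the structural one does not
rng = np.random.default_rng(0)
base = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
X = np.vstack([base + rng.normal(0, 0.05, 5) for _ in range(5)] +
              [base + 3 + rng.normal(0, 0.05, 5) for _ in range(5)])
Y = classical_mds(structure_dissimilarity(X))
```

With this metric, points 0 and 5 (same polyline shape, offset by 3) come out nearly identical even though their Euclidean distance is large.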

  5. Assessing Children's Homework Performance: Development of Multi-Dimensional, Multi-Informant Rating Scales

    ERIC Educational Resources Information Center

    Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.

    2007-01-01

    Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as…

  6. Multi-dimensional reliability assessment of fractal signature analysis in an outpatient sports medicine population.

    PubMed

    Jarraya, Mohamed; Guermazi, Ali; Niu, Jingbo; Duryea, Jeffrey; Lynch, John A; Roemer, Frank W

    2015-11-01

The aim of this study has been to test reproducibility of fractal signature analysis (FSA) in a young, active patient population taking into account several parameters including intra- and inter-reader placement of regions of interest (ROIs) as well as various aspects of projection geometry. In total, 685 patients were included (135 athletes and 550 non-athletes, 18-36 years old). Regions of interest (ROI) were situated beneath the medial tibial plateau. The reproducibility of texture parameters was evaluated using intraclass correlation coefficients (ICC). Multi-dimensional assessment included: (1) anterior-posterior (A.P.) vs. posterior-anterior (P.A.) (Lyon-Schuss technique) views on 102 knees; (2) unilateral (single knee) vs. bilateral (both knees) acquisition on 27 knees (acquisition technique otherwise identical; same A.P. or P.A. view); (3) repetition of the same image acquisition on 46 knees (same A.P. or P.A. view, and same unilateral or bilateral acquisition); and (4) intra- and inter-reader reliability with repeated placement of the ROIs in the subchondral bone area on 99 randomly chosen knees. ICC values on the reproducibility of texture parameters for A.P. vs. P.A. image acquisitions for horizontal and vertical dimensions combined were 0.72 (95% confidence interval (CI) 0.70-0.74) ranging from 0.47 to 0.81 for the different dimensions. For unilateral vs. bilateral image acquisitions, the ICCs were 0.79 (95% CI 0.76-0.82) ranging from 0.55 to 0.88. For the repetition of the identical view, the ICCs were 0.82 (95% CI 0.80-0.84) ranging from 0.67 to 0.85. Intra-reader reliability was 0.93 (95% CI 0.92-0.94) and inter-observer reliability was 0.96 (95% CI 0.88-0.99). A decrease in reliability was observed with increasing voxel sizes. Our study confirms excellent intra- and inter-reader reliability for FSA; however, results seem to be affected by acquisition technique, which has not been previously recognized. PMID:26343866
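The reliability figures above are intraclass correlation coefficients. A minimal sketch of one common ICC form (one-way random effects, ICC(1,1); the abstract does not state which form the authors used, so this is illustrative only):

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters) array.
    ICC = (MSB - MSW) / (MSB + (k-1) * MSW), from the one-way ANOVA table."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)              # between-subject
    msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

# toy data: two repeated acquisitions of a texture parameter on 6 knees
scores = np.array([[3.1, 3.0],
                   [2.5, 2.6],
                   [4.0, 4.1],
                   [3.6, 3.5],
                   [2.9, 3.0],
                   [3.3, 3.2]])
icc = icc_oneway(scores)   # close repeats -> ICC near 1
```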

  7. Relationships between Organizations and Publics: Development of a Multi-Dimensional Organization-Public Relationship Scale.

    ERIC Educational Resources Information Center

    Bruning, Stephen D.; Ledingham, John A.

    1999-01-01

    Attempts to design a multiple-item, multiple-dimension organization/public relationship scale. Finds that organizations and key publics have three types of relationships: professional, personal, and community. Provides an instrument that can be used to measure the influence that perceptions of the organization/public relationship have on consumer

  8. Psychophysical similarity measure based on multi-dimensional scaling for retrieval of similar images of breast masses on mammograms

    NASA Astrophysics Data System (ADS)

    Nishimura, Kohei; Muramatsu, Chisako; Oiwa, Mikinao; Shiraiwa, Misaki; Endo, Tokiko; Doi, Kunio; Fujita, Hiroshi

    2013-02-01

    For retrieving reference images which may be useful to radiologists in their diagnosis, it is necessary to determine a reliable similarity measure which would agree with radiologists' subjective impression. In this study, we propose a new similarity measure for retrieval of similar images, which may assist radiologists in the distinction between benign and malignant masses on mammograms, and investigated its usefulness. In our previous study, to take into account the subjective impression, the psychophysical similarity measure was determined by use of an artificial neural network (ANN), which was employed to learn the relationship between radiologists' subjective similarity ratings and image features. In this study, we propose a psychophysical similarity measure based on multi-dimensional scaling (MDS) in order to improve the accuracy in retrieval of similar images. Twenty-seven images of masses, 3 each from 9 different pathologic groups, were selected, and the subjective similarity ratings for all possible 351 pairs were determined by 8 expert physicians. MDS was applied using the average subjective ratings, and the relationship between each output axis and image features was modeled by the ANN. The MDS-based psychophysical measures were determined by the distance in the modeled space. With a leave-one-out test method, the conventional psychophysical similarity measure was moderately correlated with subjective similarity ratings (r=0.68), whereas the psychophysical measure based on MDS was highly correlated (r=0.81). The result indicates that a psychophysical similarity measure based on MDS would be useful in the retrieval of similar images.

  9. Similarity from multi-dimensional scaling: solving the accuracy and diversity dilemma in information filtering.

    PubMed

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

Recommender systems are designed to assist individual users in navigating through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
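A minimal sketch of the pipeline described here, with assumed details (binary ratings, Jaccard distances between item neighbourhoods on the bipartite network, classical MDS, and a simple inverse-distance similarity; the paper's exact choices are not given in the abstract):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS embedding of a dissimilarity matrix."""
    n = D.shape[0]
    J = np.eye(n) - 1.0 / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# toy user-item bipartite network (rows: users, cols: items)
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

# Jaccard distance between item neighbourhoods (an assumed raw distance)
inter = R.T @ R
sizes = R.sum(axis=0)
D = 1.0 - inter / (sizes[:, None] + sizes[None, :] - inter)

# similarity from MDS coordinates: closer in the embedding = more similar
coords = classical_mds(D, k=2)
gap = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
sim = 1.0 / (1.0 + gap)

# item-based CF score for user 0: weight items by similarity to consumed ones
user = R[0]
scores = sim @ user
scores[user > 0] = -np.inf          # do not re-recommend seen items
recommended = int(np.argmax(scores))
```

For user 0 (who consumed items 0 and 1), item 2 sits closer to those items in the embedding than item 3, so it is recommended first.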

  11. Nitrogen deposition and multi-dimensional plant diversity at the landscape scale

    PubMed Central

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Amrhein, Valentin; Achermann, Beat

    2015-01-01

Estimating effects of nitrogen (N) deposition is essential for understanding human impacts on biodiversity. However, studies relating atmospheric N deposition to plant diversity are usually restricted to small plots of high conservation value. Here, we used data on 381 randomly selected 1 km² plots covering most habitat types of Central Europe and an elevational range of 2900 m. We found that high atmospheric N deposition was associated with low values of six measures of plant diversity. The weakest negative relation to N deposition was found in the traditionally measured total species richness. The strongest relation to N deposition was in phylogenetic diversity, with an estimated loss of 19% due to atmospheric N deposition as compared with a homogeneously distributed historic N deposition without human influence, or of 11% as compared with a spatially varying N deposition for the year 1880, during industrialization in Europe. Because phylogenetic plant diversity is often related to ecosystem functioning, we suggest that atmospheric N deposition threatens functioning of ecosystems at the landscape scale. PMID:26064640

  12. Large-Scale Multi-Dimensional Document Clustering on GPU Clusters

    SciTech Connect

    Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E

    2010-01-01

Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that the algorithmic complexity is inherently quadratic in the number of documents. As a result, execution time becomes a bottleneck with a large number of documents. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales up to over one million documents processed simultaneously in a sixteen-node GPU cluster. Results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. Such speedups combined with the scalability potential and accelerator-based parallelization are unique in the domain of document-based data mining, to the best of our knowledge.

  13. Facile multi-dimensional profiling of chemical gradients at the millimetre scale.

    PubMed

    Chen, Chih-Lin; Hsieh, Kai-Ta; Hsu, Ching-Fong; Urban, Pawel L

    2016-01-01

A vast number of conventional physicochemical methods are suitable for the analysis of homogeneous samples. However, in various cases, the samples exhibit intrinsic heterogeneity. Tomography allows one to record approximate distributions of chemical species in three-dimensional space. Here we develop a simple optical tomography system which enables performing scans of non-homogeneous samples at different wavelengths. It takes advantage of inexpensive open-source electronics and simple algorithms. The analysed samples are illuminated by a miniature LCD/LED screen which emits light at three wavelengths (598, 547 and 455 nm, corresponding to the R, G, and B channels, respectively). During presentation of each wavelength, the sample vial is rotated by ∼180°, and videoed at 30 frames per second. The RGB values of pixels in the obtained digital snapshots are subsequently collated, and processed to produce sinograms. Following the inverse Radon transform, approximate quasi-three-dimensional images are reconstructed for each wavelength. Sample components with distinct visible light absorption spectra (myoglobin, methylene blue) can be resolved. The system was used to follow dynamic changes in non-homogeneous samples in real time, to visualize binary mixtures, to reconstruct reaction-diffusion fronts formed during the reduction of 2,6-dichlorophenolindophenol by ascorbic acid, and to visualize the distribution of fungal mycelium grown in a semi-solid medium. PMID:26541202

  14. magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation

    NASA Astrophysics Data System (ADS)

    Angleraud, Christophe

    2014-06-01

The ever-increasing amount of data and processing capabilities - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often get very confused as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, to allow analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected through the integration of moving images by the human brain. Magellium has been involved in high-performance image processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.

  15. Sorting multiple classes in multi-dimensional ROC analysis: parametric and nonparametric approaches.

    PubMed

    Li, Jialiang; Chow, Yanyu; Wong, Weng Kee; Wong, Tien Yin

    2014-02-01

    In large-scale data analysis, such as in a microarray study to identify the most differentially expressed genes, diagnostic tests are frequently used to classify and predict subjects into their different categories. Frequently, these categories do not have an intrinsic natural order even though the quantitative test results have a relative order. As identifying the correct order for a proper definition of accuracy measures is important for a high-dimensional receiver operating characteristic (ROC) analysis, we propose rigorous and automated approaches to sort out the multiple categories using simple summary statistics such as means and relative effects. We discuss the hypervolume under the ROC manifold (HUM), its dependence on the order of the test results and the minimum acceptable HUM values in a general multi-category classification problem. Using a leukemia data set and a liver cancer data set, we show how our approaches provide accurate screening results when we have a large number of tests. PMID:24329017
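For three classes, the hypervolume under the ROC manifold (HUM) reduces to the probability that one test value drawn from each class falls in the correct order; the minimum acceptable (chance-level) value is 1/3! = 1/6. A sketch of the empirical estimate, with a simple mean-based ordering heuristic standing in for the paper's automated approaches:

```python
import itertools
import numpy as np

def hum_three_class(x, y, z):
    """Empirical HUM for three ordered classes: the fraction of
    (x_i, y_j, z_k) triples satisfying x_i < y_j < z_k.
    Chance level is 1/6; a useful test should score well above that."""
    x, y, z = map(np.asarray, (x, y, z))
    correct = sum(1 for a, b, c in itertools.product(x, y, z) if a < b < c)
    return correct / (len(x) * len(y) * len(z))

def best_class_order(samples):
    """Order classes by their sample means before computing HUM: a simple
    automated sorting heuristic in the spirit of the paper."""
    order = sorted(samples, key=lambda lbl: np.mean(samples[lbl]))
    return order, hum_three_class(*(samples[lbl] for lbl in order))

# class labels carry no intrinsic order, but the test values do
rng = np.random.default_rng(1)
samples = {"B": rng.normal(2, 1, 40),
           "A": rng.normal(0, 1, 40),
           "C": rng.normal(4, 1, 40)}
order, hum = best_class_order(samples)
```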

  16. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling

    PubMed Central

    Riordan, Daniel P.; Varma, Sushama; West, Robert B.; Brown, Patrick O.

    2015-01-01

    Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) for de novo feature prediction was obtained. 
These results suggest that high-dimensional profiling may advance the development of computer-based systems for automatically parsing relevant histological and cellular features from molecular imaging data of arbitrary human tissue samples, and can provide a framework and resource to spur the optimization of these technologies. PMID:26176839

  17. Pedagogic discourse in introductory classes: Multi-dimensional analysis of textbooks and lectures in biology and macroeconomics

    NASA Astrophysics Data System (ADS)

    Carkin, Susan

    The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning---and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures---more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. 
A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being relatively less marked for information density, abstraction, and non-overt argumentation. In addition to the findings mentioned here, numerous other relationships among the texts exhibit complex patterns of variation related to a number of situational variables. Pedagogical implications are discussed in relation to General Education courses, differing student populations, and the reading and listening demands which students encounter in large introductory classes in the university.

  18. Multi-dimensional TOF-SIMS analysis for effective profiling of disease-related ions from the tissue surface

    PubMed Central

    Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol

    2015-01-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states. PMID:26046669

  20. Multi-dimensional analysis of combustion instabilities in liquid rocket motors

    NASA Technical Reports Server (NTRS)

    Grenda, Jeffrey M.; Venkateswaran, Sankaran; Merkle, Charles L.

    1992-01-01

A three-dimensional analysis of combustion instabilities in liquid rocket engines is presented based on a mixed finite difference/spectral solution methodology for the gas phase and a discrete droplet tracking formulation for the liquid phase. Vaporization is treated by a simplified model based on an infinite thermal conductivity assumption for spherical liquid droplets of fuel in a convective environment undergoing transient heating. A simple two-parameter phenomenological combustion response model is employed for validation of the results in the small amplitude regime. The computational procedure is demonstrated to capture the phenomena of wave propagation within the combustion chamber accurately. Results demonstrate excellent amplitude and phase agreement with analytical solutions for properly selected grid resolutions under both stable and unstable operating conditions. Computations utilizing the simplified droplet model demonstrate stable response to arbitrary pulsing. This is possibly due to the assumption of uniform droplet temperature, which removes the thermal inertia time-lag response of the vaporization process. The mixed-character scheme is sufficiently efficient to allow solutions on workstations at a modest increase in computational time over that required for two-dimensional solutions.

  1. Documentation, Multi-scale and Multi-dimensional Representation of Cultural Heritage for the Policies of Redevelopment, Development and Regeneration

    NASA Astrophysics Data System (ADS)

    De Masi, A.

    2015-09-01

The paper describes reading criteria for the documentation of important buildings in Milan, Italy, as a case study of research on the integration of new technologies to obtain 3D multi-scale representations of architecture. In addition, it affords an overview of current optical 3D measurement sensors and techniques used for surveying, mapping, digital documentation and 3D modeling applications in the Cultural Heritage field. Today, new opportunities for integrated management of data are given by multiresolution models, which can be employed for different scales of representation. The goal of multi-scale representations is to provide several representations, each adapted to a different information density with several degrees of detail. The Digital Representation Platform, along with the 3D City Model, is meant to be particularly useful to heritage managers who are developing recording, documentation, and information management strategies appropriate to territories, sites and monuments. The Digital Representation Platform and 3D City Model are central activities in the decision-making process for heritage conservation management and several urban-related problems. This research investigates the integration of the different levels of detail of a 3D City Model into one consistent 4D data model, with the creation of levels of detail using algorithms from a GIS perspective. In particular, the project is based on open-source smart systems, and conceptualizes a personalized and contextualized exploration of the Cultural Heritage through an experiential analysis of the territory.

  2. Strong relaxation limit of multi-dimensional isentropic Euler equations

    NASA Astrophysics Data System (ADS)

    Xu, Jiang

    2010-06-01

This paper is devoted to studying the strong relaxation limit of multi-dimensional isentropic Euler equations with relaxation. Motivated by the Maxwell iteration, we generalize the analysis of Yong (SIAM J Appl Math 64:1737-1748, 2004) and show that, as the relaxation time tends to zero, the density of certain scaled isentropic Euler equations with relaxation strongly converges towards the smooth solution to the porous medium equation in the framework of Besov spaces with relatively lower regularity. The main analysis tool used is the Littlewood-Paley decomposition.
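The limit described here can be sketched in standard notation (a hedged reconstruction of the usual setting; the paper's precise scaling and Besov-space framework are not reproduced in the abstract):

```latex
% Isentropic Euler equations with relaxation (relaxation time \varepsilon > 0):
\partial_t \rho + \nabla\!\cdot(\rho u) = 0, \qquad
\partial_t(\rho u) + \nabla\!\cdot(\rho u \otimes u) + \nabla p(\rho)
  = -\frac{\rho u}{\varepsilon}, \qquad p(\rho) = A\rho^{\gamma}.

% Under the parabolic (strong relaxation) scaling \tau = \varepsilon t,
% the Maxwell iteration replaces the momentum balance, to leading order,
% by Darcy's law:
\rho u \approx -\varepsilon\,\nabla p(\rho)
\quad\Longrightarrow\quad
\partial_\tau \rho = \Delta p(\rho) \qquad (\varepsilon \to 0),

% which is the porous medium equation satisfied by the limiting density.
```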

  3. Enantiomeric analysis of anatabine, nornicotine and anabasine in commercial tobacco by multi-dimensional gas chromatography and mass spectrometry.

    PubMed

    Liu, Baizhan; Chen, Chaoying; Wu, Da; Su, Qingde

    2008-04-01

    A fully automated multi-dimensional gas chromatography (MDGC) system with a megabore precolumn and cyclodextrin-based analytical column was developed to analyze the enantiomeric compositions of anatabine, nornicotine and anabasine in commercial tobacco. The enantiomer abundances of anatabine and nornicotine varied among different tobacco. S-(-)-anatabine, as a proportion of total anatabine, was 86.6% for flue-cured, 86.0% for burley and 77.5% for oriental tobacco. S-(-)-nornicotine, as a proportion of total nornicotine, was 90.8% in oriental tobacco and higher than in burley (69.4%) and flue-cured (58.7%) tobacco. S-(-)-anabasine, as a proportion of total anabasine, was relatively constant for flue-cured (60.1%), burley (65.1%) and oriental (61.7%) tobacco. A simple solvent extraction with dichloromethane followed by derivatisation with trifluoroacetic anhydride gave relative standard deviations of less than 1.5% for the determination of the S-(-)-isomers of all three alkaloids. The study also indicated that, a higher proportion of S-(-)-nornicotine is related to the more active nicotine demethylation in the leaf. PMID:18342587

  4. Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis

    SciTech Connect

    Jeong, Myong K; Kong, Seong G; Omitaomu, Olufemi A

    2008-09-01

    Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data due to the complexity of functional data types, high correlation, and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.

  5. A robust method for quantitative identification of ordered cores in an ensemble of biomolecular structures by non-linear multi-dimensional scaling using inter-atomic distance variance matrix.

    PubMed

    Kobayashi, Naohiro

    2014-01-01

    Superpositioning of atoms in an ensemble of biomolecules is a common task in a variety of fields in structural biology. Although several automated tools exist based on previously established methods, manual operations to define the atoms in the ordered regions are usually preferred. The task is difficult and inefficient for multi-core proteins with complicated folding topologies. The new method presented here can systematically and quantitatively identify ordered cores, even for molecules containing multiple cores linked by flexible loops. In contrast to established methods, this method treats the variance of inter-atomic distances in an ensemble as information content using a non-linear (NL) function, and then applies multi-dimensional scaling (MDS) to embed the row vectors of the inter-atomic distance variance matrix into a lower-dimensional space. Plotting the identified atom groups in a one- or two-dimensional map enables users to visually and intuitively infer well-ordered atoms in an ensemble, as well as to identify them automatically with standard clustering methods. The performance of the NL-MDS method has been examined for a number of structure ensembles determined by nuclear magnetic resonance, demonstrating that the method is better suited to structural analysis of multi-core proteins than previously established methods. PMID:24384868
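The embedding step described above, MDS applied to the row vectors of an inter-atomic distance variance matrix, can be sketched in NumPy. This is an illustrative reconstruction only: it uses classical (Torgerson) MDS, omits the paper's specific non-linear weighting, and the function names are hypothetical.

```python
import numpy as np

def distance_variance_matrix(ensemble):
    """ensemble: (n_models, n_atoms, 3) coordinates.
    Returns the (n_atoms, n_atoms) variance of each pairwise distance
    across the ensemble; small entries indicate mutually ordered atoms."""
    diffs = ensemble[:, :, None, :] - ensemble[:, None, :, :]
    d = np.linalg.norm(diffs, axis=-1)   # (n_models, n_atoms, n_atoms)
    return d.var(axis=0)

def classical_mds(M, n_components=2):
    """Embed the row vectors of M into a low-dimensional map
    (classical/Torgerson MDS via double centering and eigendecomposition)."""
    D2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(-1)  # squared row distances
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]             # leading eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Because pairwise distances are invariant to rigid-body motion, the variance matrix can be computed without first superposing the ensemble, which is one attraction of this formulation.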

  6. Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.

  7. Multi Dimensional Phase Only Filter

    SciTech Connect

    Gudmundsson, K; Awwal, A

    2004-07-13

    Today's sensor networks provide a wide variety of application domains for high-speed pattern classification systems. Such high-speed systems can be achieved through optical implementation of a specialized phase only filter (POF) correlator. In this research we discuss the modeling and simulation of the POF in the task of pattern classification of multi-dimensional data.

  8. Optimization of a sample preparation method for multiresidue analysis of pesticides in tobacco by single and multi-dimensional gas chromatography-mass spectrometry.

    PubMed

    Khan, Zareen S; Ghosh, Rakesh Kumar; Girame, Rushali; Utture, Sagar C; Gadgil, Manasi; Banerjee, Kaushik; Reddy, D Damodar; Johnson, Nalli

    2014-05-23

    A selective and sensitive multiresidue analysis method, comprising 47 pesticides, was developed and validated in tobacco matrix. The optimized sample preparation procedure in combination with gas chromatography-mass spectrometry in selected-ion-monitoring (GC-MS/SIM) mode offered limits of detection (LOD) and quantification (LOQ) in the range of 3-5 and 7.5-15 ng/g, respectively, with recoveries between 70 and 119% at 50-100 ng/g fortifications. In comparison to the modified QuEChERS method (Quick-Easy-Cheap-Effective-Rugged-Safe: 2 g tobacco + 10 ml water + 10 ml acetonitrile, 30 min vortexing, followed by dispersive solid phase extraction cleanup), the method performed better in minimizing matrix co-extractives, e.g. nicotine and megastigmatrienone. Ambiguity in analysis due to co-elution of target analytes (e.g. transfluthrin-heptachlor) and with matrix co-extractives (e.g. ?-HCH-neophytadiene, 2,4-DDE-linolenic acid) could be resolved by selective multi-dimensional (MD)GC heart-cuts. The method holds promise for routine analysis owing to a notable throughput of 27 samples/person/day. PMID:24746872

  9. Multi-Dimensional Analysis of Large, Complex Slope Instability: Case study of Downie Slide, British Columbia, Canada. (Invited)

    NASA Astrophysics Data System (ADS)

    Kalenchuk, K. S.; Hutchinson, D.; Diederichs, M. S.

    2013-12-01

    Downie Slide, one of the world's largest landslides, is a massive, active, composite, extremely slow rockslide located on the west bank of the Revelstoke Reservoir in British Columbia. It is a 1.5 billion m3 rockslide measuring 2400 m along the river valley, 3300 m from toe to headscarp and up to 245 m thick. Significant contributions to the field of landslide geomechanics have been made by analyses of spatially and temporally discriminated slope deformations, and of how these are controlled by complex geological and geotechnical factors. Downie Slide research demonstrates the importance of delineating massive landslides into morphological regions in order to characterize global slope behaviour and identify localized events, which may or may not influence the overall slope deformation patterns. Massive slope instabilities do not behave as monolithic masses; rather, different landslide zones can display specific landslide processes occurring at variable rates of deformation. The global deformation of Downie Slide is extremely slow; however, localized regions of the slope incur moderate to high rates of movement. The complex deformation processes and composite failure mechanism are driven by topography, non-uniform shear surfaces, and heterogeneous rock mass and shear zone strength and stiffness characteristics. Further, analysis of temporal changes in landslide behaviour clearly shows that different regions of the slope respond differently to changing hydrogeological boundary conditions. State-of-the-art methodologies have been developed for numerical simulation of large landslides; these provide important tools for investigating dynamic landslide systems, accounting for complex three-dimensional geometries, heterogeneous shear zone strength parameters, internal shear zones, the interaction of discrete landslide zones and piezometric fluctuations. 
Numerical models of Downie Slide have been calibrated to reproduce observed slope behaviour, and the calibration process has provided important insight to key factors controlling massive slope mechanics. Through numerical studies it has been shown that the three-dimensional interpretation of basal slip surface geometry and spatial heterogeneity in shear zone stiffness are important factors controlling large-scale slope deformation processes. The role of secondary internal shears and the interaction between landslide morphological zones has also been assessed. Further, numerical simulation of changing groundwater conditions has produced reasonable correlation with field observations. Calibrated models are valuable tools for the forward prediction of landslide dynamics. Calibrated Downie Slide models have been used to investigate how trigger scenarios may accelerate deformations at Downie Slide. The ability to reproduce observed behaviour and forward test hypothesized changes to boundary conditions has valuable application in hazard management of massive landslides. The capacity of decision makers to interpret large amounts of data, respond to rapid changes in a system and understand complex slope dynamics has been enhanced.

  10. A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system

    NASA Astrophysics Data System (ADS)

    Mix, Ken

    The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin upstream of the confluence with the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding the changes in the URG-SLV that led to this event, and the interactions among the various drivers of change in the URG-SLV, is a difficult task. One reason is that complex social-ecological systems are adaptive and contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspect) relevant to the system, except water policy, for which only discrete events were present. Change point and statistical analyses were applied to the longitudes to identify quantifiable changes, to allow detection of cross-scale linkages between drivers, and to detect the presence of feedback cycles. Agriculture was identified as the driver signal. 
Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979--2007. Changes in streamflow, water allocations and water policy were observed in all agriculture periods. Cross-scale linkages were also evident between climate and streamflow; policy and water rights; and agriculture, groundwater pumping and streamflow.
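The change-point step can be illustrated with a minimal least-squares mean-shift detector. This is a sketch under assumptions: the dissertation's actual statistical tests are not specified in the abstract, and the function name is hypothetical.

```python
import numpy as np

def single_change_point(x):
    """Return the split index minimizing the total squared error of a
    two-segment constant-mean fit (a basic mean-shift change-point detector)."""
    x = np.asarray(x, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(2, len(x) - 1):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

Applied recursively (binary segmentation), such a detector can carve a series like annual irrigated acreage into piecewise-constant periods of the kind listed above.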

  11. High-School Exit Examinations and the Schooling Decisions of Teenagers: A Multi-Dimensional Regression-Discontinuity Analysis. NBER Working Paper No. 17112

    ERIC Educational Resources Information Center

    Papay, John P.; Willett, John B.; Murnane, Richard J.

    2011-01-01

    We ask whether failing one or more of the state-mandated high-school exit examinations affects whether students graduate from high school. Using a new multi-dimensional regression-discontinuity approach, we examine simultaneously scores on mathematics and English language arts tests. Barely passing both examinations, as opposed to failing them, …

  12. Condensation heat transfer analysis of the passive containment cooling system of the Purdue University Multi-dimensional Integral Test Assembly

    NASA Astrophysics Data System (ADS)

    Wilmarth de Leonardi, Tauna Lea

    2000-10-01

    The development of a reliable containment cooling system is one of the key areas in advanced nuclear reactor development. There are two categories of containment cooling: active and passive. Active containment cooling usually consists of systems that require active participation in their operation and have, in the past, been reliant on a supply of electrical power. This has instigated worldwide efforts to develop passive containment cooling systems that are safer, more reliable, and simpler in their use. The passive containment cooling system's performance is deteriorated by noncondensable gases that come from the containment and from the gases produced by cladding/steam interaction during a severe accident. These noncondensable gases degrade the heat transfer capabilities of the condensers in passive containment cooling systems because they add a resistance to the condensation process. Some work has been done on modeling condensation heat transfer with noncondensable gases, but little has been done to apply that work to integral facilities. It is important to fully understand the heat transfer capabilities of passive systems so that a detailed assessment of the long-term cooling capabilities can be performed. The existing correlations and models are for through-flow of the mixture of steam and noncondensable gases. This type of analysis may not be applicable to passive containment cooling systems, where there is no clear passage for the steam to escape; steam accumulates in the lower header and tubes, where all of it condenses. The objective of this work was to develop a condensation heat transfer model for the downward cocurrent flow of a steam/air mixture through a condenser tube, taking into account these atypical characteristics of the passive containment cooling system. 
An empirical model was developed that depends solely on the inlet conditions to the condenser system, including the mixture Reynolds number and noncondensable gas concentration. This empirical model is applicable to the condensation heat transfer of the passive containment cooling system. This study was also used to characterize the local heat transfer coefficient with a noncondensable gas present.

  13. Multi-dimensional admittance spectroscopy

    NASA Astrophysics Data System (ADS)

    Wieland, K.; Vasko, A.; Karpov, V. G.

    2013-01-01

    We introduce the concept of multi-dimensional admittance spectroscopy, capable of characterizing thin-film diode structures in both the (standard) transversal and lateral directions. This extends the capabilities of standard admittance spectroscopy, which is based on the model of a leaky capacitor with area defined by the metal contacts. In our approach, the ac signal spreads in the lateral directions far beyond the contact area. The spreading range defines the area of the effective capacitor determining the measured capacitance and conductance; it depends on the ac signal frequency, dc bias, and various structure parameters. A phenomenological description of these dependencies is verified numerically using our original software to model the distributed admittance via finite element circuits. We analyze the case of photovoltaic devices and show how multi-dimensional admittance spectroscopy is sensitive to lateral nonuniformity of the system, particularly to the presence of shunts and weak diodes and their location. In addition, the proposed characterization provides information about the system's lumped parameters, such as sheet resistance, shunt resistance, and open circuit voltage.

  14. Multi-dimensional laser radars

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl; Steinvall, Ove

    2014-06-01

    We introduce the term "multi-dimensional laser radar", where the dimensions include not only the coordinates of the object in space, but also its velocity and orientation, and parameters of the medium: scattering, refraction, temperature, humidity, wind velocity, etc. The parameters can change in time and can be combined. For example, rendezvous and docking missions and autonomous planetary landing are expected to carry, along with laser ranging, laser altimetry, and laser Doppler velocimetry, a 3D ladar imager. Operating in combination, these provide more accurate and safer navigation, docking or landing, and hazard avoidance capabilities. Combination with Doppler-based measurements provides more accurate navigation for both space and cruise missile applications. Information identifying snipers, based on combining polarization and fluctuation parameters with data from other sources, is critical. Combining thermal imaging and vibrometry can unveil the functionality of detected targets. Hyperspectral probing with lasers reveals even more parameters. Different algorithms and architectures for ladar-based target acquisition, reconstruction of 3D images from point clouds, information fusion and display are discussed, with special attention to the technologies of flash illumination and single-photon focal-plane-array detection.

  15. Psychometric Properties and Validity of a Multi-dimensional Risk Perception Scale Developed in the Context of a Microbicide Acceptability Study.

    PubMed

    Vargas, Sara E; Fava, Joseph L; Severy, Lawrence; Rosen, Rochelle K; Salomon, Liz; Shulman, Lawrence; Guthrie, Kate Morrow

    2016-02-01

    Currently available risk perception scales tend to focus on risk behaviors and overall risk (vs partner-specific risk). While these types of assessments may be useful in clinical contexts, they may be inadequate for understanding the relationship between sexual risk and motivations to engage in safer sex or one's willingness to use prevention products during a specific sexual encounter. We present the psychometric evaluation and validation of a scale that includes both general and specific dimensions of sexual risk perception. A one-time, audio computer-assisted self-interview was administered to 531 women aged 18-55 years. Items assessing sexual risk perceptions, both in general and in regard to a specific partner, were examined in the context of a larger study of willingness to use HIV/STD prevention products and preferences for specific product characteristics. Exploratory and confirmatory factor analyses yielded two subscales: general perceived risk and partner-specific perceived risk. Validity analyses demonstrated that the two subscales were related to many sociodemographic and relationship factors. We suggest that this risk perception scale may be useful in research settings where the outcomes of interest are related to motivations to use HIV and STD prevention products and/or product acceptability. Further, we provide specific guidance on how this risk perception scale might be utilized to understand such motivations with one or more specific partners. PMID:26621151

  16. Visualisation of synchronous firing in multi-dimensional spike trains.

    PubMed

    Stuart, L; Walter, M; Borisyuk, R

    2002-01-01

    The gravity transform algorithm is used to study the dependencies in firing of multi-dimensional spike trains. The pros and cons of this algorithm are discussed and the necessity for improved representation of output data is demonstrated. Parallel coordinates are introduced to visualise the results of the gravity transform and principal component analysis (PCA) is used to reduce the quantity of data represented whilst minimising loss of information. PMID:12459307
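As a hedged illustration of the dimensionality-reduction step (not the cited study's code; the binning scheme and names are assumptions), spike trains can be binned into count vectors and reduced with an SVD-based PCA:

```python
import numpy as np

def bin_spikes(spike_times, t_max, bin_width):
    """Convert a list of spike-time arrays into an (n_trains, n_bins) count matrix."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    return np.vstack([np.histogram(t, bins=edges)[0] for t in spike_times])

def pca_reduce(X, n_components=2):
    """Project the rows of X onto their leading principal components.
    Returns the projection and the explained-variance ratios."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

Trains that fire synchronously produce nearly identical count vectors and therefore land close together in the reduced space, which is the kind of structure the parallel-coordinate and PCA views aim to expose.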

  17. Multi-dimensional shock capturing

    NASA Astrophysics Data System (ADS)

    Slemrod, Marshall

    The author has worked on two aspects of multidimensional shock capturing. The first project has been a multifaceted effort to understand dynamic liquid-vapor interface propagation from a kinetic point of view. The phenomenon was modeled via a Boltzmann like cluster dynamics model. Clusters represent groupings of molecules of various cluster sizes which can collide elastically and inelastically. The inelastic collisions can produce coagulation of clusters or fragmentation of a cluster. A fluid made of only small cluster sizes would represent a dilute vapor while one containing very large cluster sizes would be a metastable supersaturated vapor. The model via various scaling limits gives sets of equations describing vapor flow in various transition regimes. Numerical experiments were performed modeling vapor to saturated vapor phase change encountered when a dilute vapor encounters a rigid wall.

  18. Global existence for the multi-dimensional compressible viscoelastic flows

    NASA Astrophysics Data System (ADS)

    Hu, Xianpeng; Wang, Dehua

    The global solutions in critical spaces to the multi-dimensional compressible viscoelastic flows are considered. The global existence of the Cauchy problem with initial data close to an equilibrium state is established in Besov spaces. Using uniform estimates for a hyperbolic-parabolic linear system with convection terms, we prove the global existence in the Besov space which is invariant with respect to the scaling of the associated equations. Several important estimates are achieved, including a smoothing effect on the velocity, and the L-decay of the density and deformation gradient.

  19. On Multi-Dimensional Unstructured Mesh Adaption

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    1999-01-01

    Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made versus the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Demonstration of both schemes to model problems, with features representative of compressible gas dynamics, show the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points to a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.

  20. Extended Darknet: Multi-Dimensional Internet Threat Monitoring System

    NASA Astrophysics Data System (ADS)

    Shimoda, Akihiro; Mori, Tatsuya; Goto, Shigeki

    Internet threats caused by botnets/worms are one of the most important security issues to be addressed. Darknet, also called a dark IP address space, is one of the best solutions for monitoring anomalous packets sent by malicious software. However, since darknet is deployed only on an inactive IP address space, it is an inefficient way for monitoring a working network that has a considerable number of active IP addresses. The present paper addresses this problem. We propose a scalable, light-weight malicious packet monitoring system based on a multi-dimensional IP/port analysis. Our system significantly extends the monitoring scope of darknet. In order to extend the capacity of darknet, our approach leverages the active IP address space without affecting legitimate traffic. Multi-dimensional monitoring enables the monitoring of TCP ports with firewalls enabled on each of the IP addresses. We focus on delays of TCP syn/ack responses in the traffic. We locate syn/ack delayed packets and forward them to sensors or honeypots for further analysis. We also propose a policy-based flow classification and forwarding mechanism and develop a prototype of a monitoring system that implements our proposed architecture. We deploy our system on a campus network and perform several experiments for the evaluation of our system. We verify that our system can cover 89% of the IP addresses while darknet-based monitoring only covers 46%. On our campus network, our system monitors twice as many IP addresses as darknet.

  1. The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Gaffney, Richard L., Jr.

    2007-01-01

    The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
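Two common one-dimensionalization choices, area-weighted and mass-flux-weighted averaging over a cross-flow plane, illustrate why the extracted properties depend on the method. The sketch below is illustrative and not taken from the paper:

```python
import numpy as np

def area_average(q, dA):
    """Area-weighted average of quantity q over cells of area dA."""
    return float((q * dA).sum() / dA.sum())

def mass_flux_average(q, rho, u, dA):
    """Mass-flux-weighted average: each cell weighted by its mass flow rho*u*dA."""
    mdot = rho * u * dA
    return float((q * mdot).sum() / mdot.sum())
```

For a non-uniform velocity profile the two averages disagree; with constant density, the mass-flux-weighted mean velocity is never below the area-weighted one (by the Cauchy-Schwarz inequality), which is exactly the kind of method-dependence the paper examines.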

  2. The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Gaffney, R. L.

    2007-01-01

    The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.

  3. A Brief Multi-Dimensional Children's Level-of-Functioning Tool.

    ERIC Educational Resources Information Center

    Srebnik, Debra

    This paper discusses the results of a study that investigated the validity and reliability of the Ecology Rating Scale (ERS). The ERS is a brief, multi-dimensional level-of-functioning instrument that can be rated by parents or clinicians. The ERS comprises seven domains of youth functioning: family, school, emotional, legal/justice, …

  4. Continuous Energy, Multi-Dimensional Transport Calculations for Problem Dependent Resonance Self-Shielding

    SciTech Connect

    T. Downar

    2009-03-31

    The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system.

  5. Vlasov multi-dimensional model dispersion relation

    SciTech Connect

    Lushnikov, Pavel M.; Rose, Harvey A.; Silantyev, Denis A.; Vladimirova, Natalia

    2014-07-15

    A hybrid model of the Vlasov equation in multiple spatial dimension D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.

  6. Methodological Issues in Developing a Multi-Dimensional Coding Procedure for Small-Group Chat Communication

    ERIC Educational Resources Information Center

    Strijbos, Jan-Willem; Stahl, Gerry

    2007-01-01

    In CSCL research, collaboration through chat has primarily been studied in dyadic settings. This article discusses three issues that emerged during the development of a multi-dimensional coding procedure for small-group chat communication: (a) the unit of analysis and unit fragmentation, (b) the reconstruction of the response structure and (c) …

  7. Multi-dimensional modelling of gas turbine combustion using a flame sheet model in KIVA II

    NASA Technical Reports Server (NTRS)

    Cheng, W. K.; Lai, M.-C.; Chue, T.-H.

    1991-01-01

    A flame sheet model for heat release is incorporated into a multi-dimensional fluid mechanical simulation for gas turbine application. The model assumes that the chemical reaction takes place in thin sheets compared to the length scale of mixing, which is valid for the primary combustion zone in a gas turbine combustor. In this paper, the details of the model are described and computational results are discussed.

  8. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides a direct measure of the model's predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters that reconcile test with analysis. The process is illustrated using simulated experiment data.
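One widely used multi-dimensional orthogonality check for comparing test and analysis shapes is the Modal Assurance Criterion (MAC). Whether this is the exact metric used in the cited work is an assumption, so treat the following as a generic sketch:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two shape sets (n_dof, n_modes each).
    Entry (i, j) is near 1 when shape i of A matches shape j of B,
    and near 0 when the shapes are orthogonal."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a ** 2, axis=0), np.sum(phi_b ** 2, axis=0))
    return num / den
```

A MAC matrix with a strong diagonal and weak off-diagonal terms indicates that the analysis shapes reproduce the test shapes in both spatial pattern and ordering.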

  9. High-value energy storage for the grid: a multi-dimensional look

    SciTech Connect

    Culver, Walter J.

    2010-12-15

    The conceptual attractiveness of energy storage in the electrical power grid has grown in recent years with Smart Grid initiatives. But cost is a problem, interwoven with the complexity of quantifying the benefits of energy storage. This analysis builds toward a multi-dimensional picture of storage that is offered as a step toward identifying and removing the gaps and "friction" that permeate the delivery chain from research laboratory to grid deployment. (author)

  10. Analysis of microvascular perfusion with multi-dimensional complete ensemble empirical mode decomposition with adaptive noise algorithm: Processing of laser speckle contrast images recorded in healthy subjects, at rest and during acetylcholine stimulation.

    PubMed

    Humeau-Heurtier, Anne; Marche, Pauline; Dubois, Severine; Mahe, Guillaume

    2015-08-01

Laser speckle contrast imaging (LSCI) is a full-field imaging modality to monitor microvascular blood flow. It is able to give images with high temporal and spatial resolutions. However, when the skin is studied, the interpretation of the bidimensional data may be difficult. This is why an averaging of the perfusion values in regions of interest is often performed and the result is followed in time, reducing the data to monodimensional time series. In order to avoid such a procedure (that leads to a loss of the spatial resolution), we propose to extract patterns from LSCI data and to compare these patterns for two physiological states in healthy subjects: at rest and at the peak of acetylcholine-induced perfusion. For this purpose, the recent multi-dimensional complete ensemble empirical mode decomposition with adaptive noise (MCEEMDAN) algorithm is applied to LSCI data. The results show that the intrinsic mode functions and residue given by MCEEMDAN show different patterns for the two physiological states. The images, as bidimensional data, can therefore be processed to reveal microvascular perfusion patterns hidden in the images themselves. This work is therefore a feasibility study before analyzing data in patients with microvascular dysfunctions. PMID:26737994

  11. The Multi-Dimensional Demands of Reading in the Disciplines

    ERIC Educational Resources Information Center

    Lee, Carol D.

    2014-01-01

    This commentary addresses the complexities of reading comprehension with an explicit focus on reading in the disciplines. The author proposes reading as entailing multi-dimensional demands of the reader and posing complex challenges for teachers. These challenges are intensified by restrictive conceptions of relevant prior knowledge and experience

  12. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  13. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  14. Dimensionality Reduction on Multi-Dimensional Transfer Functions for Multi-Channel Volume Data Sets

    PubMed Central

Kim, Han Suk; Schulze, Jürgen P.; Cone, Angela C.; Sosinsky, Gina E.; Martone, Maryann E.

    2011-01-01

The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, which requires multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our new method provides a framework to combine multiple computational approaches and pushes the boundary of gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the high-dimensional data of the domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our new dimensionality reduction algorithms with two volumetric confocal microscopy data sets. PMID:21841914
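As a rough illustration of the reduction step described above, the sketch below projects hypothetical per-voxel feature vectors (intensity, gradient, curvature, texture) onto three principal components with a plain SVD-based PCA; the paper's nonlinear methods (Isomap, LLE) would replace this linear step, and all data here are synthetic.

```python
import numpy as np

def pca_reduce(features, n_components=3):
    """Project per-voxel feature vectors onto their top principal
    components (a linear stand-in for Isomap/LLE from the paper)."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data matrix gives the principal axes in vt,
    # ordered by decreasing singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
# hypothetical per-voxel features: intensity, gradient magnitude,
# curvature, and texture measures for a two-channel data set
features = rng.normal(size=(5000, 8))
domain3d = pca_reduce(features)   # (5000, 3) transfer-function domain
```

The resulting three columns would serve as the axes of a displayable 3-D transfer-function domain.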

  15. Towards Semantic Web Services on Large, Multi-Dimensional Coverages

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2009-04-01

Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known as advantageous from databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy infinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but it does not provide semantic interoperability: a function is identified only by its function name and parameters, while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning à la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which was adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages.
It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard, which ties WCPS into WCS. Another protocol, which makes WCPS an OGC Web Processing Service (WPS) profile, is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages, where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an associated identifier, the CRSs supported and, for each range field (aka band, channel), the interpolation methods applicable. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ) The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing, but stops short of recursion, to maintain safe evaluation. As both the syntax and the semantics of any WCPS expression are well defined, the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready.
Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context
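The abstract's point that "clients can construct WCPS requests on the fly" can be illustrated with a small request builder that assembles the NDVI query shown above. The function below is a hypothetical helper, not part of any WCPS client library; the coverage names, the mask name, and the quoting of the format argument are assumptions.

```python
def ndvi_request(coverages, mask, fmt="HDF-EOS"):
    """Build the WCPS NDVI query from the abstract for a list of
    coverage identifiers and a binary mask coverage.

    Names and quoting conventions are illustrative; a real client
    would submit the string to a WCPS endpoint."""
    cov_list = ", ".join(coverages)
    return (
        f"for c in ( {cov_list} ), r in ( {mask} ) "
        f"return encode( (char) (c.nir - c.red) / (c.nir + c.red), \"{fmt}\" )"
    )

query = ndvi_request(["C1", "C2", "C3"], "R")
```

Because the language is declarative, the same generated request applies unchanged to coverages of any dimensionality.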

  16. Multi-Dimensional Perception of Parental Involvement

    ERIC Educational Resources Information Center

    Fisher, Yael

    2016-01-01

    The main purpose of this study was to define and conceptualize the term parental involvement. A questionnaire was administrated to parents (140), teachers (145), students (120) and high ranking civil servants in the Ministry of Education (30). Responses were analyzed through Smallest Space Analysis (SSA). The SSA solution among all groups rendered

  17. Advanced numerics for multi-dimensional fluid flow calculations

    SciTech Connect

    Vanka, S.P.

    1984-04-01

In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  18. Advanced numerics for multi-dimensional fluid flow calculations

    NASA Technical Reports Server (NTRS)

    Vanka, S. P.

    1984-01-01

In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  19. Efficient Subtorus Processor Allocation in a Multi-Dimensional Torus

    SciTech Connect

    Weizhen Mao; Jie Chen; William Watson

    2005-11-30

Processor allocation in a mesh or torus connected multicomputer system with up to three dimensions is a hard problem that has received some research attention in the past decade. With the recent deployment of multicomputer systems with a torus topology of more than three dimensions, which are used to solve complex problems arising in scientific computing, it has become pressing to study the problem of allocating sub-tori of processors in a multi-dimensional torus connected system. In this paper, we first define the concept of a semitorus. We present two partition schemes, the Equal Partition (EP) and the Non-Equal Partition (NEP), that partition a multi-dimensional semitorus into a set of sub-semitori. We then propose two processor allocation algorithms based on these partition schemes. We evaluate our algorithms by incorporating them in commonly used FCFS and backfilling scheduling policies and conducting simulation using workload traces from the Parallel Workloads Archive. Specifically, our simulation experiments compare four algorithm combinations, FCFS/EP, FCFS/NEP, backfilling/EP, and backfilling/NEP, for two existing multi-dimensional torus connected systems. The simulation results show that our algorithms (especially the backfilling/NEP combination) are capable of producing schedules with system utilization and mean job bounded slowdowns comparable to those in a fully connected multicomputer.
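The abstract does not spell out the Equal Partition scheme, but an equal partition of a torus can be sketched as splitting every dimension into k even segments, yielding k^d sub-tori. The function below is a simplified stand-in under that assumption (it requires each dimension size to be divisible by k), not the authors' actual EP algorithm.

```python
from itertools import product

def equal_partition(torus, k):
    """Split a multi-dimensional torus (tuple of dimension sizes) into
    k**d equal sub-tori; returns a list of (origin, size) pairs.

    Assumes every dimension size is divisible by k. A simplified
    stand-in for the paper's Equal Partition (EP) scheme."""
    steps = [dim // k for dim in torus]
    origins = product(*(range(0, dim, step)
                        for dim, step in zip(torus, steps)))
    return [(origin, tuple(steps)) for origin in origins]

# a 4-D 8x8x8x8 torus split into 2**4 = 16 sub-tori of 4x4x4x4
subtori = equal_partition((8, 8, 8, 8), 2)
```

An allocator could then match an incoming job's requested sub-torus shape against this free list of blocks.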

  20. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
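As a minimal illustration of the Monte Carlo approach to radiative transfer (gray-gas and 1-D, far simpler than the paper's correlated narrow-band treatment), the sketch below estimates slab transmittance by sampling photon absorption path lengths and compares the estimate with the Beer-Lambert result. All parameter values are illustrative.

```python
import math
import random

def mc_transmittance(kappa, length, n=200_000, seed=1):
    """Gray-gas Monte Carlo estimate of slab transmittance.

    Each photon bundle gets an absorption path length sampled from
    s = -ln(U) / kappa with U uniform on (0, 1]; bundles whose path
    exceeds the slab thickness escape. The escape fraction estimates
    the transmittance exp(-kappa * length)."""
    rng = random.Random(seed)
    escaped = sum(
        -math.log(1.0 - rng.random()) / kappa > length
        for _ in range(n)
    )
    return escaped / n

est = mc_transmittance(kappa=1.0, length=1.0)
exact = math.exp(-1.0)   # Beer-Lambert transmittance for comparison
```

The statistical error shrinks as 1/sqrt(n), which is why such methods extend naturally, if expensively, to multi-dimensional enclosures.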

  1. Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Kwak, Dochan (Technical Monitor)

    1995-01-01

    The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.

  2. Portable laser synthesizer for high-speed multi-dimensional spectroscopy

    DOEpatents

    Demos, Stavros G. (Livermore, CA); Shverdin, Miroslav Y. (Sunnyvale, CA); Shirk, Michael D. (Brentwood, CA)

    2012-05-29

    Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.

  3. Multifunction myoelectric control using multi-dimensional dynamic time warping.

    PubMed

    AbdelMaseeh, Meena; Tsu-Wei Chen; Stashuk, Daniel

    2014-01-01

Myoelectric control can be used for a variety of applications including powered prostheses and different human-computer interface systems. The aim of this study is to investigate the formulation of myoelectric control as a multi-class distance-based classification of multidimensional sequences. More specifically, we investigate (1) estimation of multi-muscle activation sequences from multi-channel electromyographic signals in an online manner, and (2) classification using a distance metric based on multi-dimensional dynamic time warping. Subject-specific results across 5 subjects executing 10 different hand movements showed an accuracy of 95% using offline extracted trajectories and an accuracy of 84% using online extracted trajectories. PMID:25570959
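The distance metric in step (2) can be sketched as a standard dynamic-programming DTW over vector-valued samples with a Euclidean local cost. This is the generic textbook formulation, not necessarily the authors' exact variant (which may add windowing or normalization).

```python
import numpy as np

def md_dtw(a, b):
    """Dynamic time warping distance between two multi-dimensional
    sequences a (n, d) and b (m, d), using Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A nearest-neighbor classifier over such distances assigns each test trajectory the movement class of its closest training template.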

  4. An architecture for multi-dimensional temporal abstraction and its application to support neonatal intensive care.

    PubMed

    Stacey, Michael; McGregor, Carolyn; Tracy, Mark

    2007-01-01

    Temporal abstraction (TA) provides the means to instil domain knowledge into data analysis processes and allows transformation of low level numeric data to high level qualitative narratives. TA mechanisms have been primarily applied to uni-dimensional data sources equating to single patients in the clinical context. This paper presents a framework for multi-dimensional TA (MDTA) enabling analysis of data emanating from numerous patients to detect multiple conditions within the environment of neonatal intensive care. Patient agents which perform temporal reasoning upon patient data streams are based on the Event Calculus and an active ontology provides a central knowledge core where rules are stored and agent responses accumulated, thus permitting a level of multi-dimensionality within data abstraction processes. Facilitation of TA across a ward of patients offers the potential for early detection of debilitating conditions such as Sepsis, Pneumothorax and Periventricular Leukomalacia (PVL), which have been shown to exhibit advance indicators in physiological data. Preliminary prototyping for patient agents has begun with promising results and a schema for the active rule repository outlined. PMID:18002814

  5. Flexible multi-dimensional modulation method for elastic optical networks

    NASA Astrophysics Data System (ADS)

    He, Zilong; Liu, Wentao; Shi, Sheping; Shen, Bailin; Chen, Xue; Gao, Xiqing; Zhang, Qi; Shang, Dongdong; Ji, Yongning; Liu, Yingfeng

    2016-01-01

    We demonstrate a flexible multi-dimensional modulation method for elastic optical networks. We compare the flexible multi-dimensional modulation formats PM-kSC-mQAM with traditional modulation formats PM-mQAM using numerical simulations in back-to-back and wavelength division multiplexed (WDM) transmission (50 GHz-spaced) scenarios at the same symbol rate of 32 Gbaud. The simulation results show that PM-kSC-QPSK and PM-kSC-16QAM can achieve obvious back-to-back sensitivity gain with respect to PM-QPSK and PM-16QAM at the expense of spectral efficiency reduction. And the WDM transmission simulation results show that PM-2SC-QPSK can achieve 57.5% increase in transmission reach compared to PM-QPSK, and 48.5% increase for PM-2SC-16QAM over PM-16QAM. Furthermore, we also experimentally investigate the back to back performance of PM-2SC-QPSK, PM-4SC-QPSK, PM-2SC-16QAM and PM-3SC-16QAM, and the experimental results agree well with the numerical simulations.

  6. Construction of Multi-Dimensional Periodic Complementary Array Sets

    NASA Astrophysics Data System (ADS)

    Zeng, Fanxin; Zhang, Zhenyu

Multi-dimensional (MD) periodic complementary array sets (CASs) with impulse-like MD periodic autocorrelation functions are a natural generalization of (one-dimensional) periodic complementary sequence sets, and such array sets are widely applied to communication, radar, sonar, coded aperture imaging, and so forth. In this letter, based on multi-dimensional perfect arrays (MD PAs), a method for constructing MD periodic CASs is presented, which is carried out by sampling MD PAs. It is particularly worth mentioning that the numbers and sizes of sub-arrays in the proposed MD periodic CASs can be freely changed within the range of possibilities. In particular, for arbitrarily given positive integers M and L, two-dimensional periodic polyphase CASs with M² sub-arrays of size L × L can be produced by the proposed method. Analogously, pseudo-random MD periodic CASs can be obtained when pseudo-random MD arrays are sampled. Finally, the validity of the proposed method is confirmed by an example.
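The defining impulse-like property of a periodic CAS can be checked numerically. The sketch below builds a small two-dimensional set from outer products of the Golay pair (1, 1), (1, -1) — a classical construction used here purely for illustration, not the letter's MD PA sampling method — and verifies that the summed periodic autocorrelations form an impulse.

```python
import numpy as np

def periodic_autocorr(x):
    """2-D periodic autocorrelation via the Wiener-Khinchin relation:
    the inverse FFT of the power spectrum."""
    return np.real(np.fft.ifft2(np.abs(np.fft.fft2(x)) ** 2))

a = np.array([1.0, 1.0])    # Golay complementary pair
b = np.array([1.0, -1.0])

# the four outer products form a 2-D periodic complementary array set
arrays = [np.outer(u, v) for u in (a, b) for v in (a, b)]
total = sum(periodic_autocorr(x) for x in arrays)
# impulse-like: all energy concentrated at the zero shift
```

Summed over the set, every nonzero cyclic shift cancels exactly, which is the CAS property the letter's construction guarantees at larger sizes.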

  7. Evaluation of steady noise from a multi-dimensional point of view

    NASA Astrophysics Data System (ADS)

    Takeshima, H.; Suzuki, Y.; Sone, T.

    1991-12-01

This paper describes the results of a study aimed at establishing a method of multi-dimensional noise evaluation. A description of the total negative impression of noise is obtained through subjective experiments with steady and almost steady sounds. The authors tried to determine Japanese descriptive adjectives independent of "ookii" (loud) in order to assess the total negative impression of noise. To do this, the authors introduced "degree of absolute noisiness" for evaluation of noise and measured it directly. Major conclusions are as follows: (1) at least two dimensions are needed to describe the total negative impression of noise; (2) one of the two dimensions is highly correlated with the Japanese adjective which means loud, and the other axis is correlated with the Japanese descriptive adjectives meaning harsh and unpleasant; (3) the scale "loud" correlates well with Zwicker's loudness and the scale "harsh" correlates highly with Bismarck's sharpness.

  8. Multi-Dimensional Damage Detection for Surfaces and Structures

    NASA Technical Reports Server (NTRS)

    Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah

    2013-01-01

Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, layers of two-dimensional thin film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and can include as many detection layers as are necessary for the structure construction or to afford the detection detail level required.
The damage is detected using a detector or sensory system, which may include a time domain reflectometer, resistivity monitoring hardware, or other resistance-based systems. To begin, a layered composite consisting of thin-film damage detection layers separated by non-damage detection layers is fabricated. The damage detection layers are attached to a detector that provides details regarding the physical health of each detection layer individually. If damage occurs to any of the detection layers, a change in the electrical properties of the detection layers damaged occurs, and a response is generated. Real-time analysis of these responses will provide details regarding the depth, location, and size estimation of the damage. Multiple damages can be detected, and the extent (depth) of the damage can be used to generate prognostic information related to the expected lifetime of the layered composite system. The detection system can be fabricated very easily using off-the-shelf equipment, and the detection algorithms can be written and updated (as needed) to provide the level of detail needed based on the system being monitored. Connecting to the thin film detection layers is very easy as well. The truly unique feature of the system is its flexibility; the system can be designed to gather as much (or as little) information as the end user feels necessary. Individual detection layers can be turned on or off as necessary, and algorithms can be used to optimize performance. The system can be used to generate both diagnostic and prognostic information related to the health of layer composite structures, which will be essential if such systems are utilized for space exploration. The technology is also applicable to other in-situ health monitoring systems for structure integrity.
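A toy version of the detection logic described above: each detection layer reports which grid traces have gone open-circuit, the row/column intersections localize the damage in-plane, and the deepest affected layer bounds the penetration depth. All names and the localization rule are illustrative assumptions, not the disclosure's algorithm.

```python
def damage_report(broken_traces):
    """Infer damage location and depth from broken grid traces.

    broken_traces: iterable of (layer_index, axis, trace_index) tuples,
    e.g. an open circuit on row 3 of layer 0 is (0, "row", 3).
    The intersection of broken rows and columns localizes the damage in
    each layer; the deepest affected layer gives the penetration depth."""
    per_layer = {}
    for layer, axis, idx in broken_traces:
        per_layer.setdefault(layer, {"row": [], "col": []})[axis].append(idx)
    footprint = {
        layer: [(r, c) for r in traces["row"] for c in traces["col"]]
        for layer, traces in per_layer.items()
    }
    depth = max(per_layer) + 1 if per_layer else 0   # layers penetrated
    return footprint, depth

# a strike that penetrates two of three detection layers at grid (3, 5)
hits = [(0, "row", 3), (0, "col", 5), (1, "row", 3), (1, "col", 5)]
footprint, depth = damage_report(hits)
```

Real hardware would feed this kind of logic from time-domain reflectometry or resistance monitoring rather than a tuple list, but the diagnostic output (in-plane footprint plus depth) is the same.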

  9. Multi-dimensional structure of accreting young stars

    NASA Astrophysics Data System (ADS)

    Geroux, C.; Baraffe, I.; Viallet, M.; Goffrey, T.; Pratt, J.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.

    2016-04-01

    This work is the first attempt to describe the multi-dimensional structure of accreting young stars based on fully compressible time implicit multi-dimensional hydrodynamics simulations. One major motivation is to analyse the validity of accretion treatment used in previous 1D stellar evolution studies. We analyse the effect of accretion on the structure of a realistic stellar model of the young Sun. Our work is inspired by the numerical work of Kley & Lin (1996, ApJ, 461, 933) devoted to the structure of the boundary layer in accretion disks, which provides the outer boundary conditions for our simulations. We analyse the redistribution of accreted material with a range of values of specific entropy relative to the bulk specific entropy of the material in the accreting object's convective envelope. Low specific entropy accreted material characterises the so-called cold accretion process, whereas high specific entropy is relevant to hot accretion. A primary goal is to understand whether and how accreted energy deposited onto a stellar surface is redistributed in the interior. This study focusses on the high accretion rates characteristic of FU Ori systems. We find that the highest entropy cases produce a distinctive behaviour in the mass redistribution, rms velocities, and enthalpy flux in the convective envelope. This change in behaviour is characterised by the formation of a hot layer on the surface of the accreting object, which tends to suppress convection in the envelope. We analyse the long-term effect of such a hot buffer zone on the structure and evolution of the accreting object with 1D stellar evolution calculations. We study the relevance of the assumption of redistribution of accreted energy into the stellar interior used in the literature. 
We compare results obtained with the latter treatment and those obtained with a more physical accretion boundary condition based on the formation of a hot surface layer suggested by present multi-dimensional simulations. One conclusion is that, for a given amount of accreted energy transferred to the accreting object, a treatment assuming accretion energy redistribution throughout the stellar interior could significantly overestimate the effects on the stellar structure and, in particular, on the resulting expansion.

  10. A Multi-Dimensional Instrument for Evaluating Taiwanese High School Students' Science Learning Self-Efficacy in Relation to Their Approaches to Learning Science

    ERIC Educational Resources Information Center

    Lin, Tzung-Jin; Tsai, Chin-Chung

    2013-01-01

    In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the…

  12. A Multi-Dimensional Classification Model for Scientific Workflow Characteristics

    SciTech Connect

    Ramakrishnan, Lavanya; Plale, Beth

    2010-04-05

Workflows have been used to model repeatable tasks or operations in manufacturing, business process, and software. In recent years, workflows are increasingly used for orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedical, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to the high-level understanding of scientific workflows.

  13. Multi-dimensional Longwave Forcing of Boundary Layer Cloud Systems

    SciTech Connect

    Mechem, David B.; Kogan, Y. L.; Ovtchinnikov, Mikhail; Davis, Anthony B; Evans, K. F.; Ellingson, Robert G.

    2008-12-20

    The importance of multi-dimensional (MD) longwave radiative effects on cloud dynamics is evaluated in a large eddy simulation (LES) framework employing multi-dimensional radiative transfer (the Spherical Harmonics Discrete Ordinate Method, SHDOM). Simulations are performed for a case of unbroken, marine boundary layer stratocumulus and for a broken field of trade cumulus. “Snapshot” calculations of MD and IPA (independent pixel approximation, 1D) radiative transfer applied to LES cloud fields show that the total radiative forcing changes only slightly, although the MD effects significantly modify the spatial structure of the radiative forcing. Interactive simulations of each cloud type employing MD and IPA radiative transfer, however, differ little. For the solid cloud case, the MD simulation exhibits a slight reduction in entrainment rate and boundary layer TKE relative to the IPA simulation. This reduction is consistent with both the slight decrease in net radiative forcing and a negative correlation between local vertical velocity and radiative forcing, which implies a damping of boundary layer eddies. Snapshot calculations for the broken cloud case suggest a slight increase in radiative cooling, though few systematic differences are noted in the interactive simulations. We attribute this result to the fact that radiative cooling is a relatively minor contribution to the total energetics. For the cloud systems in this study, the use of IPA longwave radiative transfer is sufficiently accurate to capture the dynamical behavior of BL clouds. Further investigations are required in order to generalize this conclusion to other cloud types and longer time integrations.

  14. On Multi-Dimensional Vocabulary Teaching Mode for College English Teaching

    ERIC Educational Resources Information Center

    Zhou, Li-na

    2010-01-01

    This paper analyses the major approaches in EFL (English as a Foreign Language) vocabulary teaching from historical perspective and puts forward multi-dimensional vocabulary teaching mode for college English. The author stresses that multi-dimensional approaches of communicative vocabulary teaching, lexical phrase teaching method, the grammar…

  15. Wildfire Detection Using a Multi-Dimensional Histogram in Boreal Forest

    NASA Astrophysics Data System (ADS)

    Honda, K.; Kimura, K.; Honma, T.

    2008-12-01

    Early detection of wildfires is important for reducing damage to the environment and humans. There have been several attempts to detect wildfires using satellite imagery, mainly classified into three methods: the Dozier method (1981-), the threshold method (1986-), and the contextual method (1994-). However, the accuracy of these methods is insufficient: the detected results include commission and omission errors. In addition, analyzing satellite imagery with high accuracy is difficult because of insufficient ground truth data. Kudoh and Hosoi (2003) developed a detection method using a three-dimensional (3D) histogram built from past fire data in NOAA-AVHRR imagery, but their method is impractical because it depends on handwork to pick out past fire data from a huge dataset. The purpose of this study is therefore to collect fire points as hot spots efficiently from satellite imagery and to improve the detection method with the collected data. We collect past fire data using the Alaska Fire History data from the Alaska Fire Service (AFS): we select points that are expected to be wildfires and keep those inside the fire areas in the AFS data. Next, we build a 3D histogram from the past fire data, using Bands 1, 21, and 32 of MODIS, and calculate a likelihood of wildfire from this histogram. Our results show that the 3D histogram selects wildfires effectively: we can detect a toroidally spreading wildfire, which is evidence of good detection. However, areas surrounding glaciers tend to show elevated brightness temperature and produce false alarms, as do burnt areas and bare ground, so the method needs further improvement. Additionally, we are trying various combinations of MODIS bands to detect wildfires more effectively. To adapt our method to other areas, we are applying it to tropical forest in Kalimantan, Indonesia, and around Chiang Mai, Thailand, but the ground truth data in these areas are sparser than in Alaska; our method needs many accurate observations to build a multi-dimensional histogram for a given area. In this study, we show a system that selects wildfire data efficiently from satellite imagery. Furthermore, building a multi-dimensional histogram from past fire data makes it possible to detect wildfires accurately.
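
The 3D-histogram scoring described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the bin count, value ranges, and the synthetic "past fire" cluster are all invented for the example, not MODIS calibration values.

```python
import random

def build_histogram(fire_pixels, bins=8, lo=0.0, hi=1.0):
    """Count past-fire pixels in a bins^3 cube over [lo, hi)^3 of (b1, b21, b32)."""
    width = (hi - lo) / bins
    hist = {}
    for b1, b21, b32 in fire_pixels:
        key = tuple(min(bins - 1, int((v - lo) / width)) for v in (b1, b21, b32))
        hist[key] = hist.get(key, 0) + 1
    return hist

def fire_likelihood(pixel, hist, bins=8, lo=0.0, hi=1.0):
    """Normalized bin count of the pixel's cell: a crude fire likelihood."""
    width = (hi - lo) / bins
    key = tuple(min(bins - 1, int((v - lo) / width)) for v in pixel)
    total = sum(hist.values())
    return hist.get(key, 0) / total if total else 0.0

random.seed(0)
# synthetic "past fire" cluster of band triples near (0.8, 0.9, 0.7)
fires = [(0.8 + random.uniform(-0.05, 0.05),
          0.9 + random.uniform(-0.05, 0.05),
          0.7 + random.uniform(-0.05, 0.05)) for _ in range(500)]
hist = build_histogram(fires)
print(fire_likelihood((0.8, 0.9, 0.7), hist))   # nonzero: inside the cluster
print(fire_likelihood((0.1, 0.2, 0.3), hist))   # 0.0: far from any past fire
```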

  16. Multi-Dimensional Analysis of Dynamic Human Information Interaction

    ERIC Educational Resources Information Center

    Park, Minsoo

    2013-01-01

    Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…

  17. Multi-Dimensional Dynamics of Human Electromagnetic Brain Activity

    PubMed Central

    Kida, Tetsuo; Tanaka, Emi; Kakigi, Ryusuke

    2016-01-01

    Magnetoencephalography (MEG) and electroencephalography (EEG) are invaluable neuroscientific tools for unveiling human neural dynamics in three dimensions (space, time, and frequency), which are associated with a wide variety of perceptions, cognition, and actions. MEG/EEG also provides different categories of neuronal indices including activity magnitude, connectivity, and network properties along the three dimensions. In the last 20 years, interest has increased in inter-regional connectivity and complex network properties assessed by various sophisticated scientific analyses. We herein review the definition, computation, short history, and pros and cons of connectivity and complex network (graph-theory) analyses applied to MEG/EEG signals. We briefly describe recent developments in source reconstruction algorithms essential for source-space connectivity and network analyses. Furthermore, we discuss a relatively novel approach used in MEG/EEG studies to examine the complex dynamics represented by human brain activity. The correct and effective use of these neuronal metrics provides a new insight into the multi-dimensional dynamics of the neural representations of various functions in the complex human brain. PMID:26834608
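
One widely used connectivity index of the kind reviewed above is the phase-locking value (PLV), the magnitude of the mean phase-difference phasor. The sketch below is generic: the phases are synthetic oscillator phases, not MEG/EEG recordings, and no Hilbert transform is applied.

```python
import numpy as np

def plv(phase1, phase2):
    """Phase-locking value: |mean of exp(i * phase difference)| in [0, 1]."""
    return float(np.abs(np.mean(np.exp(1j * (phase1 - phase2)))))

t = np.linspace(0.0, 1.0, 1000)
rng = np.random.default_rng(0)
ph_a = 2 * np.pi * 10 * t                        # 10 Hz oscillator
ph_b = ph_a + 0.3                                # constant lag: perfectly locked
ph_c = ph_a + rng.uniform(0, 2 * np.pi, t.size)  # random jitter: unlocked

print(round(plv(ph_a, ph_b), 3))                 # 1.0
print(round(plv(ph_a, ph_c), 3))                 # near 0
```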

  18. Multi-Dimensional Dynamics of Human Electromagnetic Brain Activity.

    PubMed

    Kida, Tetsuo; Tanaka, Emi; Kakigi, Ryusuke

    2015-01-01

    Magnetoencephalography (MEG) and electroencephalography (EEG) are invaluable neuroscientific tools for unveiling human neural dynamics in three dimensions (space, time, and frequency), which are associated with a wide variety of perceptions, cognition, and actions. MEG/EEG also provides different categories of neuronal indices including activity magnitude, connectivity, and network properties along the three dimensions. In the last 20 years, interest has increased in inter-regional connectivity and complex network properties assessed by various sophisticated scientific analyses. We herein review the definition, computation, short history, and pros and cons of connectivity and complex network (graph-theory) analyses applied to MEG/EEG signals. We briefly describe recent developments in source reconstruction algorithms essential for source-space connectivity and network analyses. Furthermore, we discuss a relatively novel approach used in MEG/EEG studies to examine the complex dynamics represented by human brain activity. The correct and effective use of these neuronal metrics provides a new insight into the multi-dimensional dynamics of the neural representations of various functions in the complex human brain. PMID:26834608

  19. Spiritual Competency Scale: Further Analysis

    ERIC Educational Resources Information Center

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC

  20. Development of a Scale Measuring Trait Anxiety in Physical Education

    ERIC Educational Resources Information Center

    Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos

    2012-01-01

    The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the…

  1. Development of a Scale Measuring Trait Anxiety in Physical Education

    ERIC Educational Resources Information Center

    Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos

    2012-01-01

    The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the

  2. Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.

    2014-01-01

    Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved movies. Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
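
A discovery query against a Simple Image Access v2-style service can be sketched as a parameterized URL. The endpoint below is hypothetical; only the shapes of the POS (a CIRCLE in degrees) and BAND (a wavelength interval in metres) parameters follow the protocol convention.

```python
from urllib.parse import urlencode

def sia2_query_url(base, ra_deg, dec_deg, radius_deg, band_m=None):
    """Build a SIAv2-style discovery query URL (endpoint is hypothetical)."""
    params = {"POS": f"CIRCLE {ra_deg} {dec_deg} {radius_deg}"}
    if band_m is not None:
        params["BAND"] = f"{band_m[0]} {band_m[1]}"   # wavelength range, metres
    return base + "?" + urlencode(params)

# e.g. a 0.1-degree cone around M51, restricted to a 21 cm band interval
url = sia2_query_url("https://example.org/sia2/query",
                     202.48, 47.23, 0.1, band_m=(0.21, 0.22))
print(url)
```

In a real workflow the returned VOTable would list matching cubes with their access URLs; here only the query construction is shown.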

  3. Comparative application and analysis from a one dimensional and a multi-dimensional routing scheme and its impact on process oriented hydrological modeling with the Jena Adaptable Modelling System (JAMS) and the integrated hydrological, nutrient transport and erosion modeling system J2000-S-E

    NASA Astrophysics Data System (ADS)

    Kipka, H.; Pfennig, B.; Fink, M.; Kralisch, S.; Krause, P.; Flügel, W.

    2010-12-01

    Fully spatially distributed hydrological modeling requires a topological linkage of individual modeling entities (e.g., Hydrological Response Units, HRUs) in order to reproduce the relevant attenuation and translation processes within the stream, but also during the transport of water as lateral surface or subsurface flow. Most often such linkage uses a one-dimensional (1D) approach, which links each modeling entity to only one receiver downstream in flow direction. Comparison with actual lateral water movement in catchments shows that such a 1D routing scheme is often too simple: it can overestimate runoff concentration along the 1D flow paths, while underestimating runoff in flow cascades that do not lie next to the main 1D flow paths, because the affected HRUs don't receive realistic inflow from the source entities above them. As a catchment-wide consequence, the 1D routing scheme can significantly over- or underestimate the contributing area for specific parts of a catchment, with important implications for the spatial distribution of accompanying processes such as soil moisture variation, soil erosion, or nutrient/contaminant transport. To address these problems, a new approach has been developed that allows a multi-dimensional linkage of model entities, such that each entity can have various receivers to which water is passed. This extended routing scheme was implemented in the hydrological, nutrient transport, and erosion modeling system J2000-S-E and was used to simulate the hydrological processes of a number of meso-scale catchments in Thuringia, Germany. This work will present the most important aspects of the extended routing scheme and the simulation results, along with a comparison to those obtained with the 1D linkage, and will highlight the impacts on hydrological process dynamics as well as on HRU-based mass transport and balancing.
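
The difference between the 1D and the multi-dimensional linkage can be illustrated with a toy single-step routing pass. This is not the J2000-S-E implementation: the HRU names, storages, and receiver weights are invented for the example.

```python
def route(storage, receivers):
    """One routing pass: each unit's outflow is split among weighted receivers."""
    inflow = {u: 0.0 for u in storage}
    for unit, out in storage.items():
        for recv, w in receivers.get(unit, []):
            inflow[recv] += out * w
    return inflow

storage = {"hru1": 10.0, "hru2": 5.0, "hru3": 0.0}

# 1D linkage: every HRU drains to exactly one receiver.
oneD = route(storage, {"hru1": [("hru3", 1.0)],
                       "hru2": [("hru3", 1.0)]})

# Multi-dimensional linkage: hru1 splits its outflow 60/40 by topographic weight.
multi = route(storage, {"hru1": [("hru3", 0.6), ("hru2", 0.4)],
                        "hru2": [("hru3", 1.0)]})

print(oneD["hru3"], multi["hru3"], multi["hru2"])   # 15.0 11.0 4.0
```

In the multi-dimensional case hru2 now receives realistic lateral inflow that the 1D scheme routed straight past it.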

  4. Chemistry and Transport in a Multi-Dimensional Model

    NASA Technical Reports Server (NTRS)

    Yung, Yuk L.

    2004-01-01

    Our work has two primary scientific goals: the interannual variability (IAV) of stratospheric ozone, and the hydrological cycle of the upper troposphere and lower stratosphere. Our efforts are aimed at integrating new information obtained by spacecraft and aircraft measurements to achieve a better understanding of the chemical and dynamical processes that are needed for realistic evaluations of human impact on the global environment. A primary motivation for studying the ozone layer is to separate anthropogenic perturbations of the ozone layer from natural variability. Using the recently available merged ozone data (MOD), we have carried out an empirical orthogonal function (EOF) study of the temporal and spatial patterns of the IAV of total column ozone in the tropics. The outstanding problem about water in the stratosphere is its secular increase over the last few decades. The Caltech/JPL multi-dimensional chemical transport model (CTM) is used to simulate the processes that control water vapor and its isotopic composition in the stratosphere. Datasets we will use for comparison with model results include those obtained by the Total Ozone Mapping Spectrometer (TOMS), the Solar Backscatter Ultraviolet instruments (SBUV and SBUV/2), the Stratospheric Aerosol and Gas Experiment (SAGE I and II), the Halogen Occultation Experiment (HALOE), the Atmospheric Trace Molecule Spectroscopy experiment (ATMOS), and those soon to be obtained by the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE) mission. The focus of the investigations is the exchange between the stratosphere and the troposphere, and between the troposphere and the biosphere.
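
The EOF analysis mentioned above amounts to a singular value decomposition of the anomaly (time x space) matrix, whose right singular vectors are the spatial EOFs. A generic sketch on synthetic data (not the merged ozone data) is:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)                            # 120 monthly time steps
pattern = np.sin(np.linspace(0, np.pi, 10))   # one dominant spatial mode
field = np.outer(np.sin(2 * np.pi * t / 12.0), pattern)  # seasonal signal
field += 0.05 * rng.standard_normal(field.shape)         # measurement noise

anom = field - field.mean(axis=0)             # remove the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)                # variance explained per EOF

print(round(float(var_frac[0]), 3))           # leading EOF dominates
```

The rows of `vt` are the spatial EOF patterns and the columns of `u * s` are the corresponding principal-component time series.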

  5. The Edinburgh Postnatal Depression Scale: Screening Tool for Postpartum Anxiety as Well? Findings from a Confirmatory Factor Analysis of the Hebrew Version.

    PubMed

    Bina, Rena; Harrington, Donna

    2016-04-01

    Objectives The Edinburgh Postnatal Depression Scale (EPDS) was originally created as a uni-dimensional scale to screen for postpartum depression (PPD); however, evidence from various studies suggests that it is a multi-dimensional scale measuring mainly anxiety in addition to depression. The factor structure of the EPDS seems to differ across various language translations, raising questions regarding its stability. This study examined the factor structure of the Hebrew version of the EPDS to assess whether it is uni- or multi-dimensional. Methods Seven hundred and fifteen (n = 715) women were screened at 6 weeks postpartum using the Hebrew version of the EPDS. Confirmatory factor analysis (CFA) was used to test four models derived from the literature. Results Of the four CFA models tested, a 9-item two factor model fit the data best, with one factor representing an underlying depression construct and the other representing an underlying anxiety construct. Conclusions for Practice The Hebrew version of the EPDS appears to consist of depression and anxiety sub-scales. Given the widespread PPD screening initiatives, anxiety symptoms should be addressed in addition to depressive symptoms, and a short scale, such as the EPDS, assessing both may be efficient. PMID:26649883

  6. All-optical multi-dimensional imaging of energy-materials beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Smith, Steve; Dagel, D. J.; Zhong, L.; Kolla, P.; Ding, S.-Y.

    2011-09-01

    Efficient, environmentally friendly harvesting, storage, transport, and conversion of energy are among the foremost challenges now facing mankind. An important facet of this challenge is the development of new materials with improved electronic and photonic properties. Nano-scale metrology will be important in developing these materials, and optical methods have many advantages over electrons or proximal probes. To surpass the diffraction limit, near-field methods can be used. Alternatively, the concept of imaging in a multi-dimensional space is employed, where, in addition to spatial dimensions, the added dimensions of energy and time allow one to distinguish objects that are closely spaced, in effect increasing the achievable resolution of optical microscopy towards the molecular level. We have applied these methods to the study of materials relevant to renewable energy processes. Specifically, we image the position and orientation of single carbohydrate-binding modules and visualize their interaction with cellulose at ~10 nm resolution, an important step in identifying the molecular underpinnings of bio-processing and the development of low-cost alternative fuels, and we describe our current work implementing these concepts to characterize the ultrafast carrier dynamics (~100 fs) in a new class of nano-structured solar cells, predicted to have theoretical efficiencies exceeding 60%, using femtosecond laser spectroscopy.

  7. All-optical multi-dimensional imaging of energy-materials beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Smith, Steve; Dagel, D. J.; Zhong, L.; Kolla, P.; Ding, S.-Y.

    2012-02-01

    Efficient, environmentally friendly harvesting, storage, transport, and conversion of energy are among the foremost challenges now facing mankind. An important facet of this challenge is the development of new materials with improved electronic and photonic properties. Nano-scale metrology will be important in developing these materials, and optical methods have many advantages over electrons or proximal probes. To surpass the diffraction limit, near-field methods can be used. Alternatively, the concept of imaging in a multi-dimensional space is employed, where, in addition to spatial dimensions, the added dimensions of energy and time allow one to distinguish objects that are closely spaced, in effect increasing the achievable resolution of optical microscopy towards the molecular level. We have applied these methods to the study of materials relevant to renewable energy processes. Specifically, we image the position and orientation of single carbohydrate-binding modules and visualize their interaction with cellulose at ~10 nm resolution, an important step in identifying the molecular underpinnings of bio-processing and the development of low-cost alternative fuels, and we describe our current work implementing these concepts to characterize the ultrafast carrier dynamics (~100 fs) in a new class of nano-structured solar cells, predicted to have theoretical efficiencies exceeding 60%, using femtosecond laser spectroscopy.

  8. Relaxation-time limit in the multi-dimensional bipolar nonisentropic Euler-Poisson systems

    NASA Astrophysics Data System (ADS)

    Li, Yeping; Zhou, Zhiming

    2015-05-01

    In this paper, we consider the multi-dimensional bipolar nonisentropic Euler-Poisson systems, which model various physical phenomena in semiconductor devices, plasmas, and channel proteins. We mainly study the relaxation-time limit of the initial value problem for the bipolar full Euler-Poisson equations with well-prepared initial data. Inspired by the Maxwell iteration, we construct different approximation states for the case ?? = 1 and ? = 1, respectively, and show that periodic initial-value problems of the correspondingly scaled bipolar nonisentropic Euler-Poisson systems have unique smooth solutions in the time interval where the classical energy-transport equation and the drift-diffusive equation have smooth solutions. Moreover, we also show that the smooth solutions converge to those of the energy-transport models at the rate of ?2 and to those of the drift-diffusive models at the rate of ?, respectively. The proof of these results is based on the continuation principle and error estimates.

  9. Multi-dimensional limiting for high-order schemes including turbulence and combustion

    NASA Astrophysics Data System (ADS)

    Gerlinger, Peter

    2012-03-01

    In the present paper a fourth/fifth-order upwind-biased limiting strategy is presented for the simulation of turbulent flows and combustion. Because high-order numerical schemes usually suffer from stability problems and TVD approaches often prevent convergence to machine accuracy, the multi-dimensional limiting process (MLP) [1] is employed. MLP uses information from the diagonal volumes of a discretization stencil. It interacts with the TVD limiter in such a way that local extrema at the corner points of the volume are avoided. This stabilizes the numerical scheme and enables convergence in cases where standard limiters fail to converge. Up to now MLP has been used for inviscid and laminar flows only. In the present paper this technique is applied to fully turbulent sub- and supersonic flows simulated with a low-Reynolds-number turbulence closure. Additionally, combustion based on finite-rate chemistry is investigated. An improved MLP version (MLP ld, low diffusion) as well as an analysis of its capabilities and limitations are given. It is demonstrated that the scheme offers high accuracy and robustness while keeping the computational cost low. Both steady and unsteady test cases are investigated.
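
The TVD building block that MLP extends can be sketched with a minmod-limited linear reconstruction, which keeps interface values between neighboring cell averages so no new local extrema appear. Only the 1D ingredient is shown; MLP additionally constrains the corner (diagonal) neighbours in several dimensions, which is not reproduced here.

```python
def minmod(a, b):
    """Pick the smaller-magnitude slope, or zero at an extremum (sign change)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_faces(u):
    """Minmod-limited (left, right) face values for each interior cell."""
    faces = []
    for i in range(1, len(u) - 1):
        slope = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        faces.append((u[i] - 0.5 * slope, u[i] + 0.5 * slope))
    return faces

u = [0.0, 0.0, 1.0, 1.0, 0.0]    # a step up, then a step down
faces = limited_faces(u)
# every reconstructed face value stays within the data range: TVD property
for left, right in faces:
    assert min(u) <= left <= max(u) and min(u) <= right <= max(u)
print(faces)
```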

  10. Scaling analysis of stock markets.

    PubMed

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series; LSDFA exposes more local properties through local scale exponents; DCCA quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis). The findings are that stock correlations are influenced by the economic systems of the different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in Western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China and the world after the crisis. The cross-correlations differ across the six stock markets, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis. PMID:24985421
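
The DFA procedure described above can be sketched compactly: integrate the demeaned series, split it into windows, remove a linear trend per window, and regress log F(n) on log n to obtain the scaling exponent alpha. This is a generic implementation run on synthetic white noise (alpha near 0.5), not the authors' stock-market code.

```python
import numpy as np

def dfa(x, windows):
    """Detrended fluctuation analysis; returns the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for n in windows:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)          # non-overlapping windows
        t = np.arange(n)
        sq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)        # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))          # fluctuation at scale n
    alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
    return float(alpha)

rng = np.random.default_rng(1)
alpha = dfa(rng.standard_normal(4096), [16, 32, 64, 128, 256])
print(round(alpha, 2))                          # near 0.5 for white noise
```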

  11. Scaling analysis of stock markets

    NASA Astrophysics Data System (ADS)

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series; LSDFA exposes more local properties through local scale exponents; DCCA quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis). The findings are that stock correlations are influenced by the economic systems of the different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in Western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China and the world after the crisis. The cross-correlations differ across the six stock markets, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.

  12. Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy

    PubMed Central

    2011-01-01

    Background Empathy towards patients is considered to be associated with improved health outcomes. Many scales have been developed to measure empathy in health care professionals and students. The Jefferson Scale of Physician Empathy (JSPE) has been widely used. This study was designed to examine the psychometric properties and the theoretical structure of the JSPE. Methods A total of 853 medical students responded to the JSPE questionnaire. A hypothetical model was evaluated by structural equation modelling to determine the adequacy of goodness-of-fit to sample data. Results The model showed excellent goodness-of-fit. Further analysis showed that the hypothesised three-factor model of the JSPE structure fits well across the gender differences of medical students. Conclusions The results supported scale multi-dimensionality. The 20-item JSPE provides a valid and reliable scale to measure empathy not only in undergraduate and graduate medical education programmes but also among practising doctors. The limitations of the study are discussed and some recommendations are made for future practice. PMID:21810268

  13. Towards Optimal Multi-Dimensional Query Processing with Bitmap Indices

    SciTech Connect

    Rotem, Doron; Stockinger, Kurt; Wu, Kesheng

    2005-09-30

    Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. This paper studies strategies for minimizing the access costs for processing multi-dimensional queries using bitmap indices with binning. Innovative features of our algorithm include (a) optimally placing the bin boundaries and (b) dynamically reordering the evaluation of the query terms. In addition, we derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
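
The binning trade-off the paper optimizes can be illustrated with a toy binned bitmap index: a range query ORs the bitmaps of fully covered bins for free, and performs "candidate checks" against the raw data only for the two edge bins, which is the I/O that good bin placement minimizes. All data and bin boundaries below are invented.

```python
def build_index(values, edges):
    """One bitmap (an int used as a bitset) per bin [edges[i], edges[i+1])."""
    bitmaps = [0] * (len(edges) - 1)
    for row, v in enumerate(values):
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                bitmaps[i] |= 1 << row
                break
    return bitmaps

def range_query(values, edges, bitmaps, lo, hi):
    """Rows with lo <= value < hi, using candidate checks only at edge bins."""
    result = 0
    for i in range(len(edges) - 1):
        if lo <= edges[i] and edges[i + 1] <= hi:
            result |= bitmaps[i]                 # bin fully inside: bitmap only
        elif edges[i] < hi and edges[i + 1] > lo:
            for row in range(len(values)):       # edge bin: check raw values
                if (bitmaps[i] >> row) & 1 and lo <= values[row] < hi:
                    result |= 1 << row
    return {r for r in range(len(values)) if (result >> r) & 1}

vals = [3, 7, 12, 18, 25, 31]
edges = [0, 10, 20, 30, 40]
bm = build_index(vals, edges)
print(range_query(vals, edges, bm, 5, 30))   # rows whose value lies in [5, 30)
```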

  14. Multi-dimensional high-order numerical schemes for Lagrangian hydrodynamics

    SciTech Connect

    Dai, William W; Woodward, Paul R

    2009-01-01

    An approximate solver for multi-dimensional Riemann problems at grid points of unstructured meshes, and a numerical scheme for multi-dimensional hydrodynamics, have been developed in this paper. The solver is simple and is developed only for use in numerical schemes for hydrodynamics. The scheme is truly multi-dimensional, is second-order accurate in both space and time, and satisfies conservation laws exactly for mass, momentum, and total energy. The scheme has been tested through numerical examples involving strong shocks. It has been shown that the scheme offers the principal advantages of high-order Godunov schemes: robust operation in the presence of very strong shocks and thin shock fronts.
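
The flavor of a conservative Godunov-type update can be shown on the simplest possible case. This is the textbook first-order 1D scheme for linear advection (speed a > 0), not the paper's multi-dimensional unstructured-mesh solver; it illustrates the two properties claimed above: upwind fluxes from the interface Riemann solution and exact discrete conservation.

```python
def godunov_step(u, a, dt, dx):
    """One conservative update u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2}), periodic."""
    n = len(u)
    # for a > 0 the Riemann solution at each interface is the left state
    flux = [a * u[(i - 1) % n] for i in range(n + 1)]
    return [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

u = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]          # a square pulse
u1 = godunov_step(u, a=1.0, dt=0.5, dx=1.0)  # CFL = 0.5
assert abs(sum(u1) - sum(u)) < 1e-12         # total mass exactly conserved
print(u1)
```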

  15. Multi-dimensional temporal abstraction and data mining of medical time series data: trends and challenges.

    PubMed

    Catley, Christina; Stratti, Heidi; McGregor, Carolyn

    2008-01-01

    This paper presents emerging trends in the area of temporal abstraction and data mining, as applied to multi-dimensional data. The clinical context is that of Neonatal Intensive Care, an acute care environment distinguished by multi-dimensional and high-frequency data. Six key trends are identified and classified into the following categories: (1) data; (2) results; (3) integration; and (4) knowledge base. These trends form the basis of next-generation knowledge discovery in data systems, which must address challenges associated with supporting multi-dimensional and real-world clinical data, as well as null hypothesis testing. Architectural drivers for frameworks that support data mining and temporal abstraction include: process-level integration (i.e. workflow order); synthesized knowledge bases for temporal abstraction which combine knowledge derived from both data mining and domain experts; and system-level integration. PMID:19163669

  16. Steps Toward a Large-Scale Solar Image Data Analysis to Differentiate Solar Phenomena

    NASA Astrophysics Data System (ADS)

    Banda, J. M.; Angryk, R. A.; Martens, P. C. H.

    2013-11-01

We detail the first application of several dissimilarity measures to large-scale solar image data analysis. Using a solar-domain-specific benchmark dataset that contains multiple types of phenomena, we analyzed combinations of image parameters with different dissimilarity measures to determine the combinations that allow us to differentiate between the multiple solar phenomena from both intra-class and inter-class perspectives, where a class corresponds to one type of solar phenomenon. We also investigate reducing data dimensionality by applying multi-dimensional scaling (MDS) to the dissimilarity matrices produced from these combinations, examining how many MDS components are needed to maintain a good representation of the data (in a new artificial data space) and how many can be discarded to improve querying performance. Finally, we present a comparative analysis of several classifiers to determine the quality of the dimensionality reduction achieved with this combination of image parameters, dissimilarity measures, and MDS.
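The classical (Torgerson) algorithm behind metric MDS — double-centering the squared dissimilarities and keeping the top eigenvectors — can be sketched in a few lines. This is a generic illustration of applying MDS to a dissimilarity matrix, not the authors' pipeline; the example data are hypothetical.

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical (Torgerson) multi-dimensional scaling.

    D: (n, n) symmetric matrix of pairwise dissimilarities.
    Returns an (n, n_components) embedding whose pairwise Euclidean
    distances approximate the entries of D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]            # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    vals = np.clip(vals[:n_components], 0.0, None)
    return vecs[:, :n_components] * np.sqrt(vals)

# three points on a line embed exactly in one MDS component
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, n_components=1)
```

The discarded eigenvalues measure how much structure is lost, which is the trade-off the abstract describes between representation quality and querying performance.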

  17. Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.

    SciTech Connect

    Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick

    2010-06-01

Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However, applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.

  18. Solving Multi-dimensional Evolution Problems with Localized Structures using Second Generation Wavelets

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.

    2003-03-01

A dynamically adaptive numerical method for solving multi-dimensional evolution problems with localized structures is developed. The method is based on the general class of multi-dimensional second-generation wavelets and is an extension of the second-generation wavelet collocation method of Vasilyev and Bowman to two and higher dimensions and irregular sampling intervals. Wavelet decomposition is used for grid adaptation and interpolation, while an O(N) hierarchical finite difference scheme, which takes advantage of the wavelet multilevel decomposition, is used for derivative calculations. The prowess and computational efficiency of the method are demonstrated for the solution of a number of two-dimensional test problems.
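The grid-adaptation idea can be illustrated in one dimension with the simplest interpolating (lifting-style) wavelet: the detail coefficient of a new dyadic point is its sampled value minus an interpolation from coarser points, and points with small details are dropped. This is a one-dimensional, linear-interpolation sketch of the principle, not the paper's multi-dimensional second-generation construction.

```python
import numpy as np

def adaptive_grid(f, levels=10, eps=1e-3):
    """Adapt a dyadic 1-D grid on [0, 1] to a function f.

    At level j, each odd-indexed point's detail coefficient is the
    difference between the sampled value and linear interpolation from
    its two even-indexed neighbors; points with |detail| < eps are dropped.
    """
    keep = {0.0, 1.0}                     # always keep boundary points
    for j in range(1, levels + 1):
        n = 2 ** j
        for k in range(1, n, 2):          # odd points are "new" at level j
            x = k / n
            xl, xr = (k - 1) / n, (k + 1) / n
            detail = f(x) - 0.5 * (f(xl) + f(xr))
            if abs(detail) >= eps:
                keep.add(x)
    return np.sort(np.array(list(keep)))

# a function with a localized structure: points should cluster near x = 0.5
f = lambda x: np.tanh(50 * (x - 0.5))
grid = adaptive_grid(f, levels=10, eps=1e-3)
```

The retained points concentrate in the sharp transition layer while the smooth regions are represented by a handful of coarse points, which is what makes the method's cost scale with the solution's local structure rather than the full uniform grid.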

  19. Mokken Scale Analysis Using Hierarchical Clustering Procedures

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries

    2004-01-01

    Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale

  20. A Multi-Dimensional Approach to Measuring News Media Literacy

    ERIC Educational Resources Information Center

    Vraga, Emily; Tully, Melissa; Kotcher, John E.; Smithson, Anne-Bennett; Broeckelman-Post, Melissa

    2015-01-01

    Measuring news media literacy is important in order for it to thrive in a variety of educational and civic contexts. This research builds on existing measures of news media literacy and two new scales are presented that measure self-perceived media literacy (SPML) and perceptions of the value of media literacy (VML). Research with a larger sample…

  1. Kullback-Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Boughton, Keith A.

    2011-01-01

    This paper first discusses the relationship between Kullback-Leibler information (KL) and Fisher information in the context of multi-dimensional item response theory and is further interpreted for the two-dimensional case, from a geometric perspective. This explication should allow for a better understanding of the various item selection methods
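For a dichotomous item, the KL information between two ability points is the KL divergence between the corresponding Bernoulli response distributions, and near the true ability it reduces to half the Fisher information times the squared step in the linear predictor. A small numpy sketch with hypothetical multi-dimensional 2PL item parameters (the values of `a` and `b` are illustrative, not from the paper):

```python
import numpy as np

def p_2pl(theta, a, b):
    """Multi-dimensional 2PL: P(correct) for ability vector theta."""
    return 1.0 / (1.0 + np.exp(-(a @ theta - b)))

def kl_item(theta0, theta1, a, b):
    """KL divergence of the item response at theta0 from that at theta1."""
    p0, p1 = p_2pl(theta0, a, b), p_2pl(theta1, a, b)
    return p0 * np.log(p0 / p1) + (1 - p0) * np.log((1 - p0) / (1 - p1))

def fisher_item(theta, a, b):
    """Fisher information per unit of the linear predictor a@theta - b."""
    p = p_2pl(theta, a, b)
    return p * (1 - p)

# two-dimensional item discriminating mostly along the first dimension
a = np.array([1.5, 0.3])
b = 0.0
theta0 = np.array([0.0, 0.0])
```

Locally, KL(theta0 || theta0 + delta) ≈ 0.5 * p(1-p) * (a·delta)^2, which is the KL–Fisher relationship the abstract interprets geometrically: information depends on the step direction only through its projection onto the discrimination vector a.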

  2. Evidencing Learning Outcomes: A Multi-Level, Multi-Dimensional Course Alignment Model

    ERIC Educational Resources Information Center

    Sridharan, Bhavani; Leitch, Shona; Watty, Kim

    2015-01-01

    This conceptual framework proposes a multi-level, multi-dimensional course alignment model to implement a contextualised constructive alignment of rubric design that authentically evidences and assesses learning outcomes. By embedding quality control mechanisms at each level for each dimension, this model facilitates the development of an aligned

  3. Impact of Malaysian Polytechnics' Head of Department Multi-Dimensional Leadership Orientation towards Lecturers Work Commitment

    ERIC Educational Resources Information Center

    Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd

    2012-01-01

    The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions

  4. Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter

    ERIC Educational Resources Information Center

    Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.

    2010-01-01

    We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students

  5. A combined discontinuous Galerkin and finite volume scheme for multi-dimensional VPFP system

    SciTech Connect

    Asadzadeh, M.; Bartoszek, K.

    2011-05-20

We construct a numerical scheme for the multi-dimensional Vlasov-Poisson-Fokker-Planck system based on a combined finite volume (FV) method for the Poisson equation in the spatial domain and streamline diffusion (SD) and discontinuous Galerkin (DG) finite element methods in the time and phase-space variables for the Vlasov-Fokker-Planck equation.

  7. Developing Multi-Dimensional Evaluation Criteria for English Learning Websites with University Students and Professors

    ERIC Educational Resources Information Center

    Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen

    2011-01-01

    Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These

  10. Stability of shock waves for multi-dimensional hyperbolic-parabolic conservation laws

    NASA Astrophysics Data System (ADS)

    Li, Dening

    1988-01-01

The uniform linear stability of shock waves is considered for quasilinear hyperbolic-parabolic coupled conservation laws in multi-dimensional space. As an example, the stability condition and its dynamic meaning for an isothermal shock wave in radiative hydrodynamics are analyzed.

  11. Magnetic quantum tunneling: key insights from multi-dimensional high-field EPR.

    PubMed

    Lawrence, J; Yang, E-C; Hendrickson, D N; Hill, S

    2009-08-21

Multi-dimensional high-field/frequency electron paramagnetic resonance (HFEPR) spectroscopy is performed on single-crystals of the high-symmetry spin S = 4 tetranuclear single-molecule magnet (SMM) [Ni(hmp)(dmb)Cl](4), where hmp(-) is the anion of 2-hydroxymethylpyridine and dmb is 3,3-dimethyl-1-butanol. Measurements performed as a function of the applied magnetic field strength and its orientation within the hard-plane reveal the four-fold behavior associated with the fourth order transverse zero-field splitting (ZFS) interaction, (1/2)B(4)(4)(S(+)(4) + S(-)(4)), within the framework of a rigid spin approximation (with S = 4). This ZFS interaction mixes the m(s) = +/-4 ground states in second order of perturbation, generating a sizeable (12 MHz) tunnel splitting, which explains the fast magnetic quantum tunneling in this SMM. Meanwhile, multi-frequency measurements performed with the field parallel to the easy-axis reveal HFEPR transitions associated with excited spin multiplets (S < 4). Analysis of the temperature dependence of the intensities of these transitions enables determination of the isotropic Heisenberg exchange constant, J = -6.0 cm(-1), which couples the four spin s = 1 Ni(II) ions within the cluster, as well as a characterization of the ZFS within excited states. The combined experimental studies support recent work indicating that the fourth order anisotropy associated with the S = 4 state originates from second order ZFS interactions associated with the individual Ni(II) centers, but only as a result of higher-order processes that occur via S-mixing between the ground state and higher-lying (S < 4) spin multiplets. We argue that this S-mixing plays an important role in the low-temperature quantum dynamics associated with many other well known SMMs. PMID:19639148
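The tunnel splitting described above can be reproduced by direct diagonalization of a giant-spin Hamiltonian H = D Sz^2 + (B44/2)(S+^4 + S-^4) for S = 4. The sketch below uses hypothetical parameter magnitudes (D and B44 are illustrative, not the fitted values from the paper); it shows how a small fourth-order transverse term splits the m = +/-4 ground doublet in second order via the intermediate m = 0 state.

```python
import numpy as np

def spin_ops(S):
    """Sz, S+, S- matrices in the |S, m> basis ordered m = S, ..., -S."""
    m = np.arange(S, -S - 1, -1, dtype=float)
    Sz = np.diag(m)
    # <m+1| S+ |m> = sqrt(S(S+1) - m(m+1))
    sp = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))
    Sp = np.diag(sp, 1)
    return Sz, Sp, Sp.T

S = 4
D, B44 = -0.60, 4.0e-4            # cm^-1; hypothetical magnitudes
Sz, Sp, Sm = spin_ops(S)
H = D * Sz @ Sz + 0.5 * B44 * (np.linalg.matrix_power(Sp, 4)
                               + np.linalg.matrix_power(Sm, 4))
E = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
tunnel_splitting = E[1] - E[0]     # splitting of the m = +/-4 ground doublet
```

With these illustrative values the computed splitting comes out in the 1e-4 cm^-1 range, i.e. tens of MHz in order of magnitude, while the next excited levels (the m = +/-3 states) lie several cm^-1 higher.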

  12. Assessment of the RELAP5 multi-dimensional component model using data from LOFT test L2-5

    SciTech Connect

    Davis, C.B.

    1998-01-01

The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the LOFT L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident. Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D Version BF02 computer code. The calculated thermal-hydraulic responses of the LOFT primary and secondary coolant systems were generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP5/MOD3.

  13. Multi-dimensional hydrocode analyses of penetrating hypervelocity impacts.

    SciTech Connect

    Saul, W. Venner; Reinhart, William Dodd; Thornhill, Tom Finley, III; Lawrence, Raymond Jeffery Jr.; Chhabildas, Lalit Chandra; Bessette, Gregory Carl

    2003-08-01

    The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.

  14. Hitchhiker's guide to multi-dimensional plant pathology.

    PubMed

    Saunders, Diane G O

    2015-02-01

    Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach plant pathologists can fully leverage these technical advances to answer key questions in plant pathology, aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis to study how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant–pathogen interactions from big data and only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources. PMID:25729800

  15. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    NASA Astrophysics Data System (ADS)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  16. Multi-dimensional hybrid Fourier continuation-WENO solvers for conservation laws

    NASA Astrophysics Data System (ADS)

    Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu

    2013-11-01

    We introduce a multi-dimensional point-wise multi-domain hybrid Fourier-Continuation/WENO technique (FC-WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, and essentially dispersionless, spectral, solution away from discontinuities, as well as mild CFL constraints for explicit time stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319-338]. We consider a WENO scheme of formal order nine and a FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plain, oblique entropy wave using the hybrid FC-WENO, the pure WENO and the hybrid central difference-WENO (CD-WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer-Meshkov instability (RMI), the hybrid solver results in seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using message passing interface as well as on Graphics Processing Units (GPUs) using CUDA programming language. 
The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi-domain strategy. For the same RMI test, the hybrid computation on a single GPU, in double precision arithmetic, displays a five- to six-fold speedup over the hybrid computation on a single CPU. The relative speedup of the hybrid computation over the WENO computation on GPUs is similar to that on CPUs, demonstrating the advantage of the hybrid technique on both CPUs and GPUs.

  17. Path Integral Approach for Multi-dimensional Polarons in a Symmetric Quantum Dot

    NASA Astrophysics Data System (ADS)

    Chen, Qing-hu; Ren, Yu-hang; Cao, Yi-gang; Jiao, Zheng-kuan

    1998-08-01

Within the framework of Feynman-Haken path-integral theory, the general expression of the ground-state energy for multi-dimensional polarons in symmetric quantum dots for arbitrary electron-phonon coupling constants is derived. Moreover, in the weak-coupling limit, the previous results by the second-order Rayleigh-Schrödinger perturbation theory are completely recovered. More interestingly, the extended- and localized-state solutions to the ground-state energy are obtained analytically.

  18. Minimizing I/O Costs of Multi-Dimensional Queries with BitmapIndices

    SciTech Connect

    Rotem, Doron; Stockinger, Kurt; Wu, Kesheng

    2006-03-30

Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high-cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for each bin rather than a bitmap for each value of the attribute. Binning reduces storage costs; however, results of queries based on bins often require additional filtering to discard false positives, i.e., records in the result that do not satisfy the query constraints. This additional filtering, also known as "candidate checking," requires access to the base data on disk and involves significant I/O costs. This paper studies strategies for minimizing the I/O costs of "candidate checking" for multi-dimensional queries. This is done by determining the number of bins allocated for each dimension and then placing bin boundaries in optimal locations. Our algorithms use knowledge of data distribution and query workload. We derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real-life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
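The binning-plus-candidate-checking mechanism can be shown with a toy one-dimensional index: bins fully inside the query range are answered from the bitmaps alone, while the two edge bins require verifying each candidate against the base data. This is an illustrative sketch of the general technique, not the paper's optimal bin-placement algorithm.

```python
import numpy as np

def build_binned_bitmaps(values, edges):
    """One boolean bitmap per bin; bin b covers [edges[b], edges[b+1])."""
    idx = np.searchsorted(edges, values, side="right") - 1
    return np.stack([idx == b for b in range(len(edges) - 1)])

def range_query(values, bitmaps, edges, lo, hi):
    """Answer lo <= v < hi; base data is read only for edge-bin candidates."""
    first = np.searchsorted(edges, lo, side="right") - 1
    last = min(np.searchsorted(edges, hi, side="right") - 1, len(edges) - 2)
    hits = np.zeros(len(values), dtype=bool)
    checks = 0                                  # candidate checks: the I/O cost proxy
    for b in range(first, last + 1):
        if edges[b] >= lo and edges[b + 1] <= hi:
            hits |= bitmaps[b]                  # bin fully inside: bitmap only
        else:
            cand = bitmaps[b]                   # edge bin: verify candidates
            checks += int(cand.sum())
            hits |= cand & (values >= lo) & (values < hi)
    return hits, checks

rng = np.random.default_rng(0)
values = rng.random(1000)
edges = np.linspace(0.0, 1.0, 5)                # 4 equi-width bins
bitmaps = build_binned_bitmaps(values, edges)
hits, checks = range_query(values, bitmaps, edges, 0.1, 0.8)
```

Moving the bin boundaries changes only `checks`, not `hits`, which is exactly the degree of freedom the paper's optimization exploits.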

  19. Multi-dimensional Reduction and Transfer Function Design using Parallel Coordinates

    PubMed Central

    Zhao, X.; Kaufman, A.

    2010-01-01

    Multi-dimensional transfer functions are widely used to provide appropriate data classification for direct volume rendering. Nevertheless, the design of a multi-dimensional transfer function is a complicated task. In this paper, we propose to use parallel coordinates, a powerful tool to visualize high-dimensional geometry and analyze multivariate data, for multi-dimensional transfer function design. This approach has two major advantages: (1) Combining the information of spatial space (voxel position) and parameter space; (2) Selecting appropriate high-dimensional parameters to obtain sophisticated data classification. Although parallel coordinates offers simple interface for the user to design the high-dimensional transfer function, some extra work such as sorting the coordinates is inevitable. Therefore, we use a local linear embedding technique for dimension reduction to reduce the burdensome calculations in the high dimensional parameter space and to represent the transfer function concisely. With the aid of parallel coordinates, we propose some novel high-dimensional transfer function widgets for better visualization results. We demonstrate the capability of our parallel coordinates based transfer function (PCbTF) design method for direct volume rendering using CT and MRI datasets. PMID:26278929
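The local linear embedding step mentioned in the abstract can be sketched from scratch: reconstruct each point from its nearest neighbors, then take the bottom nontrivial eigenvectors of (I - W)^T (I - W). This is the standard Roweis-Saul LLE recipe as a generic illustration, not the paper's transfer-function pipeline; the cylinder-shaped test data are an assumption.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Locally linear embedding of the rows of X into n_components dims."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                       # centered neighbors
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                 # reconstruction weights
    eye = np.eye(n)
    M = (eye - W).T @ (eye - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]              # skip the constant eigenvector

# points on a cylinder segment in 3-D: intrinsically two-dimensional
t = np.linspace(0.0, 3.0, 60)
h = np.linspace(0.0, 1.0, 5)
T, Hh = np.meshgrid(t, h)
X = np.c_[np.cos(T).ravel(), np.sin(T).ravel(), Hh.ravel()]
Y = lle(X, n_neighbors=8, n_components=2)
```

In the paper's setting, X would be the high-dimensional per-voxel parameter vectors and Y the compact coordinates shown on the parallel-coordinates axes.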

  20. Higher symmetries of cotangent coverings for Lax-integrable multi-dimensional partial differential equations and Lagrangian deformations

    NASA Astrophysics Data System (ADS)

Baran, H.; Krasil'shchik, I. S.; Morozov, O. I.; Vojčák, P.

    2014-03-01

    We present examples of Lax-integrable multi-dimensional systems of partial differential equations with higher local symmetries. We also consider Lagrangian deformations of these equations and construct variational bivectors on them.

  1. Genuinely multi-dimensional explicit and implicit generalized Shapiro filters for weather forecasting, computational fluid dynamics and aeroacoustics

    NASA Astrophysics Data System (ADS)

    Falissard, F.

    2013-11-01

    This paper addresses the extension of one-dimensional filters in two and three space dimensions. A new multi-dimensional extension is proposed for explicit and implicit generalized Shapiro filters. We introduce a definition of explicit and implicit generalized Shapiro filters that leads to very simple formulas for the analyses in two and three space dimensions. We show that many filters used for weather forecasting, high-order aerodynamic and aeroacoustic computations match the proposed definition. Consequently the new multi-dimensional extension can be easily implemented in existing solvers. The new multi-dimensional extension and the two commonly used methods are compared in terms of compactness, robustness, accuracy and computational cost. Benefits of the genuinely multi-dimensional extension are assessed for various computations using the compressible Euler equations.
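A second-order Shapiro filter in one dimension exactly removes the two-grid-point wave while barely touching well-resolved scales; the common dimension-by-dimension extension applies it along each axis in turn. The sketch below shows that extension only (the paper's contribution is a genuinely multi-dimensional alternative, which is not reproduced here):

```python
import numpy as np

def shapiro1d(u, axis):
    """Second-order Shapiro filter along one axis (periodic boundaries).

    Fourier symbol cos^2(theta/2): the 2*dx wave (theta = pi) is removed
    exactly; long waves are damped only weakly.
    """
    up = np.roll(u, -1, axis=axis)
    um = np.roll(u, 1, axis=axis)
    return u + 0.25 * (um - 2.0 * u + up)

def shapiro2d(u):
    """Dimension-by-dimension extension: filter along x, then along y."""
    return shapiro1d(shapiro1d(u, axis=0), axis=1)

# smooth field plus a checkerboard (2*dx in both directions) mode
nx = ny = 32
i = np.arange(nx)
smooth = np.sin(2 * np.pi * i[:, None] / nx) * np.cos(2 * np.pi * i[None, :] / ny)
checker = (-1.0) ** (i[:, None] + i[None, :])
u = smooth + 0.5 * checker
filtered = shapiro2d(u)
```

The checkerboard component is annihilated exactly, while the smooth mode is only scaled by cos^2(pi/nx) * cos^2(pi/ny), close to unity; comparing such tensor-product behavior against a genuinely multi-dimensional stencil is precisely the kind of analysis the paper carries out.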

  2. The INTERGROWTH-21st Project Neurodevelopment Package: A Novel Method for the Multi-Dimensional Assessment of Neurodevelopment in Pre-School Age Children

    PubMed Central

    Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose

    2014-01-01

    Background The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589

  3. Convergence of an Engquist-Osher scheme for a multi-dimensional triangular system of conservation laws

    NASA Astrophysics Data System (ADS)

    Coclite, G. M.; Mishra, S.; Risebro, N. H.

    2010-01-01

    We consider a multi-dimensional triangular system of conservation laws. This system arises as a model of three-phase flow in porous media and includes multi-dimensional conservation laws with discontinuous coefficients as a special case. The system is neither strictly hyperbolic nor symmetric. We propose an Engquist-Osher type scheme for this system and show that the approximate solutions generated by the scheme converge to a weak solution. Numerical examples are also presented.
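The Engquist-Osher flux splits f into increasing and decreasing parts, f = f+ + f-, integrating max(f',0) and min(f',0). A one-dimensional scalar sketch for Burgers' equation illustrates the flux the multi-dimensional scheme builds on (this is a generic EO demonstration, not the paper's triangular system):

```python
import numpy as np

def eo_flux_burgers(a, b):
    """Engquist-Osher flux for f(u) = u^2/2: F(a, b) = f+(a) + f-(b)."""
    fplus = np.where(a > 0.0, 0.5 * a * a, 0.0)    # integral of max(f', 0)
    fminus = np.where(b < 0.0, 0.5 * b * b, 0.0)   # integral of min(f', 0)
    return fplus + fminus

def step(u, dx, dt):
    """One forward-Euler step of the first-order EO scheme (periodic)."""
    F = eo_flux_burgers(u, np.roll(u, -1))         # F_{i+1/2}
    return u - dt / dx * (F - np.roll(F, 1))       # flux difference

# Riemann problem u_l = 1, u_r = 0: a shock moving at speed 1/2
n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.where(x < 0.25, 1.0, 0.0)
dt = 0.4 * dx                                      # CFL with max |f'| = 1
for _ in range(int(round(0.25 / dt))):
    u = step(u, dx, dt)
```

By t = 0.25 the shock has moved from x = 0.25 to about x = 0.375 while total mass is conserved exactly, the two properties (monotone flux, discrete conservation) that underpin the convergence proof in the abstract.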

  4. Scale-PC shielding analysis sequences

    SciTech Connect

    Bowman, S.M.

    1996-05-01

The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on an MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.

  5. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  6. [Multi-dimensional structure quality control over Salvia miltiorrhiza injection based on component structure theory].

    PubMed

    Hu, Shao-Ying; Feng, Liang; Zhang, Ming-Hua; Gu, Jun-Fei; Jia, Xiao-Bin

    2013-12-01

As the preparation process from Salvia miltiorrhiza herbs to S. miltiorrhiza injection involves complicated technology and has relatively many factors impacting quality and safety, overall quality control is required for its effectiveness and safety. On the basis of the component structure theory, and according to the material basis of S. miltiorrhiza injection, we discussed the multi-dimensional structure and process dynamic quality control technology system of the preparation, in order to achieve quality control over the material basis with safety and effectiveness of S. miltiorrhiza injection, and to provide new ideas and methods for production quality standardization of S. miltiorrhiza injection. PMID:24791548

  7. Investigation of multi-dimensional computational models for calculating pollutant transport

    SciTech Connect

    Pepper, D W; Cooper, R E; Baker, A J

    1980-01-01

A performance study of five numerical solution algorithms for multi-dimensional advection-diffusion prediction on mesoscale grids was made. Test problems include transport of point and distributed sources, and a simulation of a continuous source. In all cases, analytical solutions are available to assess relative accuracy. The particle-in-cell and second-moment algorithms, both of which employ sub-grid resolution coupled with Lagrangian advection, exhibit superior accuracy in modeling a point source release. For modeling of a distributed source, algorithms based upon the pseudospectral and finite element interpolation concepts exhibit improved accuracy on practical discretizations.

  8. 2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2001-01-01

A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigenvalue limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1 to 17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.

  9. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    SciTech Connect

    Fawley, William M.

    2002-03-25

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser(FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
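The shot-noise level that such a loading algorithm must reproduce can be illustrated with a toy scheme: for N macro-electrons with independent, uniformly distributed phases, the expected squared bunching factor is 1/N. This is a minimal sketch of the statistical target only, not GINGER's actual loading algorithm, which additionally enforces correct statistics across multiple dimensions and higher harmonics.

```python
import cmath
import math
import random

def load_shot_noise(n_electrons, rng):
    """Assign each macro-electron an independent uniform phase in
    [0, 2*pi); a toy stand-in for a proper shot-noise loader."""
    return [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_electrons)]

def bunching(phases, harmonic=1):
    """Harmonic bunching factor b_h = (1/N) * sum_j exp(-i*h*theta_j)."""
    n = len(phases)
    return sum(cmath.exp(-1j * harmonic * t) for t in phases) / n

rng = random.Random(0)  # fixed seed for reproducibility
n = 1000
trials = [abs(bunching(load_shot_noise(n, rng))) ** 2 for _ in range(200)]
mean_b2 = sum(trials) / len(trials)
# For uncorrelated phases, <|b|^2> approaches 1/N (the shot-noise level).
```

Averaged over many independent loads, mean_b2 should sit near 1/N; a loader that suppresses or inflates this level would misprecict the spontaneous emission used as a correctness test in the abstract.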

  10. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  11. Polarized Line Formation in Multi-dimensional Media. V. Effects of Angle-dependent Partial Frequency Redistribution

    NASA Astrophysics Data System (ADS)

    Anusha, L. S.; Nagendra, K. N.

    2012-02-01

    The solution of the polarized radiative transfer equation with angle-dependent (AD) partial frequency redistribution (PRD) is a challenging problem. Modeling the observed, linearly polarized strong resonance lines in the solar spectrum often requires the solution of the AD line transfer problems in one-dimensional or multi-dimensional (multi-D) geometries. The purpose of this paper is to develop an understanding of the relative importance of the AD PRD effects and the multi-D transfer effects and particularly their combined influence on the line polarization. This would help in a quantitative analysis of the second solar spectrum (the linearly polarized spectrum of the Sun). We consider both non-magnetic and magnetic media. In this paper we reduce the Stokes vector transfer equation to a simpler form using a Fourier decomposition technique for multi-D media. A fast numerical method is also devised to solve the concerned multi-D transfer problem. The numerical results are presented for a two-dimensional medium with a moderate optical thickness (effectively thin) and are computed for a collisionless frequency redistribution. We show that the AD PRD effects are significant and cannot be ignored in a quantitative fine analysis of the line polarization. These effects are accentuated by the finite dimensionality of the medium (multi-D transfer). The presence of magnetic fields (Hanle effect) modifies the impact of these two effects to a considerable extent.

  12. A lock-free priority queue design based on multi-dimensional linked lists

    DOE PAGESBeta

    Dechev, Damian; Zhang, Deli

    2015-04-03

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(logN) for key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over the state of the art approaches under high concurrency.
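The key-to-coordinates mapping described above can be sketched as a fixed-base digit decomposition: with a key universe of size base**dims, each key maps injectively to a dims-length coordinate vector whose lexicographic order matches the numeric order of the keys. This is an illustrative reading of the MDList idea; the dimension count and base below are hypothetical parameters, not taken from the paper.

```python
def key_to_coords(key, dims=8, base=4):
    """Injectively map a scalar key to a `dims`-dimensional
    coordinate vector via base-`base` digit decomposition.
    Covers a key universe of size base**dims."""
    assert 0 <= key < base ** dims
    coords = []
    for _ in range(dims):
        coords.append(key % base)
        key //= base
    coords.reverse()  # most-significant digit first
    return coords

# With most-significant digit first and fixed length, comparing
# coordinate vectors lexicographically matches comparing the keys,
# which is what lets MDList order nodes by coordinate prefix.
assert key_to_coords(5, dims=3, base=4) == [0, 1, 1]
```

Because the ordering is determined entirely by this deterministic mapping, an insertion only needs to walk the links dimension by dimension to its unique position, with no rebalancing or randomization.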

  14. Multi-dimensional respiratory motion tracking from markerless optical surface imaging based on deformable mesh registration

    NASA Astrophysics Data System (ADS)

    Schaerer, Joël; Fassi, Aurora; Riboldi, Marco; Cerveri, Pietro; Baroni, Guido; Sarrut, David

    2012-01-01

    Real-time optical surface imaging systems offer a non-invasive way to monitor intra-fraction motion of a patient's thorax surface during radiotherapy treatments. Due to lack of point correspondence in dynamic surface acquisition, such systems cannot currently provide 3D motion tracking at specific surface landmarks, as available in optical technologies based on passive markers. We propose to apply deformable mesh registration to extract surface point trajectories from markerless optical imaging, thus yielding multi-dimensional breathing traces. The investigated approach is based on a non-rigid extension of the iterative closest point algorithm, using a locally affine regularization. The accuracy in tracking breathing motion was quantified in a group of healthy volunteers, by pair-wise registering the thoraco-abdominal surfaces acquired at three different respiratory phases using a clinically available optical system. The motion tracking accuracy proved to be maximal in the abdominal region, where breathing motion mostly occurs, with average errors of 1.09 mm. The results demonstrate the feasibility of recovering multi-dimensional breathing motion from markerless optical surface acquisitions by using the implemented deformable registration algorithm. The approach can potentially improve respiratory motion management in radiation therapy, including motion artefact reduction or tumour motion compensation by means of internal/external correlation models.
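At the core of the non-rigid iterative-closest-point extension is a correspondence search between the two surface meshes. The sketch below shows only that matching step (brute-force nearest neighbour between vertex lists); the paper's method adds a locally affine regularizer on top of such correspondences, which this toy omits.

```python
import math

def closest_point_pairs(source, target):
    """One matching step of an iterative-closest-point scheme:
    pair each source vertex with its nearest target vertex
    (brute force over 3D point tuples)."""
    pairs = []
    for p in source:
        q = min(target, key=lambda t: math.dist(p, t))
        pairs.append((p, q))
    return pairs
```

In a full registration loop, these pairs would drive a regularized deformation of the source mesh, and the match/deform steps would repeat until the correspondences stabilize, yielding per-vertex breathing trajectories.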

  15. Multi-Dimensional Validation Impact Tests on PZT 95/5 and ALOX

    NASA Astrophysics Data System (ADS)

    Furnish, M. D.; Robbins, J.; Trott, W. M.; Chhabildas, L. C.; Lawrence, R. J.; Montgomery, S. T.

    2002-07-01

    Multi-dimensional impact tests were conducted on the ferroelectric ceramic PZT 95/5 and alumina-loaded epoxy (ALOX) encapsulants, with the purpose of providing benchmarks for material models in the ALEGRA wavecode. Diagnostics used included line-imaging VISAR (velocity interferometry), a key diagnostic for such tests. Results from four tests conducted with ALOX cylinders impacted by nonplanar copper projectiles were compared with ALEGRA simulations. The simulation produced approximately correct attenuations and divergence, but somewhat higher wave velocities. Several sets of tests conducted using PZT rods (length:diameter ratio = 5:1) encapsulated in ALOX, and diagnosed with line-imaging and point VISAR, were modeled as well. Significant improvement in wave arrival time and waveform agreement for the two-material multi-dimensional experiments was achieved by simultaneous multiple-parameter optimization on multiple one-dimensional experiments. Additionally, a variable friction interface was studied in these calculations. We conclude that further parameter optimization is required for both material models.

  16. Identifying associations between pig pathologies using a multi-dimensional machine learning methodology

    PubMed Central

    2012-01-01

    Background Abattoir detected pathologies are of crucial importance to both pig production and food safety. Usually, more than one pathology coexist in a pig herd although it often remains unknown how these different pathologies interrelate to each other. Identification of the associations between different pathologies may facilitate an improved understanding of their underlying biological linkage, and support the veterinarians in encouraging control strategies aimed at reducing the prevalence of not just one, but two or more conditions simultaneously. Results Multi-dimensional machine learning methodology was used to identify associations between ten typical pathologies in 6485 batches of slaughtered finishing pigs, assisting the comprehension of their biological association. Pathologies potentially associated with septicaemia (e.g. pericarditis, peritonitis) appear interrelated, suggesting on-going bacterial challenges by pathogens such as Haemophilus parasuis and Streptococcus suis. Furthermore, hepatic scarring appears interrelated with both milk spot livers (Ascaris suum) and bacteria-related pathologies, suggesting a potential multi-pathogen nature for this pathology. Conclusions The application of novel multi-dimensional machine learning methodology provided new insights into how typical pig pathologies are potentially interrelated at batch level. The methodology presented is a powerful exploratory tool to generate hypotheses, applicable to a wide range of studies in veterinary research. PMID:22937883

  17. Ion-acoustic solitary waves and their multi-dimensional instability in a magnetized degenerate plasma

    SciTech Connect

    Haider, M. M.; Mamun, A. A.

    2012-10-15

    A rigorous theoretical investigation has been made on Zakharov-Kuznetsov (ZK) equation of ion-acoustic (IA) solitary waves (SWs) and their multi-dimensional instability in a magnetized degenerate plasma which consists of inertialess electrons, inertial ions, negatively, and positively charged stationary heavy ions. The ZK equation is derived by the reductive perturbation method, and multi-dimensional instability of these solitary structures is also studied by the small-k (long wave-length plane wave) perturbation expansion technique. The effects of the external magnetic field are found to significantly modify the basic properties of small but finite-amplitude IA SWs. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable IA SWs. The basic features (viz., amplitude, width, instability, etc.) and the underlying physics of the IA SWs, which are relevant to space and laboratory plasma situations, are briefly discussed.
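For reference, the Zakharov-Kuznetsov equation obtained by reductive perturbation for a magnetized plasma typically takes the standard form below, written for the first-order perturbed quantity phi with xi the coordinate along the propagation direction. This is the generic textbook form, not the paper's specific result: the coefficients A and B depend on the plasma composition (here including the charged stationary heavy ions) and on the magnetic field.

```latex
% Standard Zakharov--Kuznetsov form (generic; coefficients are
% model-dependent and follow from the reductive perturbation expansion)
\begin{equation}
  \frac{\partial \phi}{\partial t}
  + A\,\phi\,\frac{\partial \phi}{\partial \xi}
  + B\,\frac{\partial}{\partial \xi}\,\nabla^{2}\phi = 0
\end{equation}
```

The small-k instability analysis mentioned in the abstract perturbs the plane solitary-wave solution of this equation with a long-wavelength mode oblique to xi and asks whether the perturbation grows.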

  18. Multi-dimensional self-esteem and magnitude of change in the treatment of anorexia nervosa.

    PubMed

    Collin, Paula; Karatzias, Thanos; Power, Kevin; Howard, Ruth; Grierson, David; Yellowlees, Alex

    2016-03-30

    Self-esteem improvement is one of the main targets of inpatient eating disorder programmes. The present study sought to examine multi-dimensional self-esteem and magnitude of change in eating psychopathology among adults participating in a specialist inpatient treatment programme for anorexia nervosa. A standardised assessment battery, including multi-dimensional measures of eating psychopathology and self-esteem, was completed pre- and post-treatment for 60 participants (all white Scottish female, mean age=25.63 years). Statistical analyses indicated that self-esteem improved with eating psychopathology and weight over the course of treatment, but that improvements were domain-specific and small in size. Global self-esteem was not predictive of treatment outcome. Dimensions of self-esteem at baseline (Lovability and Moral Self-approval), however, were predictive of magnitude of change in dimensions of eating psychopathology (Shape and Weight Concern). Magnitude of change in Self-Control and Lovability dimensions were predictive of magnitude of change in eating psychopathology (Global, Dietary Restraint, and Shape Concern). The results of this study demonstrate that the relationship between self-esteem and eating disorder is far from straightforward, and suggest that future research and interventions should focus less exclusively on self-esteem as a uni-dimensional psychological construct. PMID:26837476

  19. Incorporating scale into digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Dragut, L. D.; Eisank, C.; Strasser, T.

    2009-04-01

    Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 X 3 window-based algorithms are used for this purpose, thus tying the scale of resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood size. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be further retained. While in a standard 3 X 3 window-based analysis mean curvature was one of the poorest correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered, to 0.16 when the curvature was filtered within 55 X 55 m neighborhood size. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield.
Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions, is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.

  20. Stride Segmentation during Free Walk Movements Using Multi-Dimensional Subsequence Dynamic Time Warping on Inertial Sensor Data

    PubMed Central

    Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M.

    2015-01-01

    Changes in gait patterns provide important information about individuals’ health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson’s disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measures of 98% (recall 98%, precision 98%) for 40 m walk tests and of 97% (recall 97%, precision 97%) for free walk tests were obtained for the three groups. Compared to conventional peak detection methods, up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
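The subsequence matching at the core of msDTW can be sketched as follows: the template is allowed to start anywhere in the long signal (a cost-free first row in the accumulation matrix), and the minimal accumulated distance over candidate end points locates one best match. This is a simplified single-match sketch under assumed list-of-feature-vector inputs, not the authors' implementation, which additionally extracts multiple non-overlapping strides.

```python
import math

def subsequence_dtw(template, sequence):
    """Subsequence DTW: find the best-matching subsequence of
    `sequence` for `template`.  Both are lists of equal-length
    feature vectors (multi-dimensional samples).  Returns the
    minimal accumulated distance and the matching end index."""
    n, m = len(template), len(sequence)
    INF = float("inf")
    # D[i][j]: cost of aligning template[:i] with a subsequence of
    # `sequence` ending at j-1.  Row 0 is all zeros: the match may
    # start at any position in the long sequence.
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = INF
        for j in range(1, m + 1):
            c = math.dist(template[i - 1], sequence[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    end = min(range(1, m + 1), key=lambda j: D[n][j])
    return D[n][end], end - 1
```

To segment a full recording, one would repeatedly take low-cost matches below a distance threshold and backtrack through D to recover each stride's start index.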

  2. Scale-specific multifractal medical image analysis.

    PubMed

    Braverman, Boris; Tambasco, Mauro

    2013-01-01

    Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
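The integral-image speedup can be sketched in a few lines: a summed-area table answers "does this box contain any foreground pixel?" in constant time, so box counts at every scale cost one pass over the boxes rather than over the pixels, and the dimension over a chosen scale range is the slope of the log-log box-count curve. A minimal sketch assuming binary list-of-lists images, not the authors' code.

```python
import math

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0:y][0:x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def box_count(ii, size):
    """Count boxes of side `size` holding at least one foreground
    pixel, using an O(1) summed-area query per box."""
    h, w = len(ii) - 1, len(ii[0]) - 1
    count = 0
    for y in range(0, h, size):
        for x in range(0, w, size):
            y1, x1 = min(y + size, h), min(x + size, w)
            if ii[y1][x1] - ii[y][x1] - ii[y1][x] + ii[y][x] > 0:
                count += 1
    return count

def box_dimension(img, sizes):
    """Least-squares slope of log N(s) vs log(1/s) over the given
    box sizes: a scale-range-dependent dimension estimate."""
    ii = integral_image(img)
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(ii, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)
```

Restricting `sizes` to different sub-ranges yields the scale-dependent dimension discussed in the abstract; a fully filled region, for instance, should report a dimension of 2 at every scale range.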

  3. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem and they also essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, the approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. 
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

  4. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM

    PubMed Central

    Singh, Brajesh K.; Srivastava, Vineet K.

    2015-01-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639
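For the one-dimensional heat-like instance, the Caputo time-fractional equation D_t^alpha u = u_xx with 0 < alpha <= 1 and initial condition u(x,0) = U_0(x), the FRDTM recursion reduces, in our reading of the general method (not reproduced from the paper), to the relation below; the multi-dimensional test problems follow the same pattern with the Laplacian in place of the second x-derivative.

```latex
% FRDTM for the Caputo equation D_t^alpha u = u_xx:
% expand u(x,t) = sum_{k>=0} U_k(x) t^{k*alpha}; the transform of the
% fractional derivative yields the recursion
\begin{equation}
  U_{k+1}(x) \;=\;
  \frac{\Gamma(k\alpha + 1)}{\Gamma\!\big((k+1)\alpha + 1\big)}\,
  \frac{\partial^{2} U_{k}(x)}{\partial x^{2}},
  \qquad k = 0, 1, 2, \ldots
\end{equation}
```

Summing the resulting series term by term recovers the exact solution in the cases where it is known (for alpha = 1 the Gamma ratios reduce to 1/(k+1) and the classical heat-kernel series reappears), which is how the four test problems confirm the method's efficiency.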

  6. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  7. Extending the Implicit Association Test (IAT): Assessing Consumer Attitudes Based on Multi-Dimensional Implicit Associations

    PubMed Central

    Gattol, Valentin; Sääksjärvi, Maria; Carbon, Claus-Christian

    2011-01-01

    Background The authors present a procedural extension of the popular Implicit Association Test (IAT; [1]) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe–unsafe; young–old; innovative–conventional, etc.) rather than on a single evaluative dimension only (e.g., good–bad). Methodology/Principal Findings In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. Conclusions/Significance This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Just as the IAT, its multi-dimensional extension/application (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher [2], [3]. PMID:21246037

  8. Multi-dimensional surface NMR imaging and characterization of selected aquifers in the Western US.

    NASA Astrophysics Data System (ADS)

    Walsh, D. O.

    2006-12-01

    This work outlines the development of a multi-channel surface NMR instrument and its application to 2-D and 3-D imaging and characterization of various aquifers in the western US. The multi-channel surface NMR instrumentation and the mathematical foundations for multi-dimensional surface NMR are described. Experimental results, including 2-D and 3-D estimates of porosity and T2* (a measured NMR signal parameter empirically related to permeability), are presented from field tests conducted over a variety of aquifer types: an alluvial aquifer system in western Nebraska, an alluvial and fractured bedrock environment in central Iowa, a Karst environment in southeast Minnesota, and a basaltic aquifer system near the Columbia River in south-central Washington.

  9. Sequential acquisition of multi-dimensional heteronuclear chemical shift correlation spectra with 1H detection

    NASA Astrophysics Data System (ADS)

    Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai

    2014-03-01

    RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of the sequential 1H data acquisition procedure in the study of RNA is also demonstrated here.

  10. Multi-dimensional on-particle detection technology for multi-category disease classification.

    PubMed

    Tan, Jie; Chen, Xiaomin; Du, Guansheng; Luo, Qiaohui; Li, Xiao; Liu, Yaqing; Liang, Xiao; Wu, Jianmin

    2016-02-28

    A serum peptide profile contains important bio-information, which may help disease classification. The motivation of this study is to take advantage of porous silicon microparticles with multiple surface chemistries to reduce the loss of peptide information and simplify the sample pretreatment. We developed a multi-dimensional on-particle MALDI-TOF technology to acquire high fidelity and cross-reactive molecular fingerprints for mining disease information. The peptide fingerprint of serum samples from colorectal cancer patients, liver cancer patients and healthy volunteers were measured with this technology. The featured mass spectral peaks can successfully discriminate and predict the multi-category disease. Data visualization for future clinical application was also demonstrated. PMID:26839921

  11. Evaluation of multi-dimensional flux models for radiative transfer in combustion chambers: A review

    NASA Astrophysics Data System (ADS)

    Selcuk, N.

    1984-01-01

    In recent years, flux methods have been widely employed as alternative, albeit intrinsically less accurate, procedures to the zone or Monte Carlo methods in complete prediction procedures. Flux models of radiation fields take the form of partial differential equations, which can conveniently and economically be solved simultaneously with the equations representing flow and reaction. The flux models are usually tested and evaluated from the point of view of predictive accuracy by comparing their predictions with "exact" values produced using the zone or Monte Carlo models. Evaluations of various multi-dimensional flux-type models, such as De Marco and Lockwood, Discrete-Ordinate, Schuster-Schwarzschild and moment, are reviewed from the points of view of both accuracy and computational economy. A six-flux model of the Schuster-Schwarzschild type with angular subdivisions related to the enclosure geometry is recommended for incorporation into existing procedures for complete mathematical modelling of rectangular combustion chambers.

  12. Deadlock-free class routes for collective communications embedded in a multi-dimensional torus network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip

    2013-01-29

    A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
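
As a rough illustration of the dimension-by-dimension routing idea described in the abstract, here is a generic dimension-ordered torus router in Python. This is an illustrative sketch, not the patented class-route method; all names, and the shortest-wrap step rule, are assumptions for the example.

```python
def next_hop(node, root, sizes):
    """Next node on a dimension-ordered route toward root on a torus.

    node, root: coordinate tuples; sizes: torus extent per dimension.
    Returns None when the packet is already at the root.
    """
    for d, (c, r, n) in enumerate(zip(node, root, sizes)):
        if c != r:
            # step whichever way around the ring is shorter
            step = 1 if (r - c) % n <= (c - r) % n else -1
            hop = list(node)
            hop[d] = (c + step) % n
            return tuple(hop)
    return None

def route(node, root, sizes):
    """Full path from node to root, finishing one dimension before the next."""
    path = [node]
    while node != root:
        node = next_hop(node, root, sizes)
        path.append(node)
    return path

# On a 4x4 torus, (3, 1) reaches (0, 2) by wrapping in dimension 0 first:
# route((3, 1), (0, 2), (4, 4)) -> [(3, 1), (0, 1), (0, 2)]
```

Because every packet exhausts one dimension before turning into the next, cyclic channel dependencies (and hence this class of deadlock) are avoided, which is the property the patent's class routes must also guarantee.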

  13. High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
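
For reference, the Hamilton-Jacobi equations approximated above have the generic form (standard notation, not necessarily the paper's):

```latex
\phi_t + H(\nabla_x \phi) = 0, \qquad \phi(x, 0) = \phi_0(x), \quad x \in \mathbb{R}^d .
```

Solutions of such equations are typically continuous but develop discontinuous derivatives in finite time, which is why Godunov-type central schemes with high-order (WENO) reconstructions of the derivatives are used rather than naive finite differences.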

  14. MULTI-DIMENSIONAL MASS SPECTROMETRY-BASED SHOTGUN LIPIDOMICS AND NOVEL STRATEGIES FOR LIPIDOMIC ANALYSES

    PubMed Central

    Han, Xianlin; Yang, Kui; Gross, Richard W.

    2011-01-01

    Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525

  15. Multi-dimensional single-spin nano-optomechanics with a levitated nanodiamond

    NASA Astrophysics Data System (ADS)

    Neukirch, Levi P.; von Haartman, Eva; Rosenholm, Jessica M.; Nick Vamivakas, A.

    2015-10-01

    Considerable advances made in the development of nanomechanical and nano-optomechanical devices have enabled the observation of quantum effects, improved sensitivity to minute forces, and provided avenues to probe fundamental physics at the nanoscale. Concurrently, solid-state quantum emitters with optically accessible spin degrees of freedom have been pursued in applications ranging from quantum information science to nanoscale sensing. Here, we demonstrate a hybrid nano-optomechanical system composed of a nanodiamond (containing a single nitrogen-vacancy centre) that is levitated in an optical dipole trap. The mechanical state of the diamond is controlled by modulation of the optical trapping potential. We demonstrate the ability to imprint the multi-dimensional mechanical motion of the cavity-free mechanical oscillator into the nitrogen-vacancy centre fluorescence and manipulate the mechanical system's intrinsic spin. This result represents the first step towards a hybrid quantum system based on levitating nanoparticles that simultaneously engages optical, phononic and spin degrees of freedom.

  16. Ionizing shocks in argon. Part II: Transient and multi-dimensional effects

    NASA Astrophysics Data System (ADS)

    Kapper, M. G.; Cambier, J.-L.

    2011-06-01

    We extend the computations of ionizing shocks in argon to the unsteady and multi-dimensional cases, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.

  17. Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests

    SciTech Connect

    Niles, A M; Garcia, F; Greenwood, D W; Forbes, J W; Tarver, C M; Chidester, S K; Garza, R G; Swizter, L L

    2001-05-31

    The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 µs after projectile impact, creating 0.39-2.00 kbar peak shocks centered in PBX 9501 explosive discs and a 0.60 kbar peak shock in a LX-04 disc. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.

  18. Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae

    SciTech Connect

    Tanaka, Masaomi; Maeda, Keiichi; Mazzali, Paolo A.; Nomoto, Ken'ichi

    2008-05-21

    We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (up to ~40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.

  19. A dynamic nuclear polarization strategy for multi-dimensional Earth's field NMR spectroscopy.

    PubMed

    Halse, Meghan E; Callaghan, Paul T

    2008-12-01

    Dynamic nuclear polarization (DNP) is introduced as a powerful tool for polarization enhancement in multi-dimensional Earth's field NMR spectroscopy. Maximum polarization enhancements, relative to thermal equilibrium in the Earth's magnetic field, are calculated theoretically and compared to the more traditional prepolarization approach for NMR sensitivity enhancement at ultra-low fields. Signal enhancement factors on the order of 3000 are demonstrated experimentally using DNP with a nitroxide free radical, TEMPO, which contains an unpaired electron which is strongly coupled to a neighboring (14)N nucleus via the hyperfine interaction. A high-quality 2D (19)F-(1)H COSY spectrum acquired in the Earth's magnetic field with DNP enhancement is presented and compared to simulation. PMID:18926746

  20. Multi-dimensional coherent optical spectroscopy of semiconductor nanostructures: Collinear and non-collinear approaches

    SciTech Connect

    Nardin, Gaël; Li, Hebin; Autry, Travis M.; Moody, Galan; Singh, Rohan; Cundiff, Steven T.

    2015-03-21

    We review our recent work on multi-dimensional coherent optical spectroscopy (MDCS) of semiconductor nanostructures. Two approaches, appropriate for the study of semiconductor materials, are presented and compared. A first method is based on a non-collinear geometry, where the Four-Wave-Mixing (FWM) signal is detected in the form of a radiated optical field. This approach works for samples with translational symmetry, such as Quantum Wells (QWs) or large and dense ensembles of Quantum Dots (QDs). A second method detects the FWM in the form of a photocurrent in a collinear geometry. This second approach extends the horizon of MDCS to sub-diffraction nanostructures, such as single QDs, nanowires, or nanotubes, and small ensembles thereof. Examples of experimental results obtained on semiconductor QW structures are given for each method. In particular, it is shown how MDCS can assess coupling between excitons confined in separated QWs.

  1. Optimal sensor configuration for flexible structures with multi-dimensional mode shapes

    NASA Astrophysics Data System (ADS)

    Chang, Minwoo; Pakzad, Shamim N.

    2015-05-01

    A framework for deciding the optimal sensor configuration is implemented for civil structures with multi-dimensional mode shapes, which enhances the applicability of structural health monitoring for existing structures. Optimal sensor placement (OSP) algorithms are used to determine the best sensor configuration for structures with a priori knowledge of modal information. The signal strength at each node is evaluated by the effective independence and modified variance methods. The Euclidean norm of the signal strength indices associated with each node is used to expand OSP applicability to flexible structures. The number of sensors for each method is determined using a threshold on the modal assurance criterion (MAC) between estimated (from a set of observations) and target mode shapes. Kriging is utilized to infer the modal estimates for unobserved locations as a weighted sum of known neighbors. A Kriging model can be expressed as the sum of a linear regression and a random error term, which is assumed to be the realization of a stochastic process. This study presents the effects of Kriging parameters on the accurate estimation of mode shapes and the minimum number of sensors. The feasible ranges that satisfy the MAC criteria are investigated and used to suggest adequate search bounds for the associated parameters. The finite element model of a tall building is used to demonstrate the application of optimal sensor configuration. The dynamic modes of the flexible structure at the centroid are appropriately mapped onto the outermost sensor locations when the OSP methods are implemented. Kriging is successfully used to interpolate the mode shapes from a set of sensors and to monitor structures with multi-dimensional mode shapes.
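
The modal assurance criterion used as the stopping threshold above is a standard quantity; a minimal sketch of its textbook definition for real-valued mode shapes (not the paper's code):

```python
def mac(phi_est, phi_target):
    """Modal assurance criterion between two real mode-shape vectors.

    Returns a value in [0, 1]: 1 for identical (or uniformly scaled)
    shapes, 0 for orthogonal shapes.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(phi_est, phi_target) ** 2 / (
        dot(phi_est, phi_est) * dot(phi_target, phi_target)
    )

# A shape compared against a scaled copy of itself is a perfect match:
phi = [0.2, 0.5, 0.9, 1.0]
score = mac(phi, [0.4, 1.0, 1.8, 2.0])  # -> 1.0 (up to rounding)
```

Because MAC is invariant to scaling of either vector, it isolates shape agreement from amplitude, which is what makes it a convenient acceptance threshold for sensor-count selection.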

  2. Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport

    NASA Astrophysics Data System (ADS)

    Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun

    2015-02-01

    We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant {O}(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate {O}(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.

  3. Multi-dimensional multi-species modeling of transient electrodeposition in LIGA microfabrication.

    SciTech Connect

    Evans, Gregory Herbert; Chen, Ken Shuang

    2004-06-01

    This report documents the efforts and accomplishments of the LIGA electrodeposition modeling project which was headed by the ASCI Materials and Physics Modeling Program. A multi-dimensional framework based on GOMA was developed for modeling time-dependent diffusion and migration of multiple charged species in a dilute electrolyte solution with reduction electro-chemical reactions on moving deposition surfaces. By combining the species mass conservation equations with the electroneutrality constraint, a Poisson equation that explicitly describes the electrolyte potential was derived. The set of coupled, nonlinear equations governing species transport, electric potential, velocity, hydrodynamic pressure, and mesh motion were solved in GOMA, using the finite-element method and a fully-coupled implicit solution scheme via Newton's method. By treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and by repeatedly performing re-meshing with CUBIT and re-mapping with MAPVAR, the moving deposition surfaces were tracked explicitly from start of deposition until the trenches were filled with metal, thus enabling the computation of local current densities that potentially influence the microstructure and frictional/mechanical properties of the deposit. The multi-dimensional, multi-species, transient computational framework was demonstrated in case studies of two-dimensional nickel electrodeposition in single and multiple trenches, without and with bath stirring or forced flow. Effects of buoyancy-induced convection on deposition were also investigated. To further illustrate its utility, the framework was employed to simulate deposition in microscreen-based LIGA molds. Lastly, future needs for modeling LIGA electrodeposition are discussed.
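
The derivation summarized above, combining species balances with the electroneutrality constraint, can be sketched in generic dilute-solution notation (standard Nernst-Planck theory; the symbols here are illustrative, not the report's exact notation):

```latex
% Species balance with Nernst--Planck flux (dilute solution, fluid velocity v):
\frac{\partial c_i}{\partial t} + \nabla \cdot \mathbf{N}_i = 0,
\qquad
\mathbf{N}_i = -D_i \nabla c_i - z_i u_i F c_i \nabla \Phi + c_i \mathbf{v}.

% Multiplying each balance by z_i F, summing over species, and imposing
% electroneutrality \sum_i z_i c_i = 0 eliminates the transient and
% convective terms, leaving an elliptic (Poisson-type) equation for the
% electrolyte potential:
\nabla \cdot \left( \kappa \nabla \Phi \right)
  = -F \sum_i z_i \, \nabla \cdot \left( D_i \nabla c_i \right),
\qquad
\kappa = F^2 \sum_i z_i^2 u_i c_i .
```

Here $c_i$, $z_i$, $D_i$, $u_i$ are the concentration, charge number, diffusivity, and mobility of species $i$, and $F$ is Faraday's constant; the resulting potential equation is the one solved together with the transport, flow, and mesh-motion equations in the coupled Newton scheme described above.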

  4. The application of multi-dimensional access memories to radar signal processing systems

    NASA Astrophysics Data System (ADS)

    Hayes, David; Strawhorne, Bill

    1986-07-01

    A multi-dimensional access memory (MDAM) allows a word to be accessed from store either in the manner it was entered or as part of a bit slice of equally spaced or contiguous words. Conceptually, data may be regarded as being stored in an n-dimensional hypercube of side length equal to the word length, which usefully maps onto a wide range of signal processing operations (e.g., FFTs, matrix inversion, multiple moments, distance metrics, sorts, searches and correlation decodes) when associated processing units that can carry out both bit-parallel and bit-serial arithmetic are used. The mapping of the natural multi-dimensionality of a signal processing task onto the MDAM structure is shown to be particularly useful when bit-serial, word-parallel processors are employed. In these circumstances the facilities of the MDAM make possible a range of useful operations that could only be implemented with great inefficiency using conventional memories. Furthermore, the MDAM considerably simplifies address generation for the I/O of real and complex words (e.g., the corner turn of incoming samples) while allowing useful permutations, such as barrel shifts, to be applied on each memory access for an insignificant cost in extra circuitry. Highly efficient, deeply pipelined implementations of MDAM/processor structures are discussed that are particularly well suited to VLSI methodologies, in that very wide bandwidth interconnection networks of high complexity can be achieved at relatively low gate and pin counts. Thus, it is possible to form highly parallel multi-MDAM/processor structures that support very high levels of concurrency, identified as necessary for future radar signal processing systems. Moreover, these structures translate over classes of operations that are not normally associated with each other. Consequently, these forms can be made extremely general and modular to produce powerful and compact processing kernels for programmable systems that embody high-level signal processing constructs in their VLSI fabric and lead to high performance at the minimum silicon cost.

  5. TWO-DIMENSIONAL CORE-COLLAPSE SUPERNOVA MODELS WITH MULTI-DIMENSIONAL TRANSPORT

    SciTech Connect

    Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun

    2015-02-10

    We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.

  6. Wavelet Analysis of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Minske, J. K.; Watkins, R.; Feldman, H.; Freese, K.

    1995-12-01

    The existence and behavior of structures in the luminous matter distribution is an excellent diagnostic of conditions in the early universe, so it is imperative to extract as much information as possible from the density field. We introduce a method of identifying structures using wavelet analysis, which, without smoothing and without bias, can simultaneously probe both the spatial and scale aspects of a matter distribution; we use this method to classify structures according to both size and position. After testing this technique on simulated data, we apply the method to the LP and QDOT surveys, and present a catalog of the structures in these regions. Further, in order to measure the extent of clustering in these catalogs, we must choose a statistical approach that takes advantage of the two-dimensionality of our data. We present one technique, the correlation surface, which gives us a scale-by-scale insight into the spatial distribution of luminous matter.

  7. MAI (Multi-Dimensional Activity Based Integrated Approach): A Strategy for Cognitive Development of the Learners at the Elementary Stage

    ERIC Educational Resources Information Center

    Basantia, Tapan Kumar; Panda, B. N.; Sahoo, Dukhabandhu

    2012-01-01

    Cognitive development of the learners is the prime task of each and every stage of our school education and its importance especially in elementary state is quite worth mentioning. Present study investigated the effectiveness of a new and innovative strategy (i.e., MAI (multi-dimensional activity based integrated approach)) for the development of…

  8. Multi-dimensional gas chromatography with a planar microfluidic device for the characterization of volatile oxygenated organic compounds.

    PubMed

    Luong, J; Gras, R; Cortes, H; Shellie, R A

    2012-09-14

    Oxygenated compounds like methanol, ethanol, 1-propanol, 2-propanol, 1-butanol, acetaldehyde, crotonaldehyde, ethylene oxide, tetrahydrofuran, 1,4-dioxane, 1,3-dioxolane, and 2-chloromethyl-1,3-dioxolane are commonly encountered in industrial manufacturing processes. Despite the availability of a variety of column stationary phases for chromatographic separation, it is difficult to separate these solutes from their respective matrices using single-dimension gas chromatography. Implemented with a planar microfluidic device, conventional two-dimensional gas chromatography employing chromatographic columns with dissimilar separation mechanisms, a selective wall-coated open tubular column and an ionic sorbent column, has been successfully applied to resolve twelve industrially significant volatile oxygenated compounds in both gas and aqueous matrices. A Large Volume Gas Injection System (LVGIS) was also employed for sample introduction to enhance system automation and precision. By successfully integrating these concepts, in addition to having the capability to separate all twelve components in one single analysis, features associated with multi-dimensional gas chromatography, like dual retention time capability and the ability to quarantine undesired chromatographic contaminants or matrix components in the first dimension column to enhance overall system cleanliness, were realized. With this technique, a complete separation for all the compounds mentioned can be carried out in less than 15 min. The compounds cited can be analyzed over a range of 250 ppm (v/v) to 100 ppm (v/v) with a relative standard deviation of less than 5% (n=20) and a high degree of reliability. PMID:22410155

  9. Semiquantal molecular dynamics simulations of hydrogen-bond dynamics in liquid water using multi-dimensional Gaussian wave packets

    NASA Astrophysics Data System (ADS)

    Ono, Junichi; Ando, Koji

    2012-11-01

    A semiquantal (SQ) molecular dynamics (MD) simulation method based on an extended Hamiltonian formulation has been developed using multi-dimensional thawed Gaussian wave packets (WPs), and applied to an analysis of hydrogen-bond (H-bond) dynamics in liquid water. A set of Hamilton's equations of motion in an extended phase space, which includes variance-covariance matrix elements as auxiliary coordinates representing anisotropic delocalization of the WPs, is derived from the time-dependent variational principle. The present theory allows us to perform real-time and real-space SQMD simulations and analyze nuclear quantum effects on dynamics in large molecular systems in terms of anisotropic fluctuations of the WPs. Introducing the Liouville operator formalism in the extended phase space, we have also developed an explicit symplectic algorithm for the numerical integration, which can provide greater stability in the long-time SQMD simulations. The application of the present theory to H-bond dynamics in liquid water is carried out under a single-particle approximation in which the variance-covariance matrix and the corresponding canonically conjugate matrix are reduced to block-diagonal structures by neglecting the interparticle correlations. As a result, it is found that the anisotropy of the WPs is indispensable for reproducing the disordered H-bond network compared to the classical counterpart with the use of the potential model providing competing quantum effects between intra- and intermolecular zero-point fluctuations. In addition, the significant WP delocalization along the out-of-plane direction of the jumping hydrogen atom associated with the concerted breaking and forming of H-bonds has been detected in the H-bond exchange mechanism. The relevance of the dynamical WP broadening to the relaxation of H-bond number fluctuations has also been discussed. 
The present SQ method provides a novel framework for investigating nuclear quantum dynamics in many-body molecular systems in which the local anisotropic fluctuations of nuclear WPs play an essential role.

  10. MUlti-Dimensional Spline-Based Estimator (MUSE) for Motion Estimation: Algorithm Development and Initial Results

    PubMed Central

    Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.

    2008-01-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits a maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz.
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications. PMID:18807190
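
To make the sub-sample idea concrete, here is a much simpler estimator in the same spirit: instead of MUSE's spline matching function, it fits a parabola through the cross-correlation peak of two sampled signals. This is a generic textbook technique, not the MUSE algorithm; all names are illustrative.

```python
import numpy as np

def subsample_delay(ref, sig):
    """Estimate the delay of sig relative to ref, in (fractional) samples,
    by three-point parabolic interpolation around the cross-correlation peak."""
    xc = np.correlate(sig, ref, mode="full")
    k = int(np.argmax(xc))
    # quadratic fit through the peak sample and its two neighbors
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (k - (len(ref) - 1)) + frac

# A smooth pulse delayed by 2.3 samples is recovered to well within
# a tenth of a sample:
t = np.arange(64, dtype=float)
pulse = lambda d: np.exp(-0.05 * (t - 20.0 - d) ** 2)
est = subsample_delay(pulse(0.0), pulse(2.3))
```

Parabolic interpolation is biased for non-parabolic correlation peaks, which is precisely the kind of error the spline-based matching function in MUSE is designed to reduce.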

  11. Effect of a template in the synthesis of multi-dimensional nanoporous aluminosilicate with the composition 25% Al2O3-75% SiO2

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. F.; Eremenko, S. I.

    2015-07-01

    Samples of multi-dimensional nanoporous aluminosilicate with the composition (25% Al2O3-75% SiO2) are synthesized using the template effect of supramolecular cetylpyridinium chloride. The samples are studied by means of low-temperature nitrogen static adsorption-desorption, X-ray diffraction, scanning electron microscopy, and FT-IR spectroscopy. Changes in the specific surface area, volume, and DFT distribution of mesopores are shown to depend on the template concentration, annealing temperature, and sample training temperature prior to analysis. When using 5.0% of the template, we observe the formation of an aluminosilicate mesophase with a three-dimensional MCM-48 cubic pore system, homogeneous mesoporosity, and the excellent textural characteristics typical of a well-organized cellular structure.

  12. Optimizing threshold for extreme scale analysis

    NASA Astrophysics Data System (ADS)

    Maynard, Robert; Moreland, Kenneth; Ayachit, Utkarsh; Geveci, Berk; Ma, Kwan-Liu

    2013-01-01

    As the HPC community starts focusing its efforts towards exascale, it becomes clear that we are looking at machines with billion-way concurrency. Although parallel computing has been at the core of the performance gains achieved until now, scaling over 1,000 times the current concurrency can be challenging. As discussed in this paper, even the smallest memory access and synchronization overheads can cause major bottlenecks at this scale. As we develop new software and adapt existing algorithms for exascale, we need to be cognizant of such pitfalls. In this paper, we document our experience with optimizing a fairly common and parallelizable visualization algorithm, threshold of cells based on scalar values, for such highly concurrent architectures. Our experiments help us identify design patterns that can be generalized for other visualization algorithms as well. We discuss our implementation within the Dax toolkit, which is a framework for data analysis and visualization at extreme scale. The Dax toolkit employs the patterns discussed here within the framework's scaffolding to make it easier for algorithm developers to write algorithms without having to worry about such scaling issues.
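
The threshold operation discussed above has a characteristic data-parallel shape: flag cells in a map step, compute output positions with a prefix sum, then scatter the survivors. A minimal NumPy sketch of that map/scan/scatter structure (an illustration of the pattern, not Dax code):

```python
import numpy as np

def threshold_cells(values, lo, hi):
    """Return the indices of cells whose scalar value lies in [lo, hi].

    Written as whole-array stages rather than a serial loop, mirroring
    the branch-free structure that highly concurrent hardware favors.
    """
    flags = (values >= lo) & (values <= hi)      # map: flag each cell
    offsets = np.cumsum(flags) - flags           # exclusive prefix sum
    out = np.empty(int(flags.sum()), dtype=np.int64)
    out[offsets[flags]] = np.nonzero(flags)[0]   # scatter surviving cell ids
    return out

# Cells with scalar values in [0.4, 0.8]:
kept = threshold_cells(np.array([0.1, 0.9, 0.5, 0.7]), 0.4, 0.8)  # -> [2, 3]
```

The prefix sum is the step that replaces the serial "append to output" of a loop-based implementation; it is also where the memory access and synchronization overheads the paper measures tend to concentrate.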

  13. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  14. On SCALE Validation for PBR Analysis

    SciTech Connect

    Ilas, Germina

    2010-01-01

    Studies were performed to assess the capabilities of the SCALE code system to provide accurate cross sections for analyses of pebble bed reactor configurations. The analyzed configurations are representative of fuel in the HTR-10 reactor in the first critical core and at full power operation conditions. Relevant parameters (multiplication constant, spectral indices, few-group cross sections) are calculated with SCALE for the considered configurations. The results are compared to results obtained with corresponding consistent MCNP models. The code-to-code comparison shows good agreement at both room and operating temperatures, indicating a good performance of SCALE for the analysis of doubly heterogeneous fuel configurations. The development of advanced methods and computational tools for the analysis of pebble bed reactor (PBR) configurations has been a research area of renewed interest for the international community during recent decades. The PBR, which is a High Temperature Gas Cooled Reactor (HTGR) system, is one of the potential candidates for future deployment throughout the world of reactor systems that would meet increased requirements of efficiency, safety, and proliferation resistance and would support other applications such as hydrogen production or nuclear waste recycling. In the U.S., the pebble bed design is one of the two designs under consideration by the Next Generation Nuclear Plant (NGNP) Program.

  15. Evaluation of an Expanded Disability Status Scale (EDSS) modeling strategy in multiple sclerosis.

    PubMed

    Cao, Hua; Peyrodie, Laurent; Agnani, Olivier; Cavillon, Fabrice; Hautecoeur, Patrick; Donzé, Cécile

    2015-11-01

    The Expanded Disability Status Scale (EDSS) is the most widely used scale for evaluating the degree of neurological impairment in multiple sclerosis (MS). In this paper, we report on the evaluation of an EDSS modeling strategy based on recurrence quantification analysis (RQA) of posturographic data (i.e., center of pressure, COP). A total of 133 volunteers with EDSS scores ranging from 0 to 4.5 participated in this study; posturographic data were recorded with eyes closed. After selection of the time delay (τ), embedding dimension (m), and threshold (radius, r) used to identify recurrent points, several RQA measures were calculated for each COP position and velocity signal in both the mono- and multi-dimensional RQAs. Estimation results led to the selection of the recurrence rate (RR) of the COP position as the most pertinent RQA measure. The performance of the models against raw and noisy data was higher in the mono-dimensional analysis than in the multi-dimensional one. This study suggests that mono-dimensional RQA of the posturographic signal is a more pertinent method for quantifying disability in MS than multi-dimensional RQA. PMID:26345244
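    The recurrence rate the paper selects can be sketched as follows (a minimal illustration of mono-dimensional RQA with time-delay embedding; the toy signal and parameter values are illustrative, not the study's):

```python
# Minimal sketch of the recurrence rate (RR) used in RQA: embed a 1-D
# signal with time delay tau and dimension m, then count the fraction of
# embedded state pairs whose Euclidean distance is within the radius r.
import math

def embed(signal, m, tau):
    """Time-delay embedding: map x(t) to vectors (x(t), x(t+tau), ...)."""
    n = len(signal) - (m - 1) * tau
    return [[signal[i + j * tau] for j in range(m)] for i in range(n)]

def recurrence_rate(signal, m, tau, r):
    """Fraction of embedded state pairs closer than r (Euclidean)."""
    pts = embed(signal, m, tau)
    n = len(pts)
    rec = sum(
        1
        for i in range(n)
        for j in range(n)
        if math.dist(pts[i], pts[j]) <= r
    )
    return rec / (n * n)

x = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]       # a perfectly periodic toy signal
print(recurrence_rate(x, m=2, tau=1, r=0.1))  # high RR for a periodic signal
```

    The multi-dimensional variant applies the same counting to jointly embedded signals (e.g., both COP coordinates); the study found the mono-dimensional form more discriminative.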

  16. A multi-dimensional experiment for characterization of pore structure heterogeneity using NMR

    NASA Astrophysics Data System (ADS)

    Lewis, Rhiannon T.; Seland, John Georg

    2016-02-01

    In a liquid-saturated porous sample, the spatially inhomogeneous internal magnetic field depends in general on the strength of the static magnetic field and on the differences in magnetic susceptibilities, but also on the geometry of the porous network. To thoroughly investigate how the internal field can be used to determine various properties of the porous structure, we present a novel multi-dimensional NMR experiment that enables us to measure several dynamic correlations in one experiment, where all of the correlations involve the internal magnetic field and its dependence on the geometry of the porous network (correlations: internal gradient - pore size, internal gradient - magnetic susceptibility difference, internal gradient - longitudinal relaxation, longitudinal relaxation - magnetic susceptibility difference). It is always a spatial average of the internal magnetic field, or of one of the related properties, that is measured, which is important to take into consideration when analyzing the obtained results. We demonstrate how these correlations can be an indicator of pore structure heterogeneity, and focus in particular on how the effect of spatial averaging can be evaluated and taken into account in the different cases.

  17. Software Defined Networking (SDN) controlled all optical switching networks with multi-dimensional switching architecture

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng

    2014-08-01

    Ultrahigh throughput capacity requirements are challenging current optical switching nodes as data center networks develop rapidly. Pbit/s-level all-optical switching networks will need to be deployed soon, which will make node architectures highly complex. Controlling the future network and its node equipment in a unified way thus becomes a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and a Node NOX (N-NOX). With the cooperation of the P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient eSDN control. Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.

  18. Multi-dimensional permutation-modulation format for coherent optical communications.

    PubMed

    Ishimura, Shota; Kikuchi, Kazuro

    2015-06-15

    We introduce the multi-dimensional permutation-modulation format in coherent optical communication systems and analyze its performance, focusing on power efficiency and spectral efficiency. In the case of four-dimensional (4D) modulation, the polarization-switched quadrature phase-shift keying (PS-QPSK) format and the polarization quadrature-amplitude modulation (POL-QAM) format can both be classified as permutation-modulation formats. Beyond these well-known formats, we find novel modulation formats that trade off power efficiency against spectral efficiency. As the dimension increases, the spectral efficiency can more closely approach the channel capacity predicted by Shannon theory. We verify these theoretical characteristics through computer simulations of the symbol-error rate (SER) and bit-error rate (BER) performance. For example, the newly found eight-dimensional (8D) permutation-modulation format can improve the spectral efficiency up to 2.75 bit/s/Hz/pol/channel, while the power penalty against QPSK is about 1 dB at BER = 10^-3. PMID:26193538
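    The codebook construction behind permutation modulation can be sketched briefly (a generic Slepian Variant-II enumeration under an illustrative initial vector, not one of the paper's optimized formats):

```python
# Sketch of permutation modulation (Slepian, Variant II): the codebook is
# every distinct permutation of an initial vector combined with every sign
# pattern on its nonzero entries.  The spectral efficiency per symbol is
# log2 of the codebook size.
from itertools import permutations, product
from math import log2

def permutation_codebook(init):
    """Enumerate the distinct signed permutations of `init`."""
    book = set()
    for perm in set(permutations(init)):
        nz = [i for i, v in enumerate(perm) if v != 0]
        for signs in product((1, -1), repeat=len(nz)):
            word = list(perm)
            for i, s in zip(nz, signs):
                word[i] *= s
            book.add(tuple(word))
    return book

book = permutation_codebook((1, 1, 0, 0))   # an illustrative 4-D initial vector
M = len(book)
print(M, log2(M))  # codebook size and bits per 4-D symbol
```

    Varying the initial vector changes both the minimum distance (power efficiency) and the codebook size (spectral efficiency), which is the trade-off the paper explores.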

  19. Multi-dimensional tumor detection in automated whole breast ultrasound using topographic watershed.

    PubMed

    Lo, Chung-Ming; Chen, Rong-Tai; Chang, Yeun-Chung; Yang, Ya-Wen; Hung, Ming-Jen; Huang, Chiun-Sheng; Chang, Ruey-Feng

    2014-07-01

    Automated whole breast ultrasound (ABUS) is becoming a popular screening modality for whole breast examination. Compared to conventional handheld ultrasound, ABUS is operator-independent and feasible for mass screening. However, reviewing hundreds of slices in an ABUS image volume is time-consuming. A computer-aided detection (CADe) system based on the watershed transform was proposed in this study to accelerate the review. The watershed transform was applied to gather similar tissues around local minima into homogeneous regions. The likelihood that each region was a tumor was estimated using quantitative morphology, intensity, and texture features in the 2-D/3-D false positive reduction (FPR). The collected database comprised 68 benign and 65 malignant tumors. As a result, the proposed system achieved sensitivities of 100% (133/133), 90% (121/133), and 80% (107/133) with FPs/pass of 9.44, 5.42, and 3.33, respectively. The figure of merit of the combination of the three feature sets is 0.46, which is significantly better than that of the other feature sets ([Formula: see text]). In summary, the proposed CADe system based on multi-dimensional FPR using the integrated feature set is promising for detecting tumors in ABUS images. PMID:24718570
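    The idea of gathering similar tissue around a local minimum can be illustrated with a toy tolerance-based flood fill (the paper uses a full topographic watershed; this sketch, with its made-up image and function name, only shows the region-gathering intuition):

```python
# Toy sketch of gathering similar pixels around a local intensity minimum:
# a 4-connected flood fill that collects pixels within `tol` of the seed's
# intensity.  A true topographic watershed floods all minima simultaneously
# and builds ridge lines between basins; this is only the simplest analogue.
from collections import deque

def grow_region(img, seed, tol):
    """Collect 4-connected pixels within `tol` of the seed intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(img[nr][nc] - base) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [
    [9, 9, 9, 9],
    [9, 2, 3, 9],
    [9, 2, 9, 9],
]
print(sorted(grow_region(img, (1, 1), tol=1)))  # -> [(1, 1), (1, 2), (2, 1)]
```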

  20. Hierarchical multi-dimensional limiting strategy for correction procedure via reconstruction

    NASA Astrophysics Data System (ADS)

    Park, Jin Seok; Kim, Chongam

    2016-03-01

    Hierarchical multi-dimensional limiting process (MLP) is improved and extended for flux reconstruction or correction procedure via reconstruction (FR/CPR) on unstructured grids. MLP was originally developed in the finite volume method (FVM), and it provides an accurate, robust and efficient oscillation-control mechanism in multiple dimensions for linear reconstruction. This limiting philosophy can be hierarchically extended to higher-order Pn approximation or reconstruction. The resulting algorithm, referred to as hierarchical MLP, facilitates detailed capture of flow structures while maintaining formal order-of-accuracy in smooth regions and providing accurate non-oscillatory solutions across discontinuous regions. This algorithm was developed within the modal DG framework, but it can also be formulated in a nodal framework, most notably the FR/CPR framework. Troubled cells are detected by applying the MLP concept, and the final accuracy is determined by a projection procedure and the hierarchical MLP limiting step. Extensive numerical analyses and computations, ranging from two-dimensional to three-dimensional fluid systems, have demonstrated that the proposed limiting approach yields outstanding performance in capturing compressible inviscid and viscous flow features.

  1. [Multi-dimensional structure quality control technology system of Danmu injection based on component structural theory].

    PubMed

    Yin, Rong; Zhu, Fen-Xia; Li, Xiu-Feng; Jia, Xiao-Bin

    2013-11-01

    Danmu is a common folk medicine of the Li nationality, with heat-clearing, detoxifying, antiseptic, and anti-inflammatory effects. Danmu injection, which is developed from Danmu herbs, has been applied clinically for years and has shown curative efficacy. Although many studies have analyzed the chemical constituents of Danmu in detail, the pharmacodynamic material basis of its disease prevention and treatment has not been defined. Furthermore, because the quality control methods for Danmu and its preparations remain restricted to a single index component and are to some extent irrational, they fail to ensure its inherent quality. On the basis of previous study results, as well as the "component structural theory" of the material basis, we established a "multi-dimensional structure quality control technology system" capable of reflecting both the integrity of the effects of Danmu injection and its component structure hierarchy, and performed dynamic monitoring of the whole process from medicinal materials to preparation products, so as to ensure the inherent quality of Danmu injection. PMID:24494545

  2. Multi-dimensional SAR tomography for monitoring the deformation of newly built concrete buildings

    NASA Astrophysics Data System (ADS)

    Ma, Peifeng; Lin, Hui; Lan, Hengxing; Chen, Fulong

    2015-08-01

    Deformation often occurs in buildings at early ages, and constant inspection of deformation is of significant importance for discovering possible cracking and avoiding wall failure. This paper exploits the multi-dimensional SAR tomography technique to monitor the deformation behavior of two newly built buildings (B1 and B2), with a special focus on the effects of concrete creep and shrinkage. To separate the nonlinear thermal expansion from the total deformations, the extended 4-D SAR technique is exploited. The thermal map estimated from 44 TerraSAR-X images demonstrates that the derived thermal amplitude is highly related to the building height due to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that B1 was subject to settlement during the construction period; in addition, the creep and shrinkage of B1 lead to wall shortening, a height-dependent movement in the downward direction, while the asymmetrical creep of B2 triggers wall deflection, a height-dependent movement in the deflection direction. It is also validated that the extended 4-D SAR can rectify the bias in the wall shortening and wall deflection estimated by 4-D SAR.

  3. Three faces of self-face recognition: potential for a multi-dimensional diagnostic tool.

    PubMed

    Sugiura, Motoaki

    2015-01-01

    The recognition of the self-face is a unique and complex phenomenon in many respects, including its associated perceptual integration process, its emergence during development, and its socio-motivational effect. This may explain the failure of classical attempts to identify the cortical areas specifically responsive to the self-face and to designate them as a unique system related to the 'self'. Neuroimaging findings regarding self-face recognition seem to be explained comprehensively by a recent forward-model account of the three categories of self: the physical, interpersonal, and social selves. Self-face-specific activation in the sensory and motor association cortices may reflect cognitive scrutiny due to prediction error or task-induced top-down attention in the physical internal schema related to the self-face. Self-face-specific deactivation in some amodal association cortices in the dorsomedial frontal and lateral posterior cortices may reflect adaptive suppression of the default recruitment of the social-response system during face recognition. Self-face-specific activation under a social context in the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex may reflect cognitive scrutiny of the internal schema related to the social value of the self. The multi-faceted nature of self-face-specific activation may hold potential as the basis for a multi-dimensional diagnostic tool for the cognitive system. PMID:25450313

  4. Opportunities in multi dimensional trace metal imaging: Taking copper associated disease research to the next level

    PubMed Central

    Vogt, Stefan; Ralle, Martina

    2012-01-01

    Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated, and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer's disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis for better understanding disease mechanisms. This review describes how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications in X-ray microscopic methodologies. PMID:23079951

  5. Off-Center Thawed Gaussian Multi-Dimensional Approximation for Semiclassical Propagation

    NASA Astrophysics Data System (ADS)

    Kocia, Lucas; Heller, Eric

    2014-03-01

    The performance of the Off-Center Thawed Gaussian Approximation (OCTGA) in multi-dimensional coupled systems is shown in comparison to Herman-Kluk (HK), the current workhorse of semiclassical propagation in the field. As with the Heller-Huber method and Van Voorhis et al.'s nearly-real method of trajectories, OCTGA requires only a single trajectory and associated stability matrix at every timestep to compute Gaussian wave packet overlaps under any Hamiltonian. This is in sharp contrast to HK, which suffers from the necessity of propagating thousands or more computationally expensive stability matrices at every timestep. Unlike similar methods, OCTGA relies upon a single real guiding trajectory, which in general does not start at the center of the initial wave packet. This guiding "off-center" trajectory is used to expand the local potential, controlling the propagating "thawed" Gaussian wavepacket such that it is led to optimal overlap with a final state. Its simple and efficient performance in any number of dimensions heralds an exciting addition to the semiclassical tools available for quantum propagation.

  6. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  7. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  8. Operationalising the Sustainable Knowledge Society Concept through a Multi-dimensional Scorecard

    NASA Astrophysics Data System (ADS)

    Dragomirescu, Horatiu; Sharma, Ravi S.

    Since the early 21st century, building a Knowledge Society has represented an aspiration not only for the developed countries but also for the developing ones. There is increasing concern worldwide for rendering this process manageable towards a sustainable, equitable and ethically sound societal system. As proper management, including at the societal level, requires both wisdom and measurement, the operationalisation of the Knowledge Society concept encompasses a qualitative side, related to vision-building, and a quantitative one, pertaining to designing and using dedicated metrics. The endeavour of enabling policy-makers to map, steer and monitor the sustainable development of the Knowledge Society at the national level, in a world increasingly based on creativity, learning and open communication, has led researchers to devise a wide range of composite indexes. However, as such indexes are generated through weighting and aggregation, their usefulness is limited to retrospectively assessing and comparing levels and states already attained; therefore, to better serve policy-making purposes, composite indexes should be complemented by other instruments. Complexification, inspired by the systemic paradigm, allows obtaining "rich pictures" of the Knowledge Society; to this end, a multi-dimensional scorecard of Knowledge Society development is hereby suggested, one that seeks a more contextual orientation towards sustainability. It is assumed that, in the case of the Knowledge Society, the sustainability condition goes well beyond the "greening" desideratum and should be of a higher order, relying upon the conversion of natural and productive life-cycles into virtuous circles of self-sustainability.

  9. Frequency of multi-dimensional COPD indices and relation with disease activity markers.

    PubMed

    García-Río, Francisco; Soriano, Joan B; Miravitlles, Marc; Muñoz, Luis; Duran-Tauleria, Enric; Sánchez, Guadalupe; Sobradillo, Victor; Ancochea, Julio

    2013-08-01

    Our aim was to describe the population-based distribution of several COPD multi-dimensional indices and to evaluate their relationship with daily physical activity, co-morbidity, health status, and systemic inflammatory biomarkers. From a population-based sample of 3,802 subjects aged 40-80 in the EPI-SCAN study, 382 subjects (10.2%) with a post-bronchodilator FEV1/FVC < 0.7 were identified as having COPD. Smoking habits, respiratory symptoms, quality of life, co-morbidities, lung function, and inflammatory biomarkers were recorded. Health status and daily physical activity were assessed using the EQ-5D and LCADL questionnaires, respectively. The new GOLD grading and the BODE, ADO, DOSE, modified DOSE, e-BODE, BODEx, CPI, SAFE and HRS indices were determined. A notable dispersion in the total scores was observed, although 83-88% of the COPD patients were classified at the mildest level and 1-3% at the most severe. The SAFE index was the best independent determinant of daily physical activity; the SAFE and ADO indices were associated with the presence of co-morbidity; and the SAFE and modified DOSE indices were independently related to health status. The systemic biomarkers showed a less consistent relation with the several indices. In a population-based sample of COPD patients, the SAFE index shows the strongest relationship with physical activity, co-morbidity and health status. PMID:23537163

  10. Preliminary Investigation of Momentary Bed Failure Using a Multi-dimensional Eulerian Two-phase Model

    NASA Astrophysics Data System (ADS)

    Cheng, Z.; Hsu, T. J.; Calantoni, J.

    2014-12-01

    In the past decade, researchers have clearly been making progress in predicting coastal erosion/recovery; however, the evidence is also clear that existing coastal evolution models cannot predict coastal responses to extreme storm events. In this study, we investigate the dynamics of momentary bed failure driven by large horizontal pressure gradients, which may be the dominant sediment transport mechanism under intense storm conditions. Recently, a multi-dimensional two-phase Eulerian sediment transport model has been developed and disseminated to the research community as an open-source code. The numerical model is based on extending an open-source CFD library of solvers, OpenFOAM. Model results were validated with published sediment concentration and velocity data measured in steady and oscillatory flows. The 2DV Reynolds-averaged model showed wave-like bed instabilities when the criterion for momentary bed failure was exceeded. These bed instabilities were responsible for the large transport rate observed during plug flow, and the onset of the instabilities was associated with a large erosion depth. To better resolve the onset of bed instabilities, the subsequent energy cascade, and the resulting large sediment transport rate and sediment pickup flux, 3D turbulence-resolving simulations were also carried out. Detailed validation of the 3D turbulence-resolving Eulerian two-phase model will be presented along with an expanded investigation of the dynamics of momentary bed failure.

  11. A SECOND-ORDER GODUNOV METHOD FOR MULTI-DIMENSIONAL RELATIVISTIC MAGNETOHYDRODYNAMICS

    SciTech Connect

    Beckwith, Kris; Stone, James M. E-mail: jstone@astro.princeton.edu

    2011-03-15

    We describe a new Godunov algorithm for relativistic magnetohydrodynamics (RMHD) that combines a simple, unsplit second-order accurate integrator with the constrained transport (CT) method for enforcing the solenoidal constraint on the magnetic field. A variety of approximate Riemann solvers are implemented to compute the fluxes of the conserved variables. The methods are tested with a comprehensive suite of multi-dimensional problems. These tests have helped us develop a hierarchy of correction steps that are applied when the integration algorithm predicts unphysical states due to errors in the fluxes, or errors in the inversion between conserved and primitive variables. Although used exceedingly rarely, these corrections dramatically improve the stability of the algorithm. We present preliminary results from the application of these algorithms to two problems in RMHD: the propagation of supersonic magnetized jets and the amplification of magnetic field by turbulence driven by the relativistic Kelvin-Helmholtz instability (KHI). Both of these applications reveal important differences between the results computed with Riemann solvers that adopt different approximations for the fluxes. For example, we show that the use of Riemann solvers that include both contact and rotational discontinuities can increase the strength of the magnetic field within the cocoon by a factor of 10 in simulations of RMHD jets and can increase the spectral resolution of three-dimensional RMHD turbulence driven by the KHI by a factor of two. This increase in accuracy far outweighs the associated increase in computational cost. Our RMHD scheme is publicly available as part of the Athena code.

  12. Stability of the replicator equation for a single species with a multi-dimensional continuous trait space.

    PubMed

    Cressman, Ross; Hofbauer, Josef; Riedel, Frank

    2006-03-21

    The replicator equation model for the evolution of individual behaviors in a single species with a multi-dimensional continuous trait space is developed as a dynamics on the set of probability measures. Stability of monomorphisms in this model using the weak topology is compared to more traditional methods of adaptive dynamics. For quadratic fitness functions and initial normal trait distributions, it is shown that the multi-dimensional continuously stable strategy (CSS) of adaptive dynamics is often relevant for predicting stability of the measure-theoretic model but may be too strong in general. For general fitness functions and trait distributions, the CSS is related to dominance solvability which can be used to characterize local stability for a large class of trait distributions that have no gaps in their supports whereas the stronger neighborhood invader strategy (NIS) concept is needed if the supports are arbitrary. PMID:16246372
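    The paper works with the replicator dynamics on probability measures over a continuous trait space; as a hedged illustration of the underlying dynamics, the familiar finite-strategy replicator equation dx_i/dt = x_i((Ax)_i - x·Ax) can be stepped numerically (the payoff matrix below is a toy example, not from the paper):

```python
# Finite-strategy replicator dynamics, stepped with forward Euler:
#   dx_i/dt = x_i * ((A x)_i - x . A x)
# where x is a vector of strategy frequencies and A a payoff matrix.

def replicator_step(x, A, dt):
    """One Euler step of the replicator dynamics for frequencies x."""
    n = len(x)
    fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    mean = sum(xi * fi for xi, fi in zip(x, fitness))  # population mean fitness
    return [xi + dt * xi * (fi - mean) for xi, fi in zip(x, fitness)]

# Toy game in which strategy 0 strictly dominates strategy 1.
A = [[2.0, 2.0],
     [1.0, 1.0]]
x = [0.5, 0.5]
for _ in range(200):
    x = replicator_step(x, A, dt=0.1)
print(x)  # the dominated strategy's frequency decays toward 0
```

    The measure-theoretic model in the paper replaces the sum over strategies with an integral over the trait space, and the stability notions (CSS, NIS) characterize when a monomorphic measure attracts nearby trait distributions.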

  13. Thin Pd membrane prepared on macroporous stainless steel tube filter by an in-situ multi-dimensional plating mechanism.

    PubMed

    Tong, Jianhua; Matsumura, Yasuyuki

    2004-11-01

    The large surface pores of a porous stainless steel (PSS) tube filter with marked roughness were jammed with aluminium hydroxide gel by a combination of ultrasonic vibration and vacuum suction; a thin dense Pd membrane (6 microm) was then plated in-situ on this pre-jammed filter by a multi-dimensional plating mechanism. After reopening the substrate pores by high-temperature treatment, high H2 permeance and complete H2 selectivity were obtained. PMID:15514815

  14. Effect of a Multi-Dimensional and Inter-Sectoral Intervention on the Adherence of Psychiatric Patients

    PubMed Central

    Pauly, Anne; Wolf, Carolin; Mayr, Andreas; Lenz, Bernd; Kornhuber, Johannes; Friedland, Kristina

    2015-01-01

    Background In psychiatry, hospital stays and transitions to the ambulatory sector are susceptible to major changes in drug therapy that lead to complex medication regimens and common non-adherence among psychiatric patients. A multi-dimensional and inter-sectoral intervention is hypothesized to improve the adherence of psychiatric patients to their pharmacotherapy. Methods 269 patients from a German university hospital were included in a prospective, open, clinical trial with consecutive control and intervention groups. Control patients (09/2012-03/2013) received usual care, whereas intervention patients (05/2013-12/2013) underwent a program to enhance adherence during their stay and up to three months after discharge. The program consisted of therapy simplification and individualized patient education (multi-dimensional component) during the stay and at discharge, as well as subsequent phone calls after discharge (inter-sectoral component). Adherence was measured by the “Medication Adherence Report Scale” (MARS) and the “Drug Attitude Inventory” (DAI). Results The improvement in the MARS score between admission and three months after discharge was 1.33 points (95% CI: 0.73–1.93) higher in the intervention group compared to controls. In addition, the DAI score improved 1.93 points (95% CI: 1.15–2.72) more for intervention patients. Conclusion These two findings indicate significantly higher medication adherence following the investigated multi-dimensional and inter-sectoral program. Trial Registration German Clinical Trials Register DRKS00006358 PMID:26437449

  15. Multi-dimensional complete ensemble empirical mode decomposition with adaptive noise applied to laser speckle contrast images.

    PubMed

    Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre

    2015-10-01

    Laser speckle contrast imaging (LSCI) is a noninvasive, full-field optical technique for analyzing the dynamics of microvascular blood flow. LSCI has attracted attention because it can image blood flow in different kinds of tissue with high spatial and temporal resolution. Additionally, it is simple and requires only low-cost devices. However, the physiological information that can be extracted directly from the images has not yet been completely determined. In this work, a novel multi-dimensional complete ensemble empirical mode decomposition with adaptive noise (MCEEMDAN) is introduced and applied to LSCI data recorded in three physiological conditions (rest, vascular occlusion, and post-occlusive reactive hyperaemia). MCEEMDAN relies on the improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), and our algorithm is specifically designed to analyze multi-dimensional data (such as images). Compared with the recent multi-dimensional ensemble empirical mode decomposition (MEEMD), MCEEMDAN has the advantage of leading to an exact reconstruction of the original data. The results show that MCEEMDAN leads to intrinsic mode functions and a residue that reveal hidden patterns in LSCI data. Moreover, these patterns differ with physiological state. MCEEMDAN appears to be a promising way to extract features from LSCI data and improve image understanding. PMID:25850087
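
The key property claimed above, that the extracted modes plus the residue reconstruct the input exactly, can be illustrated with a toy multiresolution decomposition. This is not the MCEEMDAN sifting algorithm itself; the moving-average detail operator and the widths are illustrative assumptions chosen only to make the telescoping reconstruction visible:

```python
import numpy as np

def moving_avg(x, w):
    # Centered moving average with edge padding; w must be odd so the
    # output keeps the input length.
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

def toy_decompose(x, widths=(3, 9, 27)):
    """Toy decomposition: each 'mode' is the detail removed by a
    progressively wider smoother. By construction, modes + residue sum
    back to the signal exactly -- the property MCEEMDAN guarantees for
    its intrinsic mode functions."""
    modes, residue = [], x.astype(float)
    for w in widths:
        smooth = moving_avg(residue, w)
        modes.append(residue - smooth)
        residue = smooth
    return modes, residue

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
x = (np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
     + 0.1 * rng.standard_normal(256))
modes, residue = toy_decompose(x)
recon = sum(modes) + residue  # exact reconstruction, to machine precision
```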

  16. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    PubMed

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases, and data mining processes. This paper discusses three principal design aspects of the data warehouse's database: 1) the granularity of the database, which refers to the level of detail or summarization of the data; 2) the database model and architecture, describing how data are presented to end users and how new data are integrated; 3) the life cycle of the database, which ensures long-term scalability of the environment. Both the organization of patient medical data as standardized elementary facts and the use of the multi-dimensional model have proved to be powerful design tools for integrating data from the multiple heterogeneous database systems that make up the transactional Hospital Information System (HIS). Concurrently, building the data warehouse incrementally has helped control the evolution of the data content. These three design aspects bring clarity and performance to data access. They also provide long-term scalability and resilience to further changes in the source systems feeding the data warehouse. PMID:16160254
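
The elementary-fact granularity and multi-dimensional (star-schema) model described above can be sketched with a hypothetical fact table and dimension table. All table names and values here are invented for illustration, not Archimed's actual schema:

```python
from collections import defaultdict

# Hypothetical star schema: one elementary fact per laboratory result,
# plus a small dimension table keyed by hospital stay.
stay_dim = {
    101: {"service": "cardiology", "year": 2004},
    102: {"service": "oncology", "year": 2004},
}
lab_facts = [  # grain: one row per (stay, test) measurement
    {"stay_id": 101, "test": "glucose", "value": 5.4},
    {"stay_id": 101, "test": "glucose", "value": 6.1},
    {"stay_id": 102, "test": "glucose", "value": 4.9},
]

# Roll the elementary facts up along one dimension (service) -- the kind
# of summarization an analytical end user would request from the warehouse.
counts = defaultdict(int)
for fact in lab_facts:
    counts[stay_dim[fact["stay_id"]]["service"]] += 1
```

Storing facts at the finest grain, as here, is what lets arbitrary later summarizations be computed without re-extracting from the source systems.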

  17. TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients.

    PubMed

    Loorak, Mona Hosseinkhani; Perin, Charles; Kamal, Noreen; Hill, Michael; Carpendale, Sheelagh

    2016-01-01

    We present TimeSpan, an exploratory visualization tool designed to provide a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool that helps improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients: every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency in stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time, ultimately to save lives and improve quality of life after stroke. First, we interviewed eight domain experts and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks that a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan combines stacked bar graphs, line charts, histograms, and a matrix visualization into an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons learned from abstracting the tasks and iteratively designing TimeSpan. PMID:26390482

  18. Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty

    PubMed Central

    Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. 
Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features. PMID:25505405
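
The consensus step described above, predicting a probabilistic label by combining the label distributions of the most similar training interneurons, can be sketched as a plain nearest-neighbor average. This is a simplification of the paper's LBN method: it ignores the Bayesian-network factorization over the five variables, and the features and type counts are hypothetical:

```python
import numpy as np

def consensus_label_distribution(x, X_train, P_train, k=3):
    """Predict a probabilistic label for x by averaging the label
    distributions of its k nearest training interneurons (Euclidean
    distance on morphometric parameters). A simplified stand-in for the
    paper's LBN consensus."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return P_train[nearest].mean(axis=0)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(20, 5))        # 5 hypothetical morphometric features
P_train = rng.dirichlet(np.ones(4), 20)   # distributions over 4 hypothetical types
p = consensus_label_distribution(X_train[0], X_train, P_train)
```

A crisp (non-probabilistic) prediction, as evaluated in the paper's comparison, would then be the argmax of the averaged distribution.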

  20. POLARIZED LINE FORMATION IN MULTI-DIMENSIONAL MEDIA. III. HANLE EFFECT WITH PARTIAL FREQUENCY REDISTRIBUTION

    SciTech Connect

    Anusha, L. S.; Nagendra, K. N.

    2011-09-01

    In two previous papers, we solved the polarized radiative transfer (RT) equation in multi-dimensional (multi-D) geometries with partial frequency redistribution as the scattering mechanism. We assumed Rayleigh scattering as the only source of linear polarization (Q/I, U/I) in both these papers. In this paper, we extend these previous works to include the effect of weak oriented magnetic fields (Hanle effect) on line scattering. We generalize the technique of Stokes vector decomposition in terms of the irreducible spherical tensors T^K_Q, developed by Anusha and Nagendra, to the case of RT with Hanle effect. A fast iterative method of solution (based on the Stabilized Preconditioned Bi-Conjugate-Gradient technique), developed by Anusha et al., is now generalized to the case of RT in magnetized three-dimensional media. We use the efficient short-characteristics formal solution method for multi-D media, generalized appropriately to the present context. The main results of this paper are the following: (1) a comparison of emergent (I, Q/I, U/I) profiles formed in one-dimensional (1D) media, with the corresponding emergent, spatially averaged profiles formed in multi-D media, shows that in the spatially resolved structures, the assumption of 1D may lead to large errors in linear polarization, especially in the line wings. (2) The multi-D RT in semi-infinite non-magnetic media causes a strong spatial variation of the emergent (Q/I, U/I) profiles, which is more pronounced in the line wings. (3) The presence of a weak magnetic field modifies the spatial variation of the emergent (Q/I, U/I) profiles in the line core, by producing significant changes in their magnitudes.

  1. Multi-Dimensional Broadband IR Radiative Forcing of Marine Stratocumulus in a Large Eddy Simulation Model

    SciTech Connect

    Mechem, David B.; Ovtchinnikov, Mikhail; Kogan, Y. L.; Davis, Anthony B; Cahalan, Robert F.; Takara, Ezra E.; Ellingson, Robert G.

    2002-06-03

    In order to address the interactive and evolutionary nature of the cloud-radiation interaction, we have coupled to a Large Eddy Simulation (LES) model the sophisticated multi-dimensional radiative transfer (MDRT) scheme of Evans (Spherical Harmonics Discrete Ordinate Method; 1998). Because of computational expense, we are currently able to run only 2D experiments. Preliminary runs consider only the broadband longwave component, in large part because IR cloud top cooling is the significant forcing mechanism for marine stratocumulus. Little difference is noted in the evolution of unbroken stratocumulus between three-hour runs using MDRT and the independent pixel approximation (IPA) for 2D domains of 50 km in the horizontal and 1.5 km in the vertical. Local heating rates differ slightly near undulating regions of cloud top, and a slight bias in mean heating rate from 1 to 3 h is present, yet the differences are never strong enough to result in a pronounced evolutionary bias in typical boundary layer metrics (e.g. inversion height, vertical velocity variance, TKE). Longer integration times may eventually produce a physical response to the bias in radiative cooling rates. A low-CCN case, designed to produce significant drizzle and induce cloud breakup, does show subtle differences between MDRT and IPA. Over the course of the 6-hour simulations, entrainment is slightly less in the MDRT case, and the transition to the surface-based trade cumulus regime is delayed. Mean cooling rates appear systematically weaker in the MDRT case, indicative of a less energetic PBL and reflected in profiles of vertical velocity variance and TKE.

  2. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  3. Multi-dimensional Conjunctive Operation Rule for the Water Supply System

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Tan, C. A.; CHEN, Y.; Tung, C.

    2011-12-01

    In recent years, floods and droughts have increased in both number and intensity: floods have become more severe during the wet season and droughts more serious during the dry season. In order to reduce their impact on agriculture, industry, and even human beings, the conjunctive use of surface water and groundwater has received much attention and become a new direction for future research. Traditionally, reservoir operation follows the operation rule curve to satisfy water demand and considers only the water levels at the reservoirs and the time series. The strategy used in the conjunctive-use management model is that water demand is first satisfied by the reservoirs operated based on the rule curves, and the deficit between demand and supply, if one exists, is covered by groundwater. In this study, we propose a new operation rule, named the multi-dimensional conjunctive operation rule curve (MCORC), which extends the concept of the reservoir operation rule curve. The MCORC is a three-dimensional curve applied to both surface water and groundwater. Three sets of parameters are considered simultaneously in the curve: water levels and the supply percentage at the reservoirs, groundwater levels and the supply percentage, and the time series. The zonation method and a heuristic algorithm are applied to optimize the curve subject to the constraints of the reservoir operation rules and the safe yield of the groundwater. The proposed conjunctive operation rule was applied to a water supply system analogous to an area in northern Taiwan. The results showed that the MCORC could increase the efficiency of water use and reduce the risk of serious water deficits.
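
The conjunctive-use strategy described above (reservoir supplies demand first, with releases restricted when the level falls below the rule curve; groundwater covers the remaining deficit up to its safe yield) can be sketched as follows. The 50% hedging factor and all numbers are illustrative assumptions, not the MCORC parameters:

```python
def allocate(demand, reservoir_level, rule_curve_level,
             reservoir_capacity_release, groundwater_safe_yield):
    """Toy single-period conjunctive-use allocation. Returns the surface
    supply, groundwater supply, and any remaining shortage."""
    # Hedge the release when the reservoir is below its rule curve.
    hedging = 1.0 if reservoir_level >= rule_curve_level else 0.5
    surface = min(demand, hedging * reservoir_capacity_release)
    deficit = demand - surface
    ground = min(deficit, groundwater_safe_yield)
    return surface, ground, deficit - ground

# Illustrative dry-season period: reservoir below the rule curve.
surface, ground, shortage = allocate(
    demand=100.0, reservoir_level=40.0, rule_curve_level=60.0,
    reservoir_capacity_release=120.0, groundwater_safe_yield=30.0)
```

The MCORC generalizes this by making the hedging decision a function of reservoir level, groundwater level, and time simultaneously rather than of the reservoir level alone.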

  4. MULTI-DIMENSIONAL FEATURES OF NEUTRINO TRANSFER IN CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Sumiyoshi, K.; Takiwaki, T.; Matsufuru, H.; Yamada, S. E-mail: takiwaki.tomoya@nao.ac.jp E-mail: shoichi@heap.phys.waseda.ac.jp

    2015-01-01

    We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ∼20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (∼2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.

  5. Detection and analysis of multi-dimensional pulse wave based on optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Shen, Yihui; Li, Zhifang; Li, Hui; Chen, Haiyu

    2014-11-01

    Pulse diagnosis is an important method of traditional Chinese medicine (TCM): doctors diagnose a patient's physiological and pathological status through palpation of the radial artery. Optical coherence tomography (OCT) is a useful tool for medical optical research. Current conventional diagnostic devices function only as pressure sensors to detect the pulse wave, which only partially reflects what doctors feel and loses large amounts of useful information. In this paper, the microscopic changes of the surface skin above the radial artery were studied in the form of images based on OCT. The deformation of the surface skin over a cardiac cycle, caused by the arterial pulse, is detected by OCT, and the patient's pulse wave is calculated through image processing. The result is in good agreement with that obtained by a pulse analyzer, and the patient's physiological and pathological status can be monitored in real time. This research provides a new method for pulse diagnosis in traditional Chinese medicine.

  6. Correlation network analysis for multi-dimensional data in stocks market

    NASA Astrophysics Data System (ADS)

    Kazemilari, Mansooreh; Djauhari, Maman Abdurachman

    2015-07-01

    This paper shows how the concept of vector correlation can appropriately measure the similarity among multivariate time series in a stock network. The motivation of this paper is (i) to apply the RV coefficient to define the network among stocks, where each stock is represented by a multivariate time series; (ii) to analyze that network in terms of the topological structure of its minimum spanning trees; and (iii) to compare the network topology between the univariate correlation network based on r and the multivariate correlation network based on the RV coefficient.
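
A minimal sketch of the RV coefficient underlying the network construction, using its standard definition on column-centered data matrices (the example data are synthetic, not stock series):

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two multivariate series with the same rows
    (time points). It generalizes the squared correlation: 0 for
    unrelated configurations, 1 when one configuration is an orthogonal
    transformation and/or rescaling of the other."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T  # inter-timepoint cross-product matrices
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3))  # e.g. one stock's (open, close, volume) series
B = 2.0 * A                   # a perfectly related configuration
rv_ab = rv_coefficient(A, B)  # equals 1 for a pure rescaling
```

One common choice for building the minimum spanning tree is then the distance sqrt(2 * (1 - RV)) between each pair of stocks, mirroring the Mantegna-style univariate construction.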

  7. Multi-Dimensional Astrophysical Structural and Dynamical Analysis: I. Development of a Nonlinear Finite Element Approach

    NASA Technical Reports Server (NTRS)

    Meier, D. L.

    1998-01-01

    A new field of numerical astrophysics is introduced which addresses the solution of large, multidimensional structural or slowly-evolving problems (rotating stars, interacting binaries, thick advective accretion disks, four dimensional spacetimes, etc.), as well as rapidly evolving systems.

  8. Semiquantal molecular dynamics simulations of hydrogen-bond dynamics in liquid water using multi-dimensional Gaussian wave packets.

    PubMed

    Ono, Junichi; Ando, Koji

    2012-11-01

    A semiquantal (SQ) molecular dynamics (MD) simulation method based on an extended Hamiltonian formulation has been developed using multi-dimensional thawed Gaussian wave packets (WPs), and applied to an analysis of hydrogen-bond (H-bond) dynamics in liquid water. A set of Hamilton's equations of motion in an extended phase space, which includes variance-covariance matrix elements as auxiliary coordinates representing anisotropic delocalization of the WPs, is derived from the time-dependent variational principle. The present theory allows us to perform real-time and real-space SQMD simulations and analyze nuclear quantum effects on dynamics in large molecular systems in terms of anisotropic fluctuations of the WPs. Introducing the Liouville operator formalism in the extended phase space, we have also developed an explicit symplectic algorithm for the numerical integration, which can provide greater stability in long-time SQMD simulations. The application of the present theory to H-bond dynamics in liquid water is carried out under a single-particle approximation in which the variance-covariance matrix and the corresponding canonically conjugate matrix are reduced to block-diagonal structures by neglecting the interparticle correlations. As a result, it is found that the anisotropy of the WPs is indispensable for reproducing the disordered H-bond network compared to the classical counterpart, when the potential model used provides competing quantum effects between intra- and intermolecular zero-point fluctuations. In addition, significant WP delocalization along the out-of-plane direction of the jumping hydrogen atom, associated with the concerted breaking and forming of H-bonds, has been detected in the H-bond exchange mechanism. The relevance of the dynamical WP broadening to the relaxation of H-bond number fluctuations has also been discussed. The present SQ method provides a novel framework for investigating nuclear quantum dynamics in many-body molecular systems in which the local anisotropic fluctuations of nuclear WPs play an essential role. PMID:23145735

  9. Numerical simulation of multi-dimensional acoustic propagation in air including the effects of molecular relaxation

    NASA Astrophysics Data System (ADS)

    Wochner, Mark

    A computational acoustic propagation model based upon the Navier-Stokes equations is created that is able to simulate the effects of absorption and dispersion due to shear viscosity, bulk viscosity, thermal conductivity and molecular relaxation of nitrogen and oxygen in one or two dimensions. The model uses a fully nonlinear constitutive equation set that is closed using a thermodynamic entropy relation and a van der Waals equation of state. The use of the total variables in the equations, rather than the perturbed (acoustical) variables, allows for the extension of the model to include wind, temperature profiles, and other frequency-independent conditions. The method of including sources in the model also allows for the incorporation of multiple spatially and temporally complex sources. Two numerical methods are used for the solution of the constitutive equations: a dispersion relation preserving scheme, which is shown to be efficient and accurate but unsuitable for shock propagation; and a weighted essentially non-oscillatory scheme, which is shown to be able to stably propagate shocks but at considerable computational cost. Both of these algorithms are utilized in this investigation because their individual strengths are appropriate for different situations. It is shown that these models are able to accurately recreate many acoustical phenomena. Wave steepening in a lossless and thermoviscous medium is compared to the Fubini solution and Mendousse's solution to the Burgers equation, respectively, and the Fourier component amplitudes of the first harmonics are shown to differ from these solutions by at most 0.21%. Nonlinear amplification factors at rigid boundaries for high incident pressures are compared to the Pfriem solution and shown to differ by at most 0.015%. Modified classical absorption, nitrogen relaxation absorption, and oxygen relaxation absorption are shown to differ from the analytical solutions by at most 1%. 
Finally, the dispersion due to nitrogen and oxygen relaxation is also shown to differ from the analytical solutions by at most 1%. It is believed that higher resolution grids would decrease the error in all of these simulations. A number of simulations that do not have explicit analytical solutions are then discussed. To demonstrate the model's ability to propagate multi-dimensional shocks in two dimensions, the formation of a Mach stem is simulated. (Abstract shortened by UMI.)

  10. Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data

    NASA Astrophysics Data System (ADS)

    Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.

    2013-12-01

    Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast, high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical and, therefore, interpretation is commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but there are areas of higher dimensionality that are not recovered correctly by 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values as well as among various two-dimensional (2-D) and 3-D codes is essential. At the center of a large structure (a few hundred meters edge length), and above the background structure at some distance from the anomaly, the 2-D and 3-D values should match the 1-D solution. Higher dimensional conditions are present at the edges of the anomaly and, therefore, only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree (especially codes based on different methods and/or written by different programmers), the more reliable the obtained synthetic data set. Very simple structures, such as a conductive or resistive block embedded in a homogeneous or layered half-space without topography and with a constant sensor height, were chosen to calculate synthetic data. For the comparison, one 2-D finite element code and numerous 3-D codes based on finite difference, finite element, and integral equation approaches were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps toward more realistic field data settings for forward modeling will be discussed. 
As the driving engine of an inversion algorithm is its forward solver, applying inversion codes to HEM data is only sensible once the forward modeling results are reliable (and their limits and weaknesses are known and manageable).

  11. Developmental Work Personality Scale: An Initial Analysis.

    ERIC Educational Resources Information Center

    Strauser, David R.; Keim, Jeanmarie

    2002-01-01

    The research reported in this article involved using the Developmental Model of Work Personality to create a scale to measure work personality, the Developmental Work Personality Scale (DWPS). Overall, results indicated that the DWPS may have potential applications for assessing work personality prior to client involvement in comprehensive

  12. Efficient gradient field generation providing a multi-dimensional arbitrary shifted field-free point for magnetic particle imaging

    SciTech Connect

    Kaethner, Christian; Ahlborg, Mandy; Buzug, Thorsten M.; Knopp, Tobias; Sattel, Timo F.

    2014-01-28

    Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve reasonable image quality. Therefore, a power optimization of the coil configuration is essential. In order to realize a multi-dimensional, efficient gradient field generator, the following improvements over conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step toward the feasibility of building full-body human MPI scanners.

  13. Dynamical scaling analysis of plant callus growth

    NASA Astrophysics Data System (ADS)

    Galeano, J.; Buceta, J.; Juarez, K.; Pumario, B.; de la Torre, J.; Iriondo, J. M.

    2003-07-01

    We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.
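
For context, the classic fixed-size Family-Vicsek scaling hypothesis, which the paper generalizes to systems whose size evolves in time, predicts that width curves for different system sizes collapse onto a single function after rescaling. A synthetic check of that collapse (the scaling function and exponent values here are illustrative, not the paper's measured ones):

```python
import numpy as np

# Family-Vicsek form: W(L, t) = L**alpha * f(t / L**z), where f(u) grows
# as u**beta for small u (beta = alpha / z) and saturates for large u.
alpha, z = 0.5, 1.5
beta = alpha / z

def width(L, t):
    """Synthetic interface width obeying the scaling form, with a toy
    crossover scaling function f(u) = u**beta / (1 + u**beta)."""
    u = t / L**z
    return L**alpha * (u**beta / (1.0 + u**beta))

L1, L2 = 32.0, 128.0
t = np.logspace(-1, 4, 50)
# Rescaled curves W / L**alpha versus t / L**z should coincide:
c1 = width(L1, t) / L1**alpha
c2 = width(L2, (L2**z / L1**z) * t) / L2**alpha
```

Observing such a collapse across Brassica oleracea and Brassica rapa under different growth conditions is what justifies the claim of a shared universality class.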

  14. A Dual Scaling Analysis for Paired Compositions

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    1971-01-01

    A sensitive measurement model is developed that provides transactional scale values for individual members of a group, as well as an evaluation of pairwise interactions and balances that are emergent properties of the group itself. (Author/DG)

  15. Convective scale weather analysis and forecasting

    NASA Technical Reports Server (NTRS)

    Purdom, J. F. W.

    1984-01-01

    How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to observe frequently the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.

  16. Determination of aromatic amines in human urine using comprehensive multi-dimensional gas chromatography mass spectrometry (GCxGC-qMS).

    PubMed

    Lamani, Xolelwa; Horst, Simeon; Zimmermann, Thomas; Schmidt, Torsten C

    2015-01-01

    Aromatic amines are an important class of harmful components of cigarette smoke. Nevertheless, only a few of them have been reported to occur in urine, which raises questions on the fate of these compounds in the human body. Here we report on the results of a new analytical method, in situ derivatization solid phase microextraction (SPME) multi-dimensional gas chromatography mass spectrometry (GCxGC-qMS), that allows for a comprehensive fingerprint analysis of the substance class in complex matrices. Due to the high polarity of amino compounds, the complex urine matrix, and the prevalence of conjugated anilines, pretreatment steps such as acidic hydrolysis, liquid-liquid extraction (LLE), and derivatization of amines to their corresponding aromatic iodine compounds are necessary. Prior to detection, the derivatives were enriched by headspace SPME with the extraction efficiency of the SPME fiber ranging between 65 % and 85 %. The measurements were carried out in full scan mode with conservatively estimated limits of detection (LOD) in the range of several ng/L and relative standard deviation (RSD) less than 20 %. More than 150 aromatic amines have been identified in the urine of a smoking person, including alkylated and halogenated amines as well as substituted naphthylamines. Also in the urine of a non-smoker, a number of aromatic amines have been identified, which suggests that the detection of biomarkers in urine samples using a more comprehensive analysis as detailed in this report may be essential to complement the approach of the use of classic biomarkers. PMID:25142049

  17. Minimum Sample Size Requirements for Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…

  18. Confirmatory Factor Analysis of the Cancer Locus of Control Scale.

    ERIC Educational Resources Information Center

    Henderson, Jessica W.; Donatelle, Rebecca J.; Acock, Alan C.

    2002-01-01

    Conducted a confirmatory factor analysis of the Cancer Locus of Control scale (M. Watson and others, 1990), administered to 543 women with a history of breast cancer. Results support a three-factor model of the scale and support use of the scale to assess control dimensions. (SLD)

  19. Minimum Sample Size Requirements for Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum

  20. Detection of crossover time scales in multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Ge, Erjia; Leung, Yee

    2013-04-01

    Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
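
    The regression idea above can be sketched numerically: a minimal monofractal (q = 2, order-1 detrending) DFA fluctuation function, plus a brute-force two-segment log-log fit that places the crossover where the total squared error is smallest. Function names and the break-selection rule are illustrative, not the authors' procedure.

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """Order-1 DFA fluctuation function F(s) of a 1-D series (q = 2)."""
    y = np.cumsum(x - np.mean(x))                      # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        msr = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)               # local linear trend
            msr.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msr)))
    return np.array(F)

def crossover_scale(scales, F):
    """Two-segment log-log regression; returns the break scale minimising SSE."""
    ls, lF = np.log(scales), np.log(F)
    best, best_sse = None, np.inf
    for k in range(2, len(ls) - 2):                    # candidate break points
        sse = 0.0
        for sl in (slice(None, k), slice(k, None)):
            c = np.polyfit(ls[sl], lF[sl], 1)
            sse += np.sum((lF[sl] - np.polyval(c, ls[sl])) ** 2)
        if sse < best_sse:
            best, best_sse = scales[k], sse
    return best
```

    A confidence interval for the break, as proposed in the paper, would require the full statistical inference machinery; the sketch only locates the least-squares break.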

  1. Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR

    SciTech Connect

    Ishii, M.; Ravankar, S.T.; Dowlati, R.

    1996-04-01

    The scaled facility design was based on the three-level scaling method. The first level is based on the well established approach obtained from the integral response function, namely integral scaling; this level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level scaling is for the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling, corresponding to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time runs twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), including the safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
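
    The quoted ratios are mutually consistent under reduced-height integral scaling, where the time ratio goes as the square root of the length (height) ratio; a quick arithmetic check (the square-root time relation is the standard result assumed here):

```python
import math

# Consistency check of the PUMA ratios under ideal reduced-height scaling.
height_ratio = 1 / 4     # model/prototype height
area_ratio = 1 / 100     # model/prototype flow area

volume_ratio = height_ratio * area_ratio   # 1/400, as reported
time_ratio = math.sqrt(height_ratio)       # 1/2 -> model time runs twice as fast
power_ratio = volume_ratio / time_ratio    # energy ~ volume, so power = 1/200
```
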

  2. Multi-Dimensional, Mesoscopic Monte Carlo Simulations of Inhomogeneous Reaction-Drift-Diffusion Systems on Graphics-Processing Units

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001

  3. Statistical approach and benchmarking for modeling of multi-dimensional behavior in TRISO-coated fuel particles

    NASA Astrophysics Data System (ADS)

    Miller, Gregory K.; Petti, David A.; Varacalle, Dominic J.; Maki, John T.

    2003-04-01

    The fundamental design for a gas-cooled reactor relies on the behavior of the coated particle fuel. The coating layers, termed the TRISO coating, act as a mini-pressure vessel that retains fission products. Results of US irradiation experiments show that many more fuel particles have failed than can be attributed to one-dimensional pressure vessel failures alone. Post-irradiation examinations indicate that multi-dimensional effects, such as the presence of irradiation-induced shrinkage cracks in the inner pyrolytic carbon layer, contribute to these failures. To address these effects, the methods of prior one-dimensional models are expanded to capture the stress intensification associated with multi-dimensional behavior. An approximation of the stress levels enables the treatment of statistical variations in numerous design parameters and Monte Carlo sampling over a large number of particles. The approach is shown to make reasonable predictions when used to calculate failure probabilities for irradiation experiments of the New Production - Modular High Temperature Gas Cooled Reactor Program.
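
    The statistical treatment described above can be caricatured as follows: sample a layer stress and a Weibull-distributed strength for each particle and count exceedances. The distributions and parameter values are illustrative assumptions, not the paper's fuel-performance model.

```python
import numpy as np

def failure_probability(n, stress_mean, stress_sd, weibull_m, char_strength, seed=0):
    """Monte Carlo estimate of the particle failure fraction.

    A particle 'fails' when its sampled layer stress exceeds a
    Weibull-sampled strength (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    stress = rng.normal(stress_mean, stress_sd, n)          # MPa, assumed normal
    strength = char_strength * rng.weibull(weibull_m, n)    # MPa, assumed Weibull
    return float(np.mean(stress > strength))
```

    Multi-dimensional stress intensification, as in the paper, would enter by modifying the sampled stress for particles with cracked inner pyrolytic carbon layers.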

  4. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  5. Relations Between Multidimensional Scaling and Three-Mode Factor Analysis.

    ERIC Educational Resources Information Center

    Tucker, Ledyard R.

    Two lines of psychometric interest are combined: a) multidimensional scaling and, b) factor analysis. This is achieved by employing three-mode factor analysis of scalar product matrices, one for each subject. Two of the modes are the group of objects scaled and the third is the sample of subjects. Resulting from this are, an object space, a person…

  6. Multidimensional Scaling versus Components Analysis of Test Intercorrelations.

    ERIC Educational Resources Information Center

    Davison, Mark L.

    1985-01-01

    Considers the relationship between coordinate estimates in components analysis and multidimensional scaling. Reports three small Monte Carlo studies comparing nonmetric scaling solutions to components analysis. Results are related to other methodological issues surrounding research on the general ability factor, response tendencies in…

  7. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    SciTech Connect

    Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A; Philip, Bobby; Pannala, Sreekanth

    2014-01-01

    A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. The formulation, based on a volume averaging process, is a widely used concept for multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Due to irregular geometric configurations, including porous electrodes, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.

  8. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    NASA Astrophysics Data System (ADS)

    Allu, S.; Velamur Asokan, B.; Shelton, W. A.; Philip, B.; Pannala, S.

    2014-06-01

    A generalized three dimensional computational model based on unified formulation of electrode-electrolyte system of an electric double layer supercapacitor has been developed. This model accounts for charge transport across the electrode-electrolyte system. It is based on volume averaging, a widely used technique in multiphase flow modeling ([1,2]) and is analogous to porous media theory employed for electrochemical systems [3-5]. A single-domain approach is considered in the formulation where there is no need to model the interfacial boundary conditions explicitly as done in prior literature ([6]). Spatio-temporal variations, anisotropic physical properties, and upscaled parameters from lower length-scale simulations and experiments can be easily introduced in the formulation. Model complexities like irregular geometric configuration, porous electrodes, charge transport and related performance characteristics of the supercapacitor can be effectively captured in higher dimensions. This generalized model also provides insight into the applicability of 1D models ([6]) and where multidimensional effects need to be considered. A sensitivity analysis is presented to ascertain the dependence of the charge and discharge processes on key model parameters. Finally, application of the formulation to non-planar supercapacitors is presented.

  9. Multi-Scale Modeling, Surrogate-Based Analysis, and Optimization of Lithium-Ion Batteries for Vehicle Applications

    NASA Astrophysics Data System (ADS)

    Du, Wenbo

    A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
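
    The combined surrogate- and gradient-based step can be sketched with a toy two-variable (cathode thickness, porosity) response surface: fit a quadratic surrogate by least squares, then follow its analytic gradient to the optimum. The quadratic form, variable names, and toy objective are assumptions for illustration, not the dissertation's models.

```python
import numpy as np

def fit_quadratic_surrogate(X, y):
    """Least-squares quadratic response surface in two design variables."""
    t, p = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(t), t, p, t * t, p * p, t * p])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def surrogate(coef, t, p):
    """Evaluate the fitted surface at (thickness t, porosity p)."""
    return (coef[0] + coef[1] * t + coef[2] * p
            + coef[3] * t * t + coef[4] * p * p + coef[5] * t * p)

def gradient_ascent(coef, x0, lr=0.05, steps=500):
    """Maximise the surrogate by following its analytic gradient."""
    t, p = x0
    for _ in range(steps):
        gt = coef[1] + 2 * coef[3] * t + coef[5] * p
        gp = coef[2] + 2 * coef[4] * p + coef[5] * t
        t, p = t + lr * gt, p + lr * gp
    return t, p
```

    In practice the surrogate would be fit to expensive cell-model evaluations rather than an analytic function, and uncertainty in the fit quantified before trusting the optimum.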

  10. Quality Assessment in Early Childhood Programs: A Multi-Dimensional Approach.

    ERIC Educational Resources Information Center

    Fiene, Richard; Melnick, Steven A.

    The relationships among independent observer ratings of a child care program on the Early Childhood Environment Rating Scale (ECERS), state department personnel ratings of program quality using the Child Development Program Evaluation Scale (CDPES), and self-evaluation ratings using the self-assessment instrument designed for the Early Childhood

  11. Single Parameter Galaxy Classification: The Principal Curve through the Multi-dimensional Space of Galaxy Properties

    NASA Astrophysics Data System (ADS)

    Taghizadeh-Popp, M.; Heinis, S.; Szalay, A. S.

    2012-08-01

    We propose to describe the variety of galaxies from the Sloan Digital Sky Survey by using only one affine parameter. To this aim, we construct the principal curve (P-curve) passing through the spine of the data point cloud, considering the eigenspace derived from Principal Component Analysis (PCA) of morphological, physical, and photometric galaxy properties. Thus, galaxies can be labeled, ranked, and classified by a single arc-length value of the curve, measured at the unique closest projection of the data points on the P-curve. We find that the P-curve has a "W" letter shape with three turning points, defining four branches that represent distinct galaxy populations. This behavior is controlled mainly by two properties, namely u - r and star formation rate (from blue young at low arc length to red old at high arc length), while most other properties correlate well with these two. We further present the variations of several important galaxy properties as a function of arc length. Luminosity functions vary from steep Schechter fits at low arc length to double power law and ending in lognormal fits at high arc length. Galaxy clustering shows increasing autocorrelation power at large scales as arc length increases. Cross correlation of galaxies with different arc lengths shows that the probability of two galaxies belonging to the same halo decreases as their distance in arc length increases. PCA analysis allows us to find peculiar galaxy populations located apart from the main cloud of data points, such as small red galaxies dominated by a disk, of relatively high stellar mass-to-light ratio and surface mass density. On the other hand, the P-curve helped us understand the average trends, encoding 75% of the available information in the data. 
The P-curve allows not only dimensionality reduction but also provides supporting evidence for the following relevant physical models and scenarios in extragalactic astronomy: (1) The hierarchical merging scenario in the formation of a selected group of red massive galaxies. These galaxies present a lognormal r-band luminosity function, which might arise from multiplicative processes involved in this scenario. (2) A connection between the onset of active galactic nucleus activity and star formation quenching as mentioned in Martin et al., which appears in green galaxies transitioning from blue to red populations.
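
    Ranking objects by a single arc-length coordinate can be illustrated in the degenerate case where the principal curve is a straight line, i.e. the first PCA axis; `pc1_arclength` is a hypothetical helper for that special case, not the authors' P-curve algorithm.

```python
import numpy as np

def pc1_arclength(X):
    """Arc-length coordinate along the first principal axis: the degenerate
    (straight-line) special case of a principal curve."""
    Xc = X - X.mean(axis=0)
    # first right-singular vector = first principal direction
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    s = Xc @ Vt[0]              # signed projection of each sample
    return s - s.min()          # shift so the 'curve' starts at zero
```

    A genuine P-curve would bend through the data cloud, so the projection step becomes a nearest-point search along the fitted curve instead of a dot product.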

  12. Chemical information based scaling of molecular descriptors: a universal chemical scale for library design and analysis.

    PubMed

    Tounge, Brett A; Pfahler, Lori B; Reynolds, Charles H

    2002-01-01

    Scaling is a difficult issue for any analysis of chemical properties or molecular topology when disparate descriptors are involved. To compare properties across different data sets, a common scale must be defined. Using several publicly available databases (ACD, CMC, MDDR, and NCI) as a basis, we propose to define chemically meaningful scales for a number of molecular properties and topology descriptors. These chemically derived scaling functions have several advantages. First, it is possible to define chemically relevant scales, greatly simplifying similarity and diversity analyses across data sets. Second, this approach provides a convenient method for setting descriptor boundaries that define chemically reasonable topology spaces. For example, descriptors can be scaled so that compounds with little potential for biological activity, bioavailability, or other drug-like characteristics are easily identified as outliers. We have compiled scaling values for 314 molecular descriptors. In addition the 10th and 90th percentile values for each descriptor have been calculated for use in outlier filtering. PMID:12132889
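
    The percentile-based scaling and outlier filtering described above might look like this in outline (function names are illustrative; the 10th/90th percentile bounds follow the text):

```python
import numpy as np

def percentile_scale(values, ref):
    """Scale a descriptor onto [0, 1] between the reference collection's
    10th and 90th percentiles; values outside the range land below 0 or above 1."""
    p10, p90 = np.percentile(ref, [10, 90])
    return (np.asarray(values, float) - p10) / (p90 - p10)

def is_outlier(scaled):
    """Flag scaled descriptor values outside the chemically 'reasonable' band."""
    return (scaled < 0) | (scaled > 1)
```

    With scaling functions precomputed from a large reference database (as the authors do for 314 descriptors), any new library can be mapped onto the same chemical scale without refitting.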

  13. Local variance for multi-scale analysis in geomorphometry.

    PubMed

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-07-15

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
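
    Steps 2-3 of the method (LV as the mean 3 × 3 moving-window SD, then its rate of change across scale levels) can be sketched as:

```python
import numpy as np

def local_variance(param):
    """Mean 3x3 moving-window standard deviation of a land-surface parameter grid."""
    windows = np.lib.stride_tricks.sliding_window_view(param, (3, 3))
    return float(windows.std(axis=(-2, -1)).mean())

def roc_lv(lv):
    """Rate of change of LV from one scale level to the next, in percent."""
    lv = np.asarray(lv, float)
    return 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]
```

    In the full method, `local_variance` is evaluated once per up-scaled level of the parameter grid, and peaks in the resulting ROC-LV curve mark candidate characteristic scales.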

  14. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138

  15. Scaling Methodology for the Direct ECC Bypass During LBLOCA Reflood Phase With Direct Vessel Injection System: Its Development and Validation

    SciTech Connect

    Cho, H.K.; Park, G.C.; Yun, B.J.; Kwon, T.S.; Song, C.H.

    2002-07-01

    From the two-dimensional two-fluid model, a new scaling methodology, named the 'modified linear scaling', is suggested for the scientific design of a scaled-down experimental facility and data analysis of the direct ECC bypass under the LBLOCA reflood phase. The characteristics of the scaling law are that velocity is scaled by a Wallis-type parameter and that the aspect ratio of the experimental facility preserves that of the prototype. For the experimental validation of the proposed scaling law, air-water tests for direct ECC bypass were performed in the 1/4.0 and 1/7.3 scaled UPTF downcomer test sections. The obtained data are compared with those of UPTF Test 21-D. It is found that the modified linear scaling methodology is appropriate for the preservation of multi-dimensional flow phenomena in the downcomer annulus, such as direct ECC bypass. (authors)
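
    A Wallis-type dimensionless velocity has the standard form j* = j · sqrt(ρ / (g D Δρ)); preserving j* between the prototype and a geometrically similar model with the same fluids makes the velocity ratio the square root of the length ratio. A sketch under those assumptions (the exact parameter definition used in the paper may differ):

```python
import math

def wallis_jstar(j, rho, drho, D, g=9.81):
    """Wallis-type dimensionless superficial velocity j* = j*sqrt(rho/(g*D*drho))."""
    return j * math.sqrt(rho / (g * D * drho))

def model_velocity(j_proto, D_proto, D_model):
    """Model velocity that preserves j* (same fluids, geometric similarity)."""
    return j_proto * math.sqrt(D_model / D_proto)
```
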

  16. Discrete implementations of scale transform

    NASA Astrophysics Data System (ADS)

    Djurdjanovic, Dragan; Williams, William J.; Koh, Christopher K.

    1999-11-01

    Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable in the analysis and interpretation of signals subject to scale changes. A number of one-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis, and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of uniformly sampled signals has also been derived. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept is used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations for the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain using the novel implementation of the separable two-dimensional scale transform pair are presented.
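
    The log-sampling-plus-FFT implementation mentioned above can be sketched as follows. This is the textbook discretization of the scale transform D(c) = (2π)^(-1/2) ∫ f(t) t^(-jc-1/2) dt (substituting t = e^τ turns it into a Fourier transform over log-time with an e^(τ/2) weight), not the paper's modified resampling-free algorithm; a boundary-dependent phase factor is dropped, which leaves magnitudes unaffected.

```python
import numpy as np

def scale_transform(f, t):
    """Scale transform of a uniformly sampled signal via log-resampling + FFT.

    f, t: signal samples and their strictly positive time axis."""
    n = len(t)
    tau = np.log(t)
    tau_u = np.linspace(tau[0], tau[-1], n)      # uniform grid in log-time
    g = np.interp(tau_u, tau, f)                 # the resampling step
    d = tau_u[1] - tau_u[0]
    # weight by exp(tau/2) (the t**(-1/2) measure), then FFT over log-time;
    # a phase factor exp(-j*c*tau_u[0]) is omitted -- magnitudes unaffected
    D = np.fft.fft(g * np.exp(tau_u / 2)) * d / np.sqrt(2 * np.pi)
    c = 2 * np.pi * np.fft.fftfreq(n, d)         # scale variable
    return c, D
```
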

  17. Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

    USGS Publications Warehouse

    Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.

    2003-01-01

    This paper addresses the problem of how to capture the complex relationships that exist between process variables and how to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of the real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using sophisticated visualisation, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysis and diagnosis tool for understanding the system behaviour and extracting the knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
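
    A minimal Kohonen SOM, including the "component plane" view used above, can be sketched in a few dozen lines; the grid size and decay schedules below are arbitrary choices for illustration, not those of the study.

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organising feature map (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    n_units = gy * gx
    W = rng.standard_normal((n_units, data.shape[1]))          # codebook vectors
    pos = np.stack(np.unravel_index(np.arange(n_units), grid), axis=1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]                      # random sample
        bmu = np.argmin(np.sum((W - x) ** 2, axis=1))          # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                                # linear decay
        sigma = sigma0 * (1.0 - frac) + 0.5                    # shrinking radius
        h = np.exp(-np.sum((pos - pos[bmu]) ** 2, axis=1) / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                         # pull neighbourhood
    return W

def component_plane(W, grid, feature):
    """One process variable's values across the map: the 'component plane'."""
    return W[:, feature].reshape(grid)
```

    Inspecting one component plane per process variable is what lets local inter-relationships (e.g. two variables that co-vary only in part of the map) be read off visually.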

  18. Adaptive absorbing boundary conditions for Schrödinger-type equations: Application to nonlinear and multi-dimensional problems

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Han, Houde; Wu, Xiaonan

    2007-08-01

    We propose an adaptive approach to picking the wave-number parameter of absorbing boundary conditions for Schrödinger-type equations. Based on the Gabor transform, which captures local frequency information in the vicinity of artificial boundaries, the parameter is determined by an energy-weighted method and yields quasi-optimal absorbing boundary conditions. It is shown that this approach can minimize reflected waves even when the wave function is composed of waves with different group velocities. We also extend the split local absorbing boundary (SLAB) method [Z. Xu, H. Han, Phys. Rev. E 74 (2006) 037704] to multi-dimensional nonlinear problems by coupling it with the adaptive approach. Numerical examples of nonlinear Schrödinger equations in one and two dimensions are presented to demonstrate the properties of the discussed absorbing boundary conditions.

  19. High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan R. (Technical Monitor)

    2002-01-01

    We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], combined with the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].

  20. The Multi-Dimensional Blood/Injury Phobia Inventory: its psychometric properties and relationship with disgust propensity and disgust sensitivity.

    PubMed

    van Overveld, Mark; de Jong, Peter J; Peters, Madelon L

    2011-04-01

    The Multi-Dimensional Blood Phobia Inventory (MBPI; Wenzel & Holt, 2003) is the only instrument available that assesses both disgust and anxiety for blood-phobic stimuli. As inflated levels of disgust propensity (i.e., tendency to experience disgust more readily) are often observed in blood phobia, the MBPI appears a promising instrument for disgust research. First, we examined its psychometric properties. Next, it was examined whether disgust sensitivity (i.e., considering experiencing disgust as something horrid) had added predictive value compared to disgust propensity in blood phobia. Therefore, students and university employees (N = 616) completed the MBPI, indices on blood phobia, disgust propensity and sensitivity. The MBPI proved to be reliable and valid. Further, it correlated moderately to high with disgust propensity and sensitivity. Additionally, disgust propensity and sensitivity were both significant predictors for blood phobia. In conclusion, the MBPI appears a valuable addition to the currently available arsenal of indices to investigate blood phobia. PMID:21075592

  1. High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present the first fifth order, semi-discrete central upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

  2. Multi-dimensional instability of obliquely propagating ion acoustic solitary waves in electron-positron-ion superthermal magnetoplasmas

    SciTech Connect

    EL-Shamy, E. F.

    2014-08-15

    The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) are investigated in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. The study of the present model is helpful for explaining the propagation and the instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.

  3. Multi-dimensional titanium dioxide with desirable structural qualities for enhanced performance in quantum-dot sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Wu, Dapeng; He, Jinjin; Zhang, Shuo; Cao, Kun; Gao, Zhiyong; Xu, Fang; Jiang, Kai

    2015-05-01

    Multi-dimensional TiO2 hierarchical structures (MD-THS) assembled from mesoporous nanoribbons consisting of orientationally aligned nanocrystals are prepared via thermal decomposition of a Ti-containing gelatin-like precursor. A unique bridge-linking mechanism is proposed to illustrate the formation process of the precursor. Moreover, the as-prepared MD-THS possesses a high surface area of ~106 cm2 g-1, a broad pore size distribution from several nanometers to ~100 nm, and orientationally assembled primary nanocrystals, which give rise to a high CdS/CdSe quantum dot loading amount and inhibit carrier recombination in the photoanode. Thanks to these structural advantages, the cell derived from MD-THS demonstrates a power conversion efficiency (PCE) of 4.15%, representing a ~36% improvement over that of the nanocrystal-based cell, which suggests the promising application of MD-THS as a photoanode material in quantum-dot sensitized solar cells.

  4. Phase space structure of multi-dimensional systems by means of the mean exponential growth factor of nearby orbits

    NASA Astrophysics Data System (ADS)

    Cincotta, P. M.; Giordano, C. M.; Simó, C.

    2003-08-01

    In this paper we deal with an alternative technique to study global dynamics in Hamiltonian systems, the mean exponential growth factor of nearby orbits (MEGNO), that proves to be efficient to investigate both regular and stochastic components of phase space. It provides a clear picture of resonance structures, location of stable and unstable periodic orbits as well as a measure of hyperbolicity in chaotic domains which coincides with that given by the Lyapunov characteristic number. Here the MEGNO is applied to a rather simple model, the 3D perturbed quartic oscillator, in order to visualize the structure of its phase space and obtain a quite clear picture of its resonance structure. Examples of application to multi-dimensional canonical maps are also included.

  5. The Attitudes to Ageing Questionnaire: Mokken Scaling Analysis

    PubMed Central

    Shenkin, Susan D.; Watson, Roger; Laidlaw, Ken; Starr, John M.; Deary, Ian J.

    2014-01-01

    Background Hierarchical scales are useful in understanding the structure of underlying latent traits in many questionnaires. The Attitudes to Ageing Questionnaire (AAQ) explored the attitudes to ageing of older people themselves, and originally described three distinct subscales: (1) Psychosocial Loss (2) Physical Change and (3) Psychological Growth. This study aimed to use Mokken analysis, a method of Item Response Theory, to test for hierarchies within the AAQ and to explore how these relate to underlying latent traits. Methods Participants in a longitudinal cohort study, the Lothian Birth Cohort 1936, completed a cross-sectional postal survey. Data from 802 participants were analysed using Mokken Scaling analysis. These results were compared with factor analysis using exploratory structural equation modelling. Results Participants were 51.6% male, mean age 74.0 years (SD 0.28). Three scales were identified from 18 of the 24 items: two weak Mokken scales and one moderate Mokken scale. (1) ‘Vitality’ contained a combination of items from all three previously determined factors of the AAQ, with a hierarchy from physical to psychosocial; (2) ‘Legacy’ contained items exclusively from the Psychological Growth scale, with a hierarchy from individual contributions to passing things on; (3) ‘Exclusion’ contained items from the Psychosocial Loss scale, with a hierarchy from general to specific instances. All of the scales were reliable and statistically significant with ‘Legacy’ showing invariant item ordering. The scales correlate as expected with personality, anxiety and depression. Exploratory SEM mostly confirmed the original factor structure. Conclusions The concurrent use of factor analysis and Mokken scaling provides additional information about the AAQ. The previously-described factor structure is mostly confirmed. 
Mokken scaling identifies a new factor relating to vitality, and a hierarchy of responses within three separate scales, referring to vitality, legacy and exclusion. This shows what older people themselves consider important regarding their own ageing. PMID:24892302

  6. The importance of a multi-dimensional approach for studying the links between food access and consumption.

    PubMed

    Rose, Donald; Bodor, J Nicholas; Hutchinson, Paul L; Swalm, Chris M

    2010-06-01

    Research on neighborhood food access has focused on documenting disparities in the food environment and on assessing the links between the environment and consumption. Relatively few studies have combined in-store food availability measures with geographic mapping of stores. We review research that has used these multi-dimensional measures of access to explore the links between the neighborhood food environment and consumption or weight status. Early research in California found correlations between red meat, reduced-fat milk, and whole-grain bread consumption and shelf space availability of these products in area stores. Subsequent research in New York confirmed the low-fat milk findings. Recent research in Baltimore has used more sophisticated diet assessment tools and store-based instruments, along with controls for individual characteristics, to show that low availability of healthy food in area stores is associated with low-quality diets of area residents. Our research in southeastern Louisiana has shown that shelf space availability of energy-dense snack foods is positively associated with BMI after controlling for individual socioeconomic characteristics. Most of this research is based on cross-sectional studies. To assess the direction of causality, future research testing the effects of interventions is needed. We suggest that multi-dimensional measures of the neighborhood food environment are important to understanding these links between access and consumption. They provide a more nuanced assessment of the food environment. Moreover, given the typical duration of research project cycles, changes to in-store environments may be more feasible than changes to the overall mix of retail outlets in communities. PMID:20410084

  7. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in the determination of the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  8. Rasch Analysis of the Geriatric Depression Scale--Short Form

    ERIC Educational Resources Information Center

    Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.

    2009-01-01

    Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory (Rasch) analysis was conducted with
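The Rasch model underlying such an analysis is compact enough to sketch (a generic illustration of the 1PL model, not the authors' code; all names are hypothetical): the probability of endorsing item i is a logistic function of the gap between person ability theta and item difficulty b_i, and ability can be estimated from a response pattern by Newton-Raphson.

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) probability of a positive response to an item."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """Maximum-likelihood ability estimate via Newton-Raphson.

    responses: list of 0/1 answers; difficulties: item parameters b_i.
    Assumes a mixed response pattern (not all 0s or all 1s), for which
    the ML estimate is finite.
    """
    theta = 0.0
    for _ in range(iters):
        p = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - pi for x, pi in zip(responses, p))   # score
        info = sum(pi * (1.0 - pi) for pi in p)             # Fisher information
        theta += grad / info
    return theta

# When ability equals difficulty, the endorsement probability is exactly 0.5
print(rasch_prob(1.2, 1.2))  # 0.5
theta_hat = estimate_theta([1, 1, 0, 0, 1], [-1.0, -0.5, 0.0, 0.5, 1.0])
```

At the ML estimate the score function vanishes, i.e. the expected number of positive responses matches the observed count.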

  9. SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION

    SciTech Connect

    John Crepeau

    2011-03-01

    Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.

  10. Multiple Time Scale Complexity Analysis of Resting State FMRI

    PubMed Central

    Smith, Robert X.; Yan, Lirong; Wang, Danny J.J.

    2014-01-01

    The present study explored multi-scale entropy (MSE) analysis to investigate the entropy of resting state fMRI signals across multiple time scales. MSE analysis was developed to distinguish random noise from complex signals, since the entropy of the former decreases at longer time scales while the latter maintains its entropy due to a "self-resemblance" across time scales. A long resting state BOLD fMRI (rs-fMRI) scan with 1000 data points was performed on five healthy young volunteers to investigate the spatial and temporal characteristics of entropy across multiple time scales. A shorter rs-fMRI scan with 240 data points was performed on a cohort of subjects consisting of healthy young (age 23±2 years, n=8) and aged volunteers (age 66±3 years, n=8) to investigate the effect of healthy aging on the entropy of rs-fMRI. The results showed that the MSE of gray matter, rather than white matter, closely resembles that of f−1 noise over multiple time scales. By filtering out high-frequency random fluctuations, MSE analysis is able to reveal enhanced contrast in entropy between gray and white matter, as well as between age groups, at longer time scales. Our data support the use of MSE analysis as a validation metric for quantifying the complexity of rs-fMRI signals. PMID:24242271
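The coarse-graining-plus-sample-entropy recipe behind MSE can be sketched directly (a generic MSE implementation, not the study's pipeline; the parameter choices m=2 and r=0.15 follow common practice). It illustrates the behaviour the abstract describes: white noise loses entropy at longer time scales.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None):
    """SampEn(m, r): -ln(A/B), where B counts template matches of length m
    and A those of length m+1 (self-matches excluded)."""
    x = np.asarray(x, float)
    if tol is None:
        tol = 0.15 * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.count_nonzero(d <= tol) - len(t)) / 2.0  # drop diagonal
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r=0.15):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale.
    The tolerance is fixed from the original series, as is conventional."""
    tol = r * np.std(x)
    out = []
    for s in scales:
        n = len(x) // s
        cg = np.asarray(x[:n * s], float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(cg, m, tol))
    return out

rng = np.random.default_rng(0)
mse = multiscale_entropy(rng.standard_normal(1000), scales=[1, 2, 3, 4, 5])
# For white noise the entropy curve falls with scale; a complex signal
# with structure across scales would hold its entropy instead.
```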

  11. A Cross-Cultural Comparison of Singaporean and Taiwanese Eighth Graders' Science Learning Self-Efficacy from a Multi-Dimensional Perspective

    ERIC Educational Resources Information Center

    Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung

    2013-01-01

    Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two…

  12. On the well-posedness for a multi-dimensional compressible viscous liquid-gas two-phase flow model in critical spaces

    NASA Astrophysics Data System (ADS)

    Xu, Fuyi; Yuan, Jia

    2015-10-01

    This paper is dedicated to the study of the Cauchy problem for a multi-dimensional (N ≥ 2) compressible viscous liquid-gas two-phase flow model. We prove the local well-posedness of the system for large data in critical Besov spaces based on the L^p framework, under the sole assumption that the initial liquid mass is bounded away from zero.

  14. Design and rigorous analysis of transformation-optics scaling devices.

    PubMed

    Jiang, Wei Xiang; Xu, Bai Bing; Cheng, Qiang; Cui, Tie Jun; Yu, Guan Xia

    2013-08-01

    Scaling devices that can shrink or enlarge an object are designed using transformation optics. The electromagnetic scattering properties of such scaling devices with anisotropic parameters are rigorously analyzed using the eigenmode expansion method. If the radius of the virtual object is smaller than that of the real object, it is a shrinking device with positive material parameters; if the radius of the virtual object is larger than the real one, it is an enlarging device with positive or negative material parameters. Hence, a scaling device can make a dielectric or metallic object look smaller or larger. The rigorous analysis shows that the scattering coefficients of the scaling devices are the same as those of the equivalent virtual objects. When the radius of the virtual object approaches zero, the scaling device will be an invisibility cloak. In such a case, the scattering effect of the scaling device will be sensitive to material parameters of the device. PMID:24323231

  15. A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory

    NASA Technical Reports Server (NTRS)

    Prozan, R. J.

    1982-01-01

    The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.

  16. On Multi-dimensional Steady Subsonic Flows Determined by Physical Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Weng, Shangkun

    In this thesis, we investigate an inflow-outflow problem for subsonic gas flows in a nozzle of finite length, aiming at finding intrinsic (physically acceptable) boundary conditions upstream and downstream. We first characterize a set of physical boundary conditions that ensure the existence and uniqueness of a subsonic irrotational flow in a rectangle. Our results show that if we prescribe the horizontal incoming flow angle at the inlet and an appropriate pressure at the exit, there exist two positive constants m0 and m1 with m0 < m1, such that a global subsonic irrotational flow exists uniquely in the nozzle, provided that the incoming mass flux m ∈ [m0, m1). The maximum speed will approach the sonic speed as the mass flux m tends to m1. The new difficulties arise from the nonlocal term involved in the mass flux and from the pressure condition at the exit. We first introduce an auxiliary problem with the Bernoulli constant as a parameter to localize the nonlocal term, and then establish a monotonic relation between the mass flux and the Bernoulli constant to recover the original problem. To deal with the loss of obliqueness induced by the pressure condition at the exit, we employ a formulation in terms of the angular velocity and the density. A Moser iteration is applied to obtain the L∞ estimate of the angular velocity, which guarantees that the flow possesses a positive horizontal velocity in the whole nozzle. As a continuation, we investigate the influence of the incoming flow angle and the geometry of the nozzle walls on subsonic flows in a finitely long curved nozzle. It turns out, interestingly, that the incoming flow angle and the angles of inclination of the nozzle walls play the same role as the end pressure. The curvatures of the nozzle walls play an important role. We also extend our results to subsonic Euler flows in the 2-D and 3-D asymmetric cases.
    Then it comes to the most interesting and difficult case: the 3-D subsonic Euler flow in a bounded nozzle, which is also the essential part of this thesis. The boundary conditions we imposed in the 2-D case have a natural extension in the 3-D case. These clues help us develop a new formulation that yields insight into the coupling structure between hyperbolic and elliptic modes in the Euler equations. The key idea in our new formulation is to use the Bernoulli law to reduce the dimension of the velocity field by defining new variables (1, b2 = u2/u1, b3 = u3/u1) and replacing u1 by the Bernoulli function B through u1^2 = 2(B - h(rho))/(1 + b2^2 + b3^2). In this way, we can explore the role of the Bernoulli law in greater depth and hope to simplify the Euler equations somewhat. We find a new conserved quantity for flows with a constant Bernoulli function, which behaves like the scaled vorticity in the 2-D case. More surprisingly, a system of new conservation laws can be derived, which has never been observed before, even in the two-dimensional case. We employ this formulation to construct a smooth subsonic Euler flow in a rectangular cylinder by assigning the incoming flow angles and the Bernoulli function at the inlet and the end pressure at the exit, which is also required to be adjacent to some special subsonic states. The same idea can be applied to obtain similar information for the incompressible Euler equations, the self-similar Euler equations, the steady Euler equations with damping, the steady Euler-Poisson equations and the steady Euler-Maxwell equations. Last, we are concerned with the structural stability of some steady subsonic solutions for the Euler-Poisson system.
    A steady subsonic solution with a subsonic background charge is proven to be structurally stable with respect to small perturbations of the background charge, the incoming flow angles and the end pressure, provided the background solution has a low Mach number and a small electric field. The new ingredient in our mathematical analysis is the solvability of a new second-order elliptic system supplemented with oblique derivative conditions.

  17. Psychometric Analysis of Role Conflict and Ambiguity Scales in Academia

    ERIC Educational Resources Information Center

    Khan, Anwar; Yusoff, Rosman Bin Md.; Khan, Muhammad Muddassar; Yasir, Muhammad; Khan, Faisal

    2014-01-01

    A comprehensive psychometric analysis of Rizzo et al.'s (1970) Role Conflict and Ambiguity (RCA) scales was performed after their distribution among 600 academic staff working in six universities of Pakistan. The reliability analysis includes calculation of Cronbach alpha coefficients and inter-item statistics, whereas validity was determined by

  18. Scaling range of power laws that originate from fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz; Mazur, Zygmunt

    2013-05-01

    We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013)] to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit a power law dependence of the scaling range on the length of the searched signal and on the accuracy R2 of the fit to the scaling law imposed by the DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with different levels of autocorrelation measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance, or physiology, where a huge number of short time series has to be examined at once and where a preliminary check of the scaling range regime for each series separately is neither effective nor possible.
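To make the scaling-range discussion concrete, here is a bare-bones DFA-1 sketch (my own illustrative code, not the authors'): integrate the series into a profile, detrend linearly in windows of size s, and fit log F(s) against log s; uncorrelated noise gives an exponent near 0.5, and the window sizes used in the fit are exactly the "scaling range" the abstract studies.

```python
import numpy as np

def dfa_exponent(x, windows):
    """Detrended fluctuation analysis with linear (DFA-1) detrending.

    Returns the scaling exponent alpha from a least-squares fit of
    log F(s) against log s over the supplied window sizes s.
    """
    y = np.cumsum(np.asarray(x, float) - np.mean(x))  # profile
    F = []
    for s in windows:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        resid2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)              # local linear trend
            resid2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(resid2)))            # fluctuation function
    alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
    return alpha

rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(10000), [16, 32, 64, 128, 256])
# Uncorrelated noise: alpha close to 0.5 (Hurst-like exponent)
```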

  19. Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)

    1999-01-01

    We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.

  20. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  1. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates. PMID:26646870

  2. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    NASA Astrophysics Data System (ADS)

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  3. Multiple-length-scale deformation analysis in a thermoplastic polyurethane

    PubMed Central

    Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.

    2015-01-01

    Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945

  4. Scheme-scale ambiguity in analysis of QCD observable

    NASA Astrophysics Data System (ADS)

    Mirjalili, A.; Kniehl, B. A.

    2010-09-01

    The scheme-scale ambiguity that has plagued perturbative analysis in QCD remains an obstacle to making precise tests of the theory. Many attempts have been made to resolve the scale ambiguity; in this regard the BLM, EC, PMS and CORGI approaches are the most prominent. We employ these methods to fix the scale ambiguity at NLO, NNLO and even higher order approximations. By optimizing the renormalization scale, it becomes possible to predict higher order terms. We present general results for the predicted terms at any order, using the different optimization methods. Some observables are used as specific examples to indicate the validity of scale fixing for predicting the higher order terms.

  5. Spectral image fusion based on multi-scale wavelet analysis

    NASA Astrophysics Data System (ADS)

    Machikhin, Alexander S.

    2015-11-01

    The problem of spectral image fusion, combining the most informative areas of several images into one, is considered. An algorithm based on joint multi-scale image analysis is discussed. Processing the images at each pyramid level allows image features of the same scale to be extracted and combined. This approach provides high processing speed and high quality in the resulting image, and may be applicable to real-time applications.
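
    A minimal sketch of the pyramid-based fusion idea described above, assuming a Laplacian-style pyramid with 2x2 average-pool downsampling and a max-magnitude fusion rule (the paper's exact multi-scale wavelet transform is not specified here):

    ```python
    import numpy as np

    def downsample(img):
        """2x2 average pooling (assumes even dimensions)."""
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def upsample(img):
        """Nearest-neighbour 2x upsampling."""
        return img.repeat(2, axis=0).repeat(2, axis=1)

    def laplacian_pyramid(img, levels):
        """Band-pass detail layers plus a coarse residual; exactly invertible."""
        pyr = []
        for _ in range(levels):
            low = downsample(img)
            pyr.append(img - upsample(low))  # detail at this scale
            img = low
        pyr.append(img)                      # coarse residual
        return pyr

    def fuse(img_a, img_b, levels=3):
        """At each scale keep the detail coefficient with larger magnitude."""
        pa = laplacian_pyramid(img_a, levels)
        pb = laplacian_pyramid(img_b, levels)
        fused = [np.where(np.abs(da) >= np.abs(db), da, db)
                 for da, db in zip(pa[:-1], pb[:-1])]
        fused.append(0.5 * (pa[-1] + pb[-1]))  # average the coarse residuals
        out = fused[-1]
        for detail in reversed(fused[:-1]):    # reconstruct fine-to-coarse
            out = upsample(out) + detail
        return out

    rng = np.random.default_rng(0)
    a = rng.normal(size=(32, 32))
    identity = fuse(a, a)  # fusing an image with itself must reproduce it
    ```

    The self-fusion check works because the pyramid decomposition is exactly invertible, so per-scale selection followed by reconstruction is lossless when both inputs agree.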

  6. Voice Dysfunction in Dysarthria: Application of the Multi-Dimensional Voice Program.

    ERIC Educational Resources Information Center

    Kent, R. D.; Vorperian, H. K.; Kent, J. F.; Duffy, J. R.

    2003-01-01

    Part 1 of this paper recommends procedures and standards for the acoustic analysis of voice in individuals with dysarthria. In Part 2, acoustic data are reviewed for dysarthria associated with Parkinson disease (PD), cerebellar disease, amyotrophic lateral sclerosis, traumatic brain injury, unilateral hemispheric stroke, and essential tremor.

  7. Multi-Dimensional Evaluation for Module Improvement: A Mathematics-Based Case Study

    ERIC Educational Resources Information Center

    Ellery, Karen

    2006-01-01

    Due to a poor module evaluation, mediocre student grades and a difficult teaching experience in lectures, the Data Analysis section of a first year core module, Research Methods for Social Sciences (RMSS), offered at the University of KwaZulu-Natal in South Africa, was completely revised. In order to review the effectiveness of these changes in…

  8. Geographical Scale Effects on the Analysis of Leptospirosis Determinants

    PubMed Central

    Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães

    2014-01-01

    Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide but its pattern of transmission varies depending where it happens. Climate change may increase the number of cases, especially in developing countries, like Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in the studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the amount of people living in slums, or the percent of the area densely urbanized. At the municipal scale, there were no significant correlations. At the local level, the percent of the area prone to flooding best correlated with leptospirosis incidence. PMID:25310536
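
    The scale effect the authors describe (correlations changing with the geographical unit of analysis) can be illustrated with synthetic data; the indicators, group sizes, and effect sizes below are hypothetical, not the study's:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical fine-scale units (e.g. census tracts): flood exposure drives
    # incidence, while a second indicator is unrelated.
    n = 400
    flood = rng.random(n)
    slums = rng.random(n)
    incidence = 5.0 * flood + rng.normal(scale=1.0, size=n)

    def corr_at_scale(x, y, group_size):
        """Pearson correlation after aggregating consecutive units into
        coarser areas, mimicking a change of geographical scale."""
        xa = x.reshape(-1, group_size).mean(axis=1)
        ya = y.reshape(-1, group_size).mean(axis=1)
        return float(np.corrcoef(xa, ya)[0, 1])

    local_r = corr_at_scale(incidence, flood, 1)      # local scale
    regional_r = corr_at_scale(incidence, flood, 40)  # regional scale (10 areas)
    unrelated_r = corr_at_scale(incidence, slums, 1)  # no true association
    ```

    Aggregation changes both the variance of the indicators and the number of observations, which is one mechanism behind the modifiable areal unit problem the abstract alludes to.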

  10. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures, referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale wing's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist distribution along the span (i.e., pre-twist or wash-out) is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  11. Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)

    PubMed Central

    Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F

    2009-01-01

    Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445
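
    For readers unfamiliar with the model behind a Rasch analysis, a minimal sketch of the dichotomous Rasch item response function follows (toy item difficulties; the MSIS-29 itself uses polytomous items, which require rating-scale or partial-credit extensions of this formula):

    ```python
    import math

    def rasch_prob(theta, b):
        """Probability of endorsing an item of difficulty b (in logits)
        for a person at latent-trait location theta."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Hypothetical item difficulties for a toy three-item subscale.
    difficulties = [-1.0, 0.0, 1.5]

    def expected_score(theta):
        """Model-expected raw score: sum of the item probabilities."""
        return sum(rasch_prob(theta, b) for b in difficulties)

    p_easy = rasch_prob(0.0, -1.0)  # person at 0 logits vs an easy item
    p_hard = rasch_prob(0.0, 1.5)   # same person vs a hard item
    ```

    When theta equals b the endorsement probability is exactly 0.5, which is why item difficulties and person measures share a common logit scale and why "targeting" compares the two distributions.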

  12. Scale analysis using X-ray microfluorescence and computed radiography

    NASA Astrophysics Data System (ADS)

    Candeias, J. P.; de Oliveira, D. F.; dos Anjos, M. J.; Lopes, R. T.

    2014-02-01

    Scale deposits are the most common and most troublesome damage problems in the oil field and can occur in both production and injection wells. They occur because the minerals in produced water exceed their saturation limit as temperatures and pressures change. Scale can vary in appearance from hard crystalline material to soft, friable material, and the deposits can contain other minerals and impurities such as paraffin, salt and iron. In severe conditions, scale creates a significant restriction, or even a plug, in the production tubing. This study was conducted to qualify the elements present in scale samples and to quantify the thickness of the scale layer using synchrotron radiation micro-X-ray fluorescence (SR-μXRF) and computed radiography (CR) techniques. The SR-μXRF results showed that the elements found in the scale samples were strontium, barium, calcium, chromium, sulfur and iron. The CR analysis showed that the thickness of the scale layer could be identified and quantified accurately. These results can support decision making about removing the deposited scale.

  13. Construct distinctiveness and variance composition of multi-dimensional instruments: Three short-form masculinity measures.

    PubMed

    Levant, Ronald F; Hall, Rosalie J; Weigold, Ingrid K; McCurdy, Eric R

    2015-07-01

    Focusing on a set of 3 multidimensional measures of conceptually related but different aspects of masculinity, we use factor analytic techniques to address 2 issues: (a) whether psychological constructs that are theoretically distinct but require fairly subtle discriminations by survey respondents can be accurately captured by self-report measures, and (b) how to better understand sources of variance in subscale and total scores developed from such measures. The specific measures investigated were the: (a) Male Role Norms Inventory-Short Form (MRNI-SF); (b) Conformity to Masculine Norms Inventory-46 (CMNI-46); and (c) Gender Role Conflict Scale-Short Form (GRCS-SF). Data (N = 444) were from community-dwelling and college men who responded to an online survey. EFA results demonstrated the discriminant validity of the 20 subscales comprising the 3 instruments, thus indicating that relatively subtle distinctions between norms, conformity, and conflict can be captured with self-report measures. CFA was used to compare 2 different methods of modeling a broad/general factor for each of the 3 instruments. For the CMNI-46 and MRNI-SF, a bifactor model fit the data significantly better than did a hierarchical factor model. In contrast, the hierarchical model fit better for the GRCS-SF. The discussion addresses implications of these specific findings for use of the measures in research studies, as well as broader implications for measurement development and assessment in other research domains of counseling psychology which also rely on multidimensional self-report instruments. PMID:26167651

  14. Full-scale system impact analysis: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Digital Document Storage Full Scale System can provide cost effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this Impact Analysis. It is highly recommended that NASA proceed with a phased, communications requirement analysis to ensure that adequate communications service can be supplied at a reasonable cost in order to validate recent working assumptions upon which the success of the DDS Full Scale System is dependent.

  15. Shielding analysis methods available in the scale computational system

    SciTech Connect

    Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

    1986-01-01

    Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

  16. Taking sociality seriously: the structure of multi-dimensional social networks as a source of information for individuals

    PubMed Central

    Barrett, Louise; Henzi, S. Peter; Lusseau, David

    2012-01-01

    Understanding human cognitive evolution, and that of the other primates, means taking sociality very seriously. For humans, this requires the recognition of the sociocultural and historical means by which human minds and selves are constructed, and how this gives rise to the reflexivity and ability to respond to novelty that characterize our species. For other, non-linguistic, primates we can answer some interesting questions by viewing social life as a feedback process, drawing on cybernetics and systems approaches and using social network neo-theory to test these ideas. Specifically, we show how social networks can be formalized as multi-dimensional objects, and use entropy measures to assess how networks respond to perturbation. We use simulations and natural ‘knock-outs’ in a free-ranging baboon troop to demonstrate that changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalization of social networks provides a framework within which to predict network dynamics and evolution, helps us highlight how human and non-human social networks differ and has implications for theories of cognitive evolution. PMID:22734054
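
    The entropy-based predictability measure can be sketched as follows: a generic Shannon entropy over hypothetical interaction-outcome distributions for a dyad, before and after a perturbation (the paper's exact multi-dimensional network entropy formulation may differ):

    ```python
    import numpy as np

    def shannon_entropy(p):
        """Shannon entropy (bits) of a discrete outcome distribution."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0] / p.sum()  # drop zero-probability outcomes, renormalise
        return float(-(p * np.log2(p)).sum())

    # Hypothetical distributions over interaction outcomes for one dyad
    # (e.g. groom / approach / avoid), before and after a social perturbation.
    before = [0.40, 0.35, 0.25]  # outcomes fairly unpredictable
    after = [0.80, 0.15, 0.05]   # interactions became more certain

    h_before = shannon_entropy(before)
    h_after = shannon_entropy(after)
    h_max = shannon_entropy([1 / 3, 1 / 3, 1 / 3])  # upper bound for 3 outcomes
    ```

    A drop in entropy after the perturbation corresponds to the paper's finding of a "more certain" network in which members can better predict the outcomes of interactions.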

  17. Closed-cycle cold helium magic-angle spinning for sensitivity-enhanced multi-dimensional solid-state NMR

    NASA Astrophysics Data System (ADS)

    Matsuki, Yoh; Nakamura, Shinji; Fukui, Shigeo; Suematsu, Hiroto; Fujiwara, Toshimichi

    2015-10-01

    Magic-angle spinning (MAS) NMR is a powerful tool for studying molecular structure and dynamics, but suffers from low sensitivity. Here, we developed a novel helium-cooling MAS NMR probe system adopting a closed-loop gas recirculation mechanism. In addition to the sensitivity gain due to low temperature, the present system has enabled highly stable MAS (νR = 4-12 kHz) at cryogenic temperatures (T = 35-120 K) for over a week without consuming helium, at an electricity cost of 16 kW/h. High-resolution 1D and 2D data were recorded for a crystalline tri-peptide sample at T = 40 K and B0 = 16.4 T, where an order-of-magnitude sensitivity gain was demonstrated versus room-temperature measurement. The low-cost and long-term stable MAS strongly promotes broader application of brute-force sensitivity-enhanced multi-dimensional MAS NMR, as well as dynamic nuclear polarization (DNP)-enhanced NMR in a temperature range below 100 K.

  18. Multi-dimensional modulations of α and γ cortical dynamics following mindfulness-based cognitive therapy in Major Depressive Disorder.

    PubMed

    Schoenberg, Poppy L A; Speckens, Anne E M

    2015-02-01

    The aim was to illuminate candidate neural working mechanisms of Mindfulness-Based Cognitive Therapy (MBCT) in the treatment of recurrent depressive disorder, parallel to the potential interplays between modulations in electro-cortical dynamics and depressive symptom severity and self-compassionate experience. Linear and nonlinear α and γ EEG oscillatory dynamics were examined concomitant to an affective Go/NoGo paradigm, pre-to-post MBCT or natural wait-list, in 51 recurrent depressive patients. Specific EEG variables investigated were: (1) induced event-related (de-)synchronisation (ERD/ERS), (2) evoked power, and (3) inter-/intra-hemispheric coherence. Secondary clinical measures included depressive severity and experiences of self-compassion. MBCT significantly downregulated α and γ power, reflecting increased cortical excitability. Enhanced α-desynchronisation/ERD was observed for negative material, as opposed to attenuated α-ERD towards positively valenced stimuli, suggesting activation of neural networks usually hypoactive in depression, related to positive emotion regulation. MBCT-related increase in left-intra-hemispheric α-coherence of the fronto-parietal circuit aligned with these synchronisation dynamics. Ameliorated depressive severity and increased self-compassionate experience pre-to-post MBCT correlated with α-ERD change. The multi-dimensional neural mechanisms of MBCT pertain to task-specific linear and non-linear neural synchronisation and connectivity network dynamics. We propose that MBCT-related modulations in differing cortical oscillatory bands have discrete excitatory (enacting positive emotionality) and inhibitory (disengaging from negative material) effects, where mediation in the α and γ bands relates to the former. PMID:26052359

  19. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
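
    A minimal one-dimensional finite-volume sketch of the ENO idea: second-order stencil selection by smoothness, upwind flux evaluation, and TVD-RK2 time stepping for linear advection of a discontinuous profile (the paper's schemes are higher order, two-dimensional, and use a Roe solver, none of which is reproduced here):

    ```python
    import numpy as np

    def eno2_interface_states(u):
        """Second-order ENO reconstruction of the left state at interface i+1/2.

        For each cell i, pick the smoother of the two candidate stencils
        {i-1, i} and {i, i+1} (smaller divided difference), then extrapolate
        the cell average to the interface."""
        back = u - np.roll(u, 1)     # backward difference
        fwd = np.roll(u, -1) - u     # forward difference
        slope = np.where(np.abs(back) <= np.abs(fwd), back, fwd)
        return u + 0.5 * slope

    def rhs(u, c):
        """Conservative upwind flux differencing for u_t + u_x = 0 (c = dt/dx)."""
        f = eno2_interface_states(u)           # flux at i+1/2 (speed +1, upwind)
        return -c * (f - np.roll(f, 1))

    def advect(u, c=0.4, steps=100):
        """Heun / TVD-RK2 time stepping on a periodic domain."""
        for _ in range(steps):
            u1 = u + rhs(u, c)
            u = 0.5 * (u + u1 + rhs(u1, c))
        return u

    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # discontinuous square wave
    u = advect(u0)
    mass_error = abs(u.sum() - u0.sum())
    ```

    Because the update is in conservation form, total mass is preserved to rounding error, and the adaptive stencil keeps the advected discontinuity essentially free of the large oscillations a fixed centered stencil would produce.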

  20. Do discrimination tasks discourage multi-dimensional stimulus processing? Evidence from a cross-modal object discrimination in rats.

    PubMed

    Jeffery, Kathryn J

    2007-11-01

    Neurobiologists are becoming increasingly interested in how complex cognitive representations are formed by the integration of sensory stimuli. To this end, discrimination tasks are frequently used to assess perceptual and cognitive processes in animals, because they are easy to administer and score, and the ability of an animal to make a particular discrimination establishes beyond doubt that the necessary perceptual/cognitive processes are present. It does not, however, follow that absence of discrimination means the animal cannot make a particular perceptual judgement; it may simply mean that the animal did not manage to discover the relevant discriminative stimulus when trying to learn the task. Here, it is shown that rats did not learn a cross-modal object discrimination (requiring association of each object's visual appearance with its odour) when trained on the complete task from the beginning. However, they could eventually make the discrimination when trained on the component parts step by step, showing that they were able to do the necessary cross-modal integration in the right circumstances. This finding adds to growing evidence that discrimination tasks tend to encourage feature-based discrimination, perhaps by engaging automatic, habit-based brain systems. Thus, they may not be the best way to assess the formation of multi-dimensional stimulus representations of the kind needed in more complex cognitive processes such as declarative memory. Instead, more natural tasks such as spontaneous exploration may be preferable. PMID:17692934

  1. Processing of multi-dimensional sensorimotor information in the spinal and cerebellar neuronal circuitry: a new hypothesis.

    PubMed

    Spanne, Anton; Jörntell, Henrik

    2013-01-01

    Why are sensory signals and motor command signals combined in the neurons of origin of the spinocerebellar pathways and why are the granule cells that receive this input thresholded with respect to their spike output? In this paper, we synthesize a number of findings into a new hypothesis for how the spinocerebellar systems and the cerebellar cortex can interact to support coordination of our multi-segmented limbs and bodies. A central idea is that recombination of the signals available to the spinocerebellar neurons can be used to approximate a wide array of functions including the spatial and temporal dependencies between limb segments, i.e. information that is necessary in order to achieve coordination. We find that random recombination of sensory and motor signals is not a good strategy since, surprisingly, the number of granule cells severely limits the number of recombinations that can be represented within the cerebellum. Instead, we propose that the spinal circuitry provides useful recombinations, which can be described as linear projections through aspects of the multi-dimensional sensorimotor input space. Granule cells, potentially with the aid of differentiated thresholding from Golgi cells, enhance the utility of these projections by allowing the Purkinje cell to establish piecewise-linear approximations of non-linear functions. Our hypothesis provides a novel view on the function of the spinal circuitry and cerebellar granule layer, illustrating how the coordinating functions of the cerebellum can be crucially supported by the recombinations performed by the neurons of the spinocerebellar systems. PMID:23516353
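
    The proposed mechanism (linear recombinations of sensorimotor signals, thresholded granule-cell style into piecewise-linear basis functions, combined by a linear Purkinje-like readout) can be sketched numerically. All dimensions, projection weights, and the target function below are hypothetical stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # "Sensorimotor" input: two limb-segment variables; the target encodes a
    # non-linear spatial dependency between them (a hypothetical stand-in).
    X = rng.uniform(-1, 1, size=(2000, 2))
    y = np.sin(2.0 * X[:, 0]) * X[:, 1]

    # Spinal-like recombination: linear projections through the input space...
    W = rng.normal(size=(2, 40))
    b = rng.uniform(-1, 1, size=40)
    # ...thresholded, granule-cell style, into piecewise-linear basis functions.
    G = np.maximum(X @ W + b, 0.0)

    def lstsq_fit(features, target):
        """Linear readout (Purkinje-like weights) fitted by least squares;
        returns the fitted values."""
        A = np.column_stack([features, np.ones(len(features))])
        w, *_ = np.linalg.lstsq(A, target, rcond=None)
        return A @ w

    rms_linear = float(np.sqrt(np.mean((lstsq_fit(X, y) - y) ** 2)))
    rms_granule = float(np.sqrt(np.mean((lstsq_fit(G, y) - y) ** 2)))
    ```

    The thresholded-projection features support a piecewise-linear approximation of the non-linear target that a purely linear readout of the raw inputs cannot achieve, which is the computational point of the hypothesis.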

  2. ITQ-54: a multi-dimensional extra-large pore zeolite with 20 × 14 × 12-ring channels

    DOE PAGES Beta

    Jiang, Jiuxing; Yun, Yifeng; Zou, Xiaodong; Jorda, Jose Luis; Corma, Avelino

    2015-01-01

    A multi-dimensional extra-large pore silicogermanate zeolite, named ITQ-54, has been synthesised by in situ decomposition of the N,N-dicyclohexylisoindolinium cation into the N-cyclohexylisoindolinium cation. Its structure was solved by 3D rotation electron diffraction (RED) from crystals of ca. 1 μm in size. The structure of ITQ-54 contains straight intersecting 20 × 14 × 12-ring channels along the three crystallographic axes and it is one of the few zeolites with extra-large channels in more than one direction. ITQ-54 has a framework density of 11.1 T atoms per 1000 Å3, which is one of the lowest among the known zeolites. ITQ-54 was obtained together with GeO2 as an impurity. A heavy liquid separation method was developed and successfully applied to remove this impurity from the zeolite. ITQ-54 is stable up to 600 °C and exhibits permanent porosity. The structure was further refined using powder X-ray diffraction (PXRD) data for both as-made and calcined samples.

  4. A multi-dimensional discrete-ordinates method for polarized radiative transfer. I. Validation for randomly oriented axisymmetric particles.

    NASA Astrophysics Data System (ADS)

    Haferman, J. L.; Smith, T. F.; Krajewski, W. F.

    1997-09-01

    A polarized multi-dimensional radiative transfer model based on the discrete-ordinates method is presented. The model solves the monochromatic vector radiative transfer equation (VRTE) that considers polarization using the four Stokes parameters. For the VRTE, the intensity of the scalar radiative transfer equation is replaced by the Stokes intensity vector; the position-dependent scalar extinction coefficient is replaced by a direction- and position-dependent 4×4 extinction matrix; the position-dependent scalar absorption coefficient is replaced by a direction- and position-dependent emission (absorption) vector; and the scalar phase function is replaced by a scattering phase matrix. The model can solve the VRTE for anisotropically scattering one-, two-, or three-dimensional Cartesian geometries. Validation for one-dimensional polarized radiative transfer compares model results with benchmark cases available in the literature. For two- and three-dimensional geometries, the model is tested by using a one-dimensional system as input and running in three-dimensional mode. A validation for a three-dimensional geometry based on Kirchhoff's law for an isothermal enclosure is also presented.
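
    The scalar-to-vector replacements described above all act on 4-component Stokes vectors through 4×4 matrices. A textbook example (the Mueller matrix of an ideal horizontal linear polarizer, not the model's actual extinction or emission matrices) illustrates the algebra:

    ```python
    import numpy as np

    # Stokes vector [I, Q, U, V] of unpolarized light with unit intensity.
    s_unpolarized = np.array([1.0, 0.0, 0.0, 0.0])

    # 4x4 Mueller matrix of an ideal horizontal linear polarizer; the VRTE's
    # matrix quantities transform Stokes vectors in exactly this way.
    polarizer_h = 0.5 * np.array([
        [1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
    ], dtype=float)

    s_out = polarizer_h @ s_unpolarized          # fully polarized, half intensity
    dop = np.linalg.norm(s_out[1:]) / s_out[0]   # degree of polarization
    ```

    Half the intensity is transmitted and the output is fully linearly polarized (degree of polarization 1), which is why polarization state cannot be tracked with a single scalar intensity.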

  5. Tectonic setting of basic igneous and metaigneous rocks of Borborema Province, Brazil using multi-dimensional geochemical discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Verma, Sanjeet K.; Oliveira, Elson P.

    2015-03-01

    Fifteen multi-dimensional diagrams for basic and ultrabasic rocks, based on log-ratio transformations, were used to infer tectonic setting for eight case studies of Borborema Province, NE Brazil. The applications of these diagrams indicated the following results: (1) a mid-ocean ridge setting for Forquilha eclogites (Central Ceará domain) during the Mesoproterozoic; (2) an oceanic plateau setting for Algodões amphibolites (Central Ceará domain) during the Paleoproterozoic; (3) an island arc setting for Brejo Seco amphibolites (Riacho do Pontal belt) during the Proterozoic; (4) an island arc to mid-ocean ridge setting for greenschists of the Monte Orebe Complex (Riacho do Pontal belt) during the Neoproterozoic; (5) a within-plate (continental) setting for Vaza Barris domain mafic rocks (Sergipano belt) during the Neoproterozoic; (6) a less precise arc to continental rift setting for the Gentileza unit metadiorite/gabbro (Sergipano belt) during the Neoproterozoic; (7) an island arc setting for the Novo Gosto unit metabasalts (Sergipano belt) during the Neoproterozoic; (8) a continental rift setting for Rio Grande do Norte basic rocks during the Miocene.
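
    The log-ratio transformations underlying such discrimination diagrams can be sketched with the centered log-ratio (clr). The oxide values below are a hypothetical basalt analysis, and the published diagrams use specific log-ratio combinations of selected elements rather than this generic clr:

    ```python
    import numpy as np

    def clr(composition):
        """Centered log-ratio transform of a composition (parts of a whole).

        Log-ratio coordinates remove the constant-sum (closure) constraint
        that distorts statistics computed directly on percentage data."""
        x = np.asarray(composition, dtype=float)
        g = np.exp(np.mean(np.log(x)))  # geometric mean of the parts
        return np.log(x / g)

    # Hypothetical basalt major-element analysis (wt%), renormalised to 100.
    oxides = np.array([49.5, 14.2, 11.8, 7.6, 9.9, 2.6, 0.6, 1.9, 1.9])
    oxides = 100.0 * oxides / oxides.sum()
    z = clr(oxides)
    ```

    clr coordinates sum to zero and are invariant to rescaling of the raw composition, which is what makes linear discriminant functions built on them statistically well behaved for closed geochemical data.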

  6. Sensate abstraction: hybrid strategies for multi-dimensional data in expressive virtual reality contexts

    NASA Astrophysics Data System (ADS)

    West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman

    2009-02-01

    ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million pixel barrier strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude, delay and physical modeling-based, real-time spatialization approaches for enhanced expressivity in the virtual sound environment, developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.

  7. Imaging Multi-Dimensional Electrical Resistivity Structure as a Tool in Developing Enhanced Geothermal Systems (EGS)

    SciTech Connect

    Philip E. Wannamaker

    2007-12-31

    The overall goal of this project has been to develop desktop capability for 3-D EM inversion as a complement or alternative to existing massively parallel platforms. We have been fortunate in having a uniquely productive cooperative relationship with Kyushu University (Y. Sasaki, P.I.), which supplied a base-level 3-D inversion source code for MT data over a half-space based on staggered-grid finite differences. Storage efficiency was greatly increased in this algorithm by implementing a symmetric L-U parameter step solver and by loading the parameter step matrix one frequency at a time. Rules were established for achieving sufficient Jacobian accuracy versus mesh discretization, and regularization was much improved by scaling the damping terms according to the influence of parameters upon the measured response. The modified program was applied to 101 five-channel MT stations taken over the Coso East Flank area, supported by the DOE and the Navy. Inversion of these data on a 2 Gb desktop PC using a half-space starting model recovered the main features of the subsurface resistivity structure seen in a massively parallel inversion that used a series of stitched 2-D inversions as a starting model. In particular, a steeply west-dipping, N-S trending conductor was resolved under the central-west portion of the East Flank. It may correspond to a highly saline magmatic fluid component, residual fluid from boiling, or, less likely, cryptic acid-sulphate alteration, all in a steep fracture mesh. This work earned student Virginia Maris the Best Student Presentation award at the 2006 GRC annual meeting.

  8. Multi-dimensional Crustal and Lithospheric Structure of the Atlas Mountains of Morocco by Magnetotelluric Imaging

    NASA Astrophysics Data System (ADS)

    Kiyan, D.; Jones, A. G.; Fullea, J.; Ledo, J.; Siniscalchi, A.; Romano, G.

    2014-12-01

    The PICASSO (Program to Investigate Convective Alboran Sea System Overturn) project and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROSCORES TOPO-EUROPE project) project were designed to collect high resolution, multi-disciplinary lithospheric scale data in order to understand the tectonic evolution and lithospheric structure of the western Mediterranean. The over-arching objectives of the magnetotelluric (MT) component of the projects are (i) to provide new electrical conductivity constraints on the crustal and lithospheric structure of the Atlas Mountains, and (ii) to test the hypotheses for explaining the purported lithospheric cavity beneath the Middle and High Atlas inferred from potential-field lithospheric modeling. We present the results of an MT experiment we carried out in Morocco along two profiles: an approximately N-S oriented profile crossing the Middle Atlas, the High Atlas and the eastern Anti-Atlas to the east (the MEK profile, for Meknes) and a NE-SW oriented profile through the western High Atlas to the west (the MAR profile, for Marrakech). Our results are derived from three-dimensional (3-D) inversion of the MT data set employing the parallel version of the Modular system for Electromagnetic inversion (ModEM) code. The distinct conductivity difference between the Middle-High Atlas (conductive) and the Anti-Atlas (resistive) correlates with the South Atlas Front fault, whose depth extent appears to be limited to the uppermost mantle (approx. 60 km). In all inverse solutions, the crust and the upper mantle show resistive signatures (approx. 1,000 Ωm) beneath the Anti-Atlas, which is part of the stable West African Craton.
Partial melt and/or exotic fluids enriched in volatiles produced by the melt can account for the high middle to lower crustal and uppermost mantle conductivity in the Folded Middle Atlas, the High Moulouya Plain and the central High Atlas.

  9. Three decades of multi-dimensional change in global leaf phenology

    NASA Astrophysics Data System (ADS)

    Buitenwerf, Robert; Rose, Laura; Higgins, Steven I.

    2015-04-01

    Changes in the phenology of vegetation activity may accelerate or dampen rates of climate change by altering energy exchanges between the land surface and the atmosphere and can threaten species with synchronized life cycles. Current knowledge of long-term changes in vegetation activity is regional, or restricted to highly integrated measures of change such as net primary productivity, which mask details that are relevant for Earth system dynamics. Such details can be revealed by measuring changes in the phenology of vegetation activity. Here we undertake a comprehensive global assessment of changes in vegetation phenology. We show that the phenology of vegetation activity changed severely (by more than 2 standard deviations in one or more dimensions of phenological change) on 54% of the global land surface between 1981 and 2012. Our analysis confirms previously detected changes in the boreal and northern temperate regions. The adverse consequences of these northern phenological shifts for land-surface-climate feedbacks, ecosystems and species are well known. Our study reveals equally severe phenological changes in the southern hemisphere, where consequences for the energy budget and the likelihood of phenological mismatches are unknown. Our analysis provides a sensitive and direct measurement of ecosystem functioning, making it useful both for monitoring change and for testing the reliability of early warning signals of change.

  10. New enhancements to SCALE for criticality safety analysis

    SciTech Connect

    Hollenbach, D.F.; Bowman, S.M.; Petrie, L.M.; Parks, C.V.

    1995-09-01

    As the speed, available memory, and reliability of computer hardware increase and the cost decreases, the complexity and usability of computer software will increase, taking advantage of the new hardware capabilities. Computer programs today must be more flexible and user friendly than those of the past. Within available resources, the SCALE staff at Oak Ridge National Laboratory (ORNL) is committed to upgrading its computer codes to keep pace with the current level of technology. This paper examines recent additions and enhancements to the criticality safety analysis sections of the SCALE code package. These recent additions and enhancements made to SCALE can be divided into nine categories: (1) new analytical computer codes, (2) new cross-section libraries, (3) new criticality search sequences, (4) enhanced graphical capabilities, (5) additional KENO enhancements, (6) enhanced resonance processing capabilities, (7) enhanced material information processing capabilities, (8) portability of the SCALE code package, and (9) other minor enhancements, modifications, and corrections to SCALE. Each of these additions and enhancements to the criticality safety analysis capabilities of the SCALE code system is discussed below.

  11. A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Van Leer, Bram

    1989-01-01

    A scheme of solving the two-dimensional Euler equations is developed. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.
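The scheme described builds on upwind differencing for the convection equation. As a much simpler illustration than the third-order cell-vertex scheme of the abstract, the sketch below (all names and parameter values are our own, not from the paper) applies first-order upwind differencing to the 1-D linear convection equation u_t + a*u_x = 0 on a periodic grid:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind step for u_t + a*u_x = 0 on a periodic grid."""
    c = a * dt / dx  # CFL number; the scheme is stable for 0 <= c <= 1
    if a >= 0:
        return u - c * (u - np.roll(u, 1))   # difference toward the upstream (left) neighbour
    return u - c * (np.roll(u, -1) - u)      # difference toward the upstream (right) neighbour

# advect a Gaussian pulse once around a periodic unit domain
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / a                            # CFL = 0.5
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = u0.copy()
for _ in range(int(round(1.0 / (a * dt)))):
    u = upwind_step(u, a, dx, dt)
# total "mass" is conserved exactly; the pulse is smeared by numerical diffusion
```

A first-order scheme like this is heavily diffusive; the cell-vertex, downwind-weighted distribution step of the abstract is precisely an attempt to retain upwind stability while reaching third-order accuracy.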

  12. A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Vanleer, Bram

    1989-01-01

    The solution of the two-dimensional Euler equations is based on the two-dimensional linear convection equation and the Euler-equation decomposition developed by Hirsch et al. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative, and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.

  13. Multi-Dimensional High Order Essentially Non-Oscillatory Finite Difference Methods in Generalized Coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1998-01-01

    This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.

  14. Analysis of a scaling rate meter for geothermal systems

    SciTech Connect

    Kreid, D.K.

    1980-03-01

    A research project was conducted to investigate an experimental technique for measuring the rate of formation of mineral scale and corrosion in geothermal systems. A literature review was performed first to identify and evaluate available techniques for measuring scale in heat transfer equipment. As a result of these evaluations, a conceptual design was proposed for a geothermal Scaling Rate Meter (SRM) that would combine features of certain techniques used (or proposed for use) in other applications. An analysis was performed to predict the steady-state performance and expected experimental uncertainty of the proposed SRM. Sample computations were then performed to illustrate the system performance for conditions typical of a geothermal scaling application. Based on these results, recommendations are made regarding prototype SRM construction and testing.

  15. Using Qualitative Methods to Inform Scale Development

    ERIC Educational Resources Information Center

    Rowan, Noell; Wulff, Dan

    2007-01-01

    This article describes the process by which one study utilized qualitative methods to create items for a multi-dimensional scale to measure twelve-step program affiliation. The process included interviewing fourteen addicted persons while in twelve-step focused treatment about specific pros (things they like or would miss out on by not being…

  16. Order Analysis: An Inferential Model of Dimensional Analysis and Scaling

    ERIC Educational Resources Information Center

    Krus, David J.

    1977-01-01

    Order analysis is discussed as a method for description of formal structures in multidimensional space. Its algorithm was derived using a combination of psychometric theory, formal logic theory, information theory, and graph theory concepts. The model provides for adjustment of its sensitivity to random variation. (Author/JKS)

  17. A Confirmatory Factor Analysis of the Professional Opinion Scale

    ERIC Educational Resources Information Center

    Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

    2007-01-01

    The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

  18. Exploratory Factor Analysis of African Self-Consciousness Scale Scores

    ERIC Educational Resources Information Center

    Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.

    2012-01-01

    This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…

  19. Large-scale data analysis using the Wigner function

    NASA Astrophysics Data System (ADS)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.
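The time-frequency information the abstract attributes to the 'frequency variable' comes from the Wigner distribution itself. A minimal NumPy sketch of the discrete pseudo-Wigner-Ville distribution follows (our own implementation, not the authors' code; note the usual factor-of-two frequency convention of the discrete transform):

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo-Wigner-Ville distribution of a (complex) signal.

    Returns an (N, N) real array: rows are time samples, columns are DFT
    bins of the lag variable (the discrete Wigner transform concentrates a
    tone at bin f0 around frequency bin 2*f0).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                # largest symmetric lag window inside the signal
        m = np.arange(-L, L + 1)
        r = x[n + m] * np.conj(x[n - m])     # instantaneous autocorrelation at time n
        buf = np.zeros(N, dtype=complex)
        buf[m % N] = r                       # wrap signed lags into an N-point buffer
        W[n] = np.fft.fft(buf).real          # DFT over the lag variable
    return W
```

For a pure tone the energy of each interior time row concentrates at twice the tone's frequency bin, which is the behaviour exploited when reading off 'frequency variable' structure.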

  2. The vulnerability cube: a multi-dimensional framework for assessing relative vulnerability.

    PubMed

    Lin, Brenda B; Morefield, Philip E

    2011-09-01

    The diversity and abundance of information available for vulnerability assessments can present a challenge to decision-makers. Here we propose a framework to aggregate and present socioeconomic and environmental data in a visual vulnerability assessment that will help prioritize management options for communities vulnerable to environmental change. Socioeconomic and environmental data are aggregated into distinct categorical indices across three dimensions and arranged in a cube, so that individual communities can be plotted in a three-dimensional space to assess the type and relative magnitude of the communities' vulnerabilities based on their position in the cube. We present an example assessment using a subset of the USEPA National Estuary Program (NEP) estuaries: coastal communities vulnerable to the effects of environmental change on ecosystem health and water quality. Using three categorical indices created from a pool of publicly available data (socioeconomic index, land use index, estuary condition index), the estuaries were ranked based on their normalized averaged scores and then plotted along the three axes to form a vulnerability cube. The position of each community within the three-dimensional space communicates both the types of vulnerability endemic to each estuary and allows for the clustering of estuaries with like-vulnerabilities to be classified into typologies. The typologies highlight specific vulnerability descriptions that may be helpful in creating specific management strategies. The data used to create the categorical indices are flexible depending on the goals of the decision makers, as different data should be chosen based on availability or importance to the system. Therefore, the analysis can be tailored to specific types of communities, allowing a data rich process to inform decision-making. PMID:21638079
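The aggregation step described lends itself to a few lines of code. Below is an illustrative sketch, with invented index values (not the NEP data), of normalizing three categorical indices so each becomes an axis of the cube and ranking communities by their mean normalized score:

```python
import numpy as np

# hypothetical index values for four estuaries (invented for illustration);
# columns = (socioeconomic index, land use index, estuary condition index)
names = ["A", "B", "C", "D"]
raw = np.array([
    [0.2, 0.9, 0.4],
    [0.8, 0.1, 0.6],
    [0.5, 0.5, 0.5],
    [0.9, 0.8, 0.7],
])

# min-max normalize each index so every axis of the cube spans [0, 1];
# each row is then a coordinate in the three-dimensional vulnerability cube
lo, hi = raw.min(axis=0), raw.max(axis=0)
coords = (raw - lo) / (hi - lo)

# overall relative vulnerability = mean of the normalized axis scores
score = coords.mean(axis=1)
ranking = [names[i] for i in np.argsort(score)[::-1]]
```

Communities close together in `coords` share a vulnerability typology; the scalar `score` only supports the coarse ranking, which is why the cube position, not the average, carries the typological information.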

  3. The Vulnverability Cube: A Multi-Dimensional Framework for Assessing Relative Vulnerability

    NASA Astrophysics Data System (ADS)

    Lin, Brenda B.; Morefield, Philip E.

    2011-09-01

    The diversity and abundance of information available for vulnerability assessments can present a challenge to decision-makers. Here we propose a framework to aggregate and present socioeconomic and environmental data in a visual vulnerability assessment that will help prioritize management options for communities vulnerable to environmental change. Socioeconomic and environmental data are aggregated into distinct categorical indices across three dimensions and arranged in a cube, so that individual communities can be plotted in a three-dimensional space to assess the type and relative magnitude of the communities' vulnerabilities based on their position in the cube. We present an example assessment using a subset of the USEPA National Estuary Program (NEP) estuaries: coastal communities vulnerable to the effects of environmental change on ecosystem health and water quality. Using three categorical indices created from a pool of publicly available data (socioeconomic index, land use index, estuary condition index), the estuaries were ranked based on their normalized averaged scores and then plotted along the three axes to form a vulnerability cube. The position of each community within the three-dimensional space communicates both the types of vulnerability endemic to each estuary and allows for the clustering of estuaries with like-vulnerabilities to be classified into typologies. The typologies highlight specific vulnerability descriptions that may be helpful in creating specific management strategies. The data used to create the categorical indices are flexible depending on the goals of the decision makers, as different data should be chosen based on availability or importance to the system. Therefore, the analysis can be tailored to specific types of communities, allowing a data rich process to inform decision-making.

  4. Instrumentation development for multi-dimensional two-phase flow modeling

    SciTech Connect

    Kirouac, G.J.; Trabold, T.A.; Vassallo, P.F.; Moore, W.E.; Kumar, R.

    1999-06-01

    A multi-faceted instrumentation approach is described which has played a significant role in obtaining fundamental data for two-phase flow model development. This experimental work supports the development of a three-dimensional, two-fluid, four-field computational analysis capability. The goal of this development is to utilize mechanistic models and fundamental understanding rather than rely on empirical correlations to describe the interactions in two-phase flows. The four fields (two dispersed and two continuous) provide a means for predicting the flow topology and the local variables over the full range of flow regimes. The fidelity of the model development can be verified by comparisons of the three-dimensional predictions with local measurements of the flow variables. Both invasive and non-invasive instrumentation techniques and their strengths and limitations are discussed. A critical aspect of this instrumentation development has been the use of a low pressure/temperature modeling fluid (R-134a) in a vertical duct which permits full optical access to visualize the flow fields in all two-phase flow regimes. The modeling fluid accurately simulates boiling steam-water systems. Particular attention is focused on the use of a gamma densitometer to obtain line-averaged and cross-sectional averaged void fractions. Hot-film anemometer probes provide data on local void fraction, interfacial frequency, bubble and droplet size, as well as information on the behavior of the liquid-vapor interface in annular flows. A laser Doppler velocimeter is used to measure the velocity of liquid-vapor interfaces in bubbly, slug and annular flows. Flow visualization techniques are also used to obtain a qualitative understanding of the two-phase flow structure, and to obtain supporting quantitative data on bubble size. Examples of data obtained with these various measurement methods are shown.

  5. Multi-dimensional Likelihood Estimation Techniques in conjunction with the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Over, M. W.; Murakami, H.; Hahn, M. S.; Yang, Y.; Rubin, Y.

    2010-12-01

    The method of anchored distributions (MAD, Rubin et al., Water Resour. Res., 2010) is a Bayesian inversion technique that combines geostatistical concepts with a strategy for localization of data that is indirectly related to the target variables, using anchors. Anchors are statistical distributions of the target variables (e.g., the hydraulic conductivity) at specific locations. The variable field is described by the statistical distributions of structural parameters that characterize global features and by anchor distributions that are intended to capture local effects. The posterior distributions of structural and anchor parameter sets are used to update the approximate spatial distribution of the target variable and are generated by re-sampling the parameter sets using their normalized likelihood estimates as the probability of being selected. Increasing the dimension of the data, to include additional information in the likelihood estimate, increases the computational burden. Two measures are taken to accommodate the advantageous additional data without spurious side effects: (1) partitioning parameter sets into hypercubes, based upon the similarity of the structural parameter values; and (2) principal component analysis, to reduce the dimensionality by discarding a certain percentage of principal components. As an additional feature for large sample sets, or faster calculation, a 'bundling' regime can be implemented. Bundling is employed immediately after partitioning the parameter sets into hypercubes. Bundling identifies spatial patterns amongst the realizations generated from the distributions defining the anchor parameters. The added organizational step allows data with reduced sample sizes to be passed to the PCA algorithm. The division of the data set allows for simple parallelization of the computation, and our case study achieved a three-fold dimension reduction.
Because of the high dimensionality of the calculation, it is reasonable to assume that, short of impractically large sample sizes, the data only sparsely populate the hyperspace. In order to avoid an interpolation scheme that would average and smooth the likelihood distribution over extensive regions of unpopulated hyperspace, the data are scanned for clusters using the HOPACH algorithm authored by M. Van der Laan. The density is estimated over the clusters non-parametrically, and the cluster approximations are combined in a mixture model to achieve the final likelihood estimate.
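The PCA-based dimension-reduction step mentioned above can be sketched with a plain SVD. The function below is our own naming and a generic implementation, not the MAD codebase; it keeps the fewest principal components covering a chosen fraction of the total variance:

```python
import numpy as np

def pca_reduce(samples, frac=0.95):
    """Project samples onto the fewest principal components that retain
    at least `frac` of the total variance."""
    X = samples - samples.mean(axis=0)           # center each coordinate
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2                                 # variance carried by each component
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), frac)) + 1
    return X @ Vt[:k].T, k                       # reduced coordinates and their count

# illustrative use: 6-dimensional samples that actually live on a 2-D subspace
rng = np.random.default_rng(0)
data = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 6))
reduced, k = pca_reduce(data, frac=0.95)
```

In a MAD-style workflow the `reduced` coordinates, rather than the raw high-dimensional data, would feed the likelihood estimate, which is the dimension reduction the abstract describes.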

  6. Advances in Chemical Physics, Volume 130, 2-Volume Set, Geometric Structures of Phase Space in Multi-Dimensional Chaos: Applications to Chemical Reaction Dynamics in Complex Systems

    NASA Astrophysics Data System (ADS)

    Rice, Stuart A.; Toda, Mikito; Komatsuzaki, Tamiki; Konishi, Tetsuro; Berry, R. Stephen

    2005-01-01

    Edited by Nobel Prize winner Ilya Prigogine and renowned authority Stuart A. Rice, the Advances in Chemical Physics series provides a forum for critical, authoritative evaluations in every area of the discipline. In a format that encourages the expression of individual points of view, experts in the field present comprehensive analyses of subjects of interest. Advances in Chemical Physics remains the premier venue for presentations of new findings in its field. Volume 130 consists of three parts: Part I: Phase Space Geometry of Multi-dimensional Dynamical Systems and Reaction Processes; Part II: Complex Dynamical Behavior in Clusters and Proteins, and Data Mining to Extract Information on Dynamics; and Part III: New Directions in Multi-Dimensional Chaos and Evolutionary Reactions.

  7. Multi-dimensional construction of a novel active yolk@conductive shell nanofiber web as a self-standing anode for high-performance lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Chen, Luyi; Liang, Yeru; Fu, Ruowen; Wu, Dingcai

    2015-11-01

    A novel active yolk@conductive shell nanofiber web with a unique synergistic advantage of various hierarchical nanodimensional objects including the 0D monodisperse SiO2 yolks, the 1D continuous carbon shell and the 3D interconnected non-woven fabric web has been developed by an innovative multi-dimensional construction method, and thus demonstrates excellent electrochemical properties as a self-standing LIB anode. Electronic supplementary information (ESI) available: Experimental details and additional information about material characterization. See DOI: 10.1039/c5nr06531c

  8. Multi-Dimensional Health Assessment Questionnaire in China: Reliability, Validity and Clinical Value in Patients with Rheumatoid Arthritis

    PubMed Central

    Song, Yang; Zhu, Li-an; Wang, Su-li; Leng, Lin; Bucala, Richard; Lu, Liang-Jing

    2014-01-01

    Objective To evaluate the psychometric properties and clinical utility of the Chinese Multidimensional Health Assessment Questionnaire (MDHAQ-C) in patients with rheumatoid arthritis (RA) in China. Methods 162 RA patients were recruited in the evaluation process. The reliability of the questionnaire was tested by internal consistency and item analysis. Convergent validity was assessed by correlations of MDHAQ-C with the Health Assessment Questionnaire (HAQ), the 36-item Short-Form Health Survey (SF-36) and the Hospital Anxiety and Depression Scale (HAD). Discriminant validity was tested in groups of patients with varied disease activities and functional classes. To evaluate the clinical value, correlations were calculated between MDHAQ-C and indices of clinical relevance and disease activity. Agreement with the Disease Activity Score (DAS28) and Clinical Disease Activity Index (CDAI) was estimated. Results The Cronbach's alpha was 0.944 in the Function scale (FN) and 0.768 in the scale of psychological status (PS). The item analysis indicated that all the items of FN and PS are correlated at an acceptable level. MDHAQ-C correlated significantly with the questionnaires on most scales, and scale scores differed significantly in groups of different disease activity and functional status. MDHAQ-C showed moderate to high correlation with most clinical indices, with Spearman coefficients of 0.701 for DAS28 and 0.843 for CDAI. The overall agreement of categories was satisfactory. Conclusion MDHAQ-C is a reliable, valid instrument for functional measurement and a feasible, informative quantitative index for busy clinical settings in Chinese RA patients. PMID:24848431
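Cronbach's alpha, the internal-consistency statistic reported above, follows directly from the item and total-score variances. A minimal sketch (generic formula, synthetic scores rather than the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly parallel items drive alpha to 1, while uncorrelated items drive it toward 0, which is why values such as 0.944 (FN) indicate high internal consistency.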

  9. Multi-dimensional construction of a novel active yolk@conductive shell nanofiber web as a self-standing anode for high-performance lithium-ion batteries.

    PubMed

    Liu, Hao; Chen, Luyi; Liang, Yeru; Fu, Ruowen; Wu, Dingcai

    2015-12-21

    A novel active yolk@conductive shell nanofiber web with a unique synergistic advantage of various hierarchical nanodimensional objects including the 0D monodisperse SiO2 yolks, the 1D continuous carbon shell and the 3D interconnected non-woven fabric web has been developed by an innovative multi-dimensional construction method, and thus demonstrates excellent electrochemical properties as a self-standing LIB anode. PMID:26581017

  10. Analysis of patterns of atmospheric motions at different scales

    SciTech Connect

    Ludwig, F.L.

    1993-01-01

    Applications and limitations of fractal concepts to the study of atmospheric motions on scales of tens of meters or a few kilometers and the use of multiresolution feature analysis (MFA) for estimating fractal dimension are described. MFA applies specified correlation filters to a data field at different resolutions to allow the analyst to choose physically significant features for filtering. The scaling of the intensities of the spatial peaks in the filter outputs at the different scales is used to define fractal properties. MFA was extended from two-dimensional scalar applications to three-dimensional vector fields, and applied to observations of motions in sheared atmospheric boundary layers obtained by two National Oceanic and Atmospheric Administration Doppler radar systems (reduced to Cartesian coordinates by Schneider at the University of Oklahoma), and to corresponding large eddy simulation (LES) data from Costigan and coworkers at Colorado State University. MFA requires definition of physically significant features, which take the form of small-scale patterns of motion. Statistical techniques similar to principal component analysis were applied to small subvolumes of data to identify motion patterns that could be used as filters. These small-scale patterns differ from case to case, depending on the prevailing boundary layer stability. The most important features exhibit local enhancement and weakening of shear for the more stable conditions, while vortex-like features, tilted in the direction of the shear, are also important for unstable cases. Observed spatial variability of feature intensity patterns at different scales was compared with the LES results, to help understand the energy cascade. Observations suggest a support dimension between 2.3 and 2.5 for the unstable atmosphere's turbulent motions on spatial scales from about 200 m to 1000 m. Corresponding LES values indicate less intermittency.
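Support-dimension estimates like those quoted are most commonly made by box counting. The sketch below is an illustrative generic estimator (not the MFA algorithm itself) that regresses log N(s) on log(1/s) for a binary 2-D field:

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (support) dimension of a binary 2-D field
    by regressing log N(s) against log(1/s)."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        trimmed = mask[: h - h % s, : w - w % s]           # drop ragged edges
        blocks = trimmed.reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())       # boxes touching the support
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A space-filling field returns a dimension of 2; an intermittent turbulent support occupying a sparse subset of boxes returns a value below the embedding dimension, analogous to the 2.3-2.5 (of 3) reported above.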

  11. Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale

    USGS Publications Warehouse

    Nykanen, V.; Raines, G.L.

    2006-01-01

    A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, differing spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of the aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
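The weights-of-evidence contrast used above as the measure of predictive power follows directly from the 2x2 table of a binary evidence layer against training deposits. A minimal sketch of the standard formulas, with illustrative counts (not the Lapland data):

```python
import math

def woe_contrast(b_and_d, b_not_d, notb_d, notb_notd):
    """Weights of evidence for a binary predictor pattern B vs. deposits D.

    Arguments are the four cell counts of the 2x2 table:
    (B & D), (B & ~D), (~B & D), (~B & ~D).
    """
    n_d = b_and_d + notb_d                 # total deposit cells
    n_nd = b_not_d + notb_notd             # total non-deposit cells
    w_plus = math.log((b_and_d / n_d) / (b_not_d / n_nd))     # W+ = ln P(B|D)/P(B|~D)
    w_minus = math.log((notb_d / n_d) / (notb_notd / n_nd))   # W- = ln P(~B|D)/P(~B|~D)
    return w_plus, w_minus, w_plus - w_minus                  # contrast C = W+ - W-

# strongly predictive pattern: 8 of 10 deposits fall on B, only 2 of 10 non-deposits
wp, wm, c = woe_contrast(8, 2, 2, 8)
```

As the evidence layer is degraded (here, upward-continued to greater flight height), the table approaches independence and the contrast C decays toward zero, which is the effect the study measures.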

  12. Simple data acquisition method for multi-dimensional EPR spectral-spatial imaging using a combination of constant-time and projection-reconstruction modalities.

    PubMed

    Matsumoto, Ken-ichiro; Anzai, Kazunori; Utsumi, Hideo

    2009-04-01

    A combination of the constant-time spectral-spatial imaging (CTSSI) modality and projection-reconstruction modality was tested to simplify data acquisition for multi-dimensional CW EPR spectral-spatial imaging. In this method, 3D spectral-spatial image data were obtained by simple repetition of conventional 2D CW imaging process, except that the field gradient amplitude was incremented in constant steps in each repetition. The data collection scheme was no different from the conventional CW imaging system for spectral-spatial data acquisition. No special equipment and/or rewriting of existing software were required. The data acquisition process for multi-dimensional spectral-spatial imaging is consequently simplified. There is also no "missing-angle" issue because the CTSSI modality was employed to reconstruct 2D spectral-spatial images. Extra reconstruction processes to obtain higher spatial dimensions were performed using a conventional projection-reconstruction modality. This data acquisition technique can be applied to any conventional CW EPR (spatial) imaging system for multi-dimensional spectral-spatial imaging. PMID:19138539

  13. Time scale analysis of a digital flight control system

    NASA Technical Reports Server (NTRS)

    Naidu, D. S.; Price, D. B.

    1986-01-01

    In this paper, consideration is given to the fifth order discrete model of an aircraft (longitudinal) control system which possesses three slow (velocity, pitch angle and altitude) and two fast (angle of attack and pitch angular velocity) modes and exhibits a two-time scale property. Using the recent results of the time scale analysis of discrete control systems, the high-order discrete model is decoupled into low-order slow and fast subsystems. The results of the decoupled system are found to be in excellent agreement with those of the original system.
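The slow/fast decoupling idea can be illustrated on a toy discrete system. A hedged sketch, using a hypothetical 2x2 system with one slow eigenvalue (0.95) and one fast one (0.10) rather than the paper's fifth-order aircraft model: transforming to the eigenbasis block-diagonalises the dynamics so each subsystem can be propagated alone.

```python
import math

def eig2(A):
    # Eigenvalues/eigenvectors of a real 2x2 matrix with distinct real eigenvalues.
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    lams = [(tr + disc) / 2, (tr - disc) / 2]
    vecs = []
    for lam in lams:
        # A nonzero solution of (A - lam*I) v = 0.
        if abs(b) > 1e-12:
            vecs.append((b, lam - a))
        elif abs(c) > 1e-12:
            vecs.append((lam - d, c))
        else:
            vecs.append((1.0, 0.0))
    return lams, vecs

def simulate(A, x, steps):
    # Direct iteration of the coupled system x_{k+1} = A x_k.
    for _ in range(steps):
        x = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
    return x

A = [[0.95, 0.20], [0.0, 0.10]]   # eigenvalue 0.95: slow mode; 0.10: fast mode
lams, (v1, v2) = eig2(A)

# Decoupled propagation: expand x0 in the eigenbasis, evolve each mode alone.
x0, k = (1.0, 1.0), 20
det_v = v1[0] * v2[1] - v2[0] * v1[1]
z1 = (x0[0] * v2[1] - x0[1] * v2[0]) / det_v   # slow-mode amplitude
z2 = (x0[1] * v1[0] - x0[0] * v1[1]) / det_v   # fast-mode amplitude
x_decoupled = (z1 * lams[0] ** k * v1[0] + z2 * lams[1] ** k * v2[0],
               z1 * lams[0] ** k * v1[1] + z2 * lams[1] ** k * v2[1])
x_direct = simulate(A, x0, k)
```

After a few steps the fast mode (0.10^k) is negligible, so the slow subsystem alone reproduces the full response, which is the essence of the two-time-scale reduction.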

  14. SCALE system cross-section validation for criticality safety analysis

    SciTech Connect

    Hathout, A.M.; Westfall, R.M.; Dodds, H.L. Jr.

    1980-01-01

    The purpose of this study is to test selected data from three cross-section libraries for use in the criticality safety analysis of UO2 fuel rod lattices. The libraries, which are distributed with the SCALE system, are used to analyze potential criticality problems which could arise in the industrial fuel cycle for PWR and BWR reactors. Fuel lattice criticality problems could occur in pool storage, dry storage with accidental moderation, shearing and dissolution of irradiated elements, and in fuel transport and storage due to inadequate packing and shipping cask design. The data were tested by using the SCALE system to analyze 25 recently performed critical experiments.

  15. Scaled-particle theory analysis of cylindrical cavities in solution.

    PubMed

    Ashbaugh, Henry S

    2015-04-01

    The solvation of hard spherocylindrical solutes is analyzed within the context of scaled-particle theory, which takes the view that the free energy of solvating an empty cavitylike solute is equal to the pressure-volume work required to inflate a solute from nothing to the desired size and shape within the solvent. Based on our analysis, an end cap approximation is proposed to predict the solvation free energy as a function of the spherocylinder length from knowledge regarding only the solvent density in contact with a spherical solute. The framework developed is applied to extend Reiss's classic implementation of scaled-particle theory and a previously developed revised scaled-particle theory to spherocylindrical solutes. To test the theoretical descriptions developed, molecular simulations of the solvation of infinitely long cylindrical solutes are performed. In hard-sphere solvents classic scaled-particle theory is shown to provide a reasonably accurate description of the solvent contact correlation and resulting solvation free energy per unit length of cylinders, while the revised scaled-particle theory fitted to measured values of the contact correlation provides a quantitative free energy. Applied to the Lennard-Jones solvent at a state-point along the liquid-vapor coexistence curve, however, classic scaled-particle theory fails to correctly capture the dependence of the contact correlation. Revised scaled-particle theory, on the other hand, provides a quantitative description of cylinder solvation in the Lennard-Jones solvent with a fitted interfacial free energy in good agreement with that determined for purely spherical solutes. The breakdown of classical scaled-particle theory does not result from the failure of the end cap approximation, however, but is indicative of neglected higher-order curvature dependences on the solvation free energy. PMID:25974499
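The inflation-work picture and the end cap approximation described above can be written schematically. This is a sketch consistent with the abstract's description, not formulas quoted from the paper; G(λ) denotes the solvent contact correlation at a spherical cavity of radius λ and ρ the solvent density:

```latex
% Pressure--volume (inflation) work for growing a spherical cavity to radius R:
\beta \Delta G_{\mathrm{sph}}(R) \;=\; \rho \int_0^{R} G(\lambda)\, 4\pi\lambda^{2}\, \mathrm{d}\lambda
% End cap approximation for a spherocylinder of radius R and length L:
\Delta G_{\mathrm{sc}}(R, L) \;\approx\; \Delta G_{\mathrm{sph}}(R) \;+\; L\, g_{\mathrm{cyl}}(R),
\quad g_{\mathrm{cyl}}(R) = \text{free energy per unit length of the cylindrical body}
```

In this picture the per-length term is inferred from the spherical contact information alone, which is why the abstract can test the approximation against simulations of infinitely long cylinders.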

  16. Tectonomagmatic origin of Precambrian rocks of Mexico and Argentina inferred from multi-dimensional discriminant-function based discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Pandarinath, Kailasa

    2014-12-01

    Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate, and acid igneous rocks. Numerous extensive evaluations of these newly developed diagrams have indicated their successful application for inferring the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yielded inconsistent inferences of tectonic setting.
The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of Precambrian rocks.

  17. MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 A

    SciTech Connect

    Anusha, L. S.; Nagendra, K. N. E-mail: knn@iiap.res.in

    2013-04-20

    Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 A using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed by a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, whereas in the line core only the line-center polarization can be reproduced, through the Hanle effect.
For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with inhomogeneities in the chromosphere are required.

  18. Multi-dimensional classification of biomedical text: Toward automated, practical provision of high-utility text to diverse users

    PubMed Central

    Shatkay, Hagit; Pan, Fengxia; Rzhetsky, Andrey; Wilbur, W. John

    2008-01-01

    Motivation: Much current research in biomedical text mining is concerned with serving biologists by extracting certain information from scientific text. We note that there is no ‘average biologist’ client; different users have distinct needs. For instance, as noted in past evaluation efforts (BioCreative, TREC, KDD) database curators are often interested in sentences showing experimental evidence and methods. Conversely, lab scientists searching for known information about a protein may seek facts, typically stated with high confidence. Text-mining systems can target specific end-users and become more effective, if the system can first identify text regions rich in the type of scientific content that is of interest to the user, retrieve documents that have many such regions, and focus on fact extraction from these regions. Here, we study the ability to characterize and classify such text automatically. We have recently introduced a multi-dimensional categorization and annotation scheme, developed to be applicable to a wide variety of biomedical documents and scientific statements, while intended to support specific biomedical retrieval and extraction tasks. Results: The annotation scheme was applied to a large corpus in a controlled effort by eight independent annotators, where three individual annotators independently tagged each sentence. We then trained and tested machine learning classifiers to automatically categorize sentence fragments based on the annotation. We discuss here the issues involved in this task, and present an overview of the results. The latter strongly suggest that automatic annotation along most of the dimensions is highly feasible, and that this new framework for scientific sentence categorization is applicable in practice. Contact: shatkay@cs.queensu.ca PMID:18718948

  19. New Criticality Safety Analysis Capabilities in SCALE 5.1

    SciTech Connect

    Bowman, Stephen M; DeHart, Mark D; Dunn, Michael E; Goluoglu, Sedat; Horwedel, James E; Petrie Jr, Lester M; Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeño.

  20. Moderated regression analysis and Likert scales: too coarse for comfort.

    PubMed

    Russell, C J; Bobko, P

    1992-06-01

    One of the most commonly accepted models of relationships among three variables in applied industrial and organizational psychology is the simple moderator effect. However, many authors have expressed concern over the general lack of empirical support for interaction effects reported in the literature. We demonstrate in the current sample that use of a continuous dependent-response scale instead of a discrete, Likert-type scale causes moderated regression analysis effect sizes to increase by an average of 93%. We suggest that use of relatively coarse Likert scales to measure fine dependent responses causes information loss that, although varying widely across subjects, greatly reduces the probability of detecting true interaction effects. Specific recommendations for alternate research strategies are made. PMID:1601825
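The information-loss mechanism is easy to reproduce in simulation. A minimal sketch, using synthetic data with a true interaction effect; the effect size, scale range, and 5-point coarsening are illustrative assumptions, not the paper's design. Coarsening the response attenuates the correlation between the interaction term and the outcome.

```python
import random

random.seed(7)

def likert(y, lo, hi, points=5):
    # Collapse a continuous response onto a discrete 1..points rating scale.
    step = (hi - lo) / points
    return min(points, max(1, int((y - lo) / step) + 1))

def corr(u, v):
    # Pearson correlation of two equal-length sequences.
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
z = [random.gauss(0, 1) for _ in range(n)]
xz = [a * b for a, b in zip(x, z)]
# The data-generating model contains a true moderator (interaction) effect.
y = [a + b + 0.5 * ab + random.gauss(0, 1) for a, b, ab in zip(x, z, xz)]
y5 = [likert(v, -6, 6) for v in y]   # the same responses, coarsely measured

r_cont = corr(xz, y)      # interaction signal in the continuous response
r_likert = corr(xz, y5)   # the same signal after 5-point coarsening
```

The discretized response carries a weaker interaction signal, which is the coarseness effect the paper quantifies with moderated regression.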

  1. Bridgman crystal growth in low gravity - A scaling analysis

    NASA Technical Reports Server (NTRS)

    Alexander, J. I. D.; Rosenberger, Franz

    1990-01-01

    The results of an order-of-magnitude or scaling analysis are compared with those of numerical simulations of the effects of steady low gravity on compositional nonuniformity in crystals grown by the Bridgman-Stockbarger technique. In particular, the results are examined of numerical simulations of the effect of steady residual acceleration on the transport of solute in a gallium-doped germanium melt during directional solidification under low-gravity conditions. The results are interpreted in terms of the relevant dimensionless groups associated with the process, and scaling techniques are evaluated by comparing their predictions with the numerical results. It is demonstrated that, when convective transport is comparable with diffusive transport, some specific knowledge of the behavior of the system is required before scaling arguments can be used to make reasonable predictions.
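The dimensionless groups such a scaling analysis turns on can be assembled directly from process parameters. A sketch with order-of-magnitude placeholder values; the numbers are assumptions for illustration, not taken from the paper's gallium-doped germanium case.

```python
# Placeholder process parameters (illustrative assumptions only).
g = 9.81e-6      # residual acceleration, m/s^2 (~1e-6 of Earth gravity)
beta_T = 1.0e-4  # thermal expansion coefficient, 1/K
dT = 10.0        # characteristic temperature difference, K
L = 0.01         # characteristic length (ampoule radius), m
nu = 1.3e-7      # kinematic viscosity of the melt, m^2/s
D = 1.0e-8       # solute diffusivity, m^2/s
V = 1.0e-6       # growth (translation) rate, m/s

Gr = g * beta_T * dT * L ** 3 / nu ** 2   # Grashof: buoyancy vs. viscous forces
Sc = nu / D                               # Schmidt: momentum vs. solute diffusion
Pe = V * L / D                            # Peclet: convective vs. diffusive solute transport
```

With these placeholders the solutal Peclet number is of order one, i.e. convective and diffusive transport are comparable, which is exactly the regime where the paper finds that scaling arguments need supplementary knowledge of the system.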

  2. Analysis of Reynolds number scaling for viscous vortex reconnection

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin; Hussain, Fazle; Wang, Jianchun; Chen, Shiyi

    2012-10-01

    A theoretical analysis of viscous vortex reconnection is developed based on scale separation, and the Reynolds number scaling, Re (= circulation/viscosity), for the reconnection time T_rec is derived. The scaling varies continuously from T_rec ~ Re^{-1} to T_rec ~ Re^{-1/2} as Re increases. This theoretical prediction agrees well with direct numerical simulations by Garten et al. [J. Fluid Mech. 426, 1 (2001); doi:10.1017/S0022112000002251] and Hussain and Duraisamy [Phys. Fluids 23, 021701 (2011); doi:10.1063/1.3532039]. Moreover, our analysis yields two Reynolds numbers, namely, a characteristic Re_{0.75} in [O(10^2), O(10^3)] for the T_rec ~ Re^{-0.75} scaling given by Hussain and Duraisamy, and the critical Re_c ~ O(10^4) for the transition after which the first reconnection is completed. For Re > Re_c, a quiescent state follows, and then a second reconnection may occur.

  3. Microbial community analysis of a full-scale DEMON bioreactor.

    PubMed

    Gonzalez-Martinez, Alejandro; Rodriguez-Sanchez, Alejandro; Muñoz-Palazon, Barbara; Garcia-Ruiz, Maria-Jesus; Osorio, Francisco; van Loosdrecht, Mark C M; Gonzalez-Lopez, Jesus

    2015-03-01

    Full-scale applications of autotrophic nitrogen removal technologies for the treatment of digested sludge liquor have proliferated during the last decade. Among these technologies, the aerobic/anoxic deammonification process (DEMON) is one of the major applied processes. This technology achieves nitrogen removal from wastewater through anammox metabolism inside a single bioreactor due to alternating cycles of aeration. To date, the microbial community composition of full-scale DEMON bioreactors has never been reported. In this study, the bacterial community structure of a full-scale DEMON bioreactor located at the Apeldoorn wastewater treatment plant was analyzed using pyrosequencing. This technique provided a higher-resolution study of the bacterial assemblage of the system compared to other techniques used in lab-scale DEMON bioreactors. Results showed that the DEMON bioreactor was a complex ecosystem where ammonium oxidizing bacteria, anammox bacteria and many other bacterial phylotypes coexist. The potential ecological role of all phylotypes found is discussed. Thus, metagenomic analysis through pyrosequencing offered new perspectives on the functioning of the DEMON bioreactor by exhaustive identification of microorganisms, which play a key role in the performance of bioreactors. In this way, pyrosequencing has been proven a helpful tool for the in-depth investigation of the functioning of bioreactors at the microbiological scale. PMID:25245398
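Downstream of pyrosequencing, community structure is typically summarized from per-phylotype read counts. A minimal sketch; the taxa and counts below are hypothetical placeholders, not the Apeldoorn data.

```python
import math

def relative_abundance(counts):
    # Fraction of total reads assigned to each phylotype.
    total = sum(counts.values())
    return {taxon: c / total for taxon, c in counts.items()}

def shannon(counts):
    # Shannon diversity H' = -sum p_i ln p_i over phylotypes.
    return -sum(p * math.log(p)
                for p in relative_abundance(counts).values() if p > 0)

# Hypothetical read counts per functional group (illustrative only).
reads = {"anammox": 420, "ammonium oxidizers": 310,
         "nitrite oxidizers": 45, "heterotrophs": 225}
ra = relative_abundance(reads)
h = shannon(reads)
```

H' lies between 0 (one phylotype dominates completely) and ln(number of phylotypes) (perfectly even community), so it gives a one-number summary of the coexistence the abstract describes.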

  4. Multiple scales analysis of interface dynamics in ^4He.

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Ranjan; Prasad, Anoop; Weichman, Peter B.; Miller, Jonathan

    1997-03-01

    We describe theoretically the slow dynamics of the superfluid-normal interface that develops when a uniform heat current is passed through near-critical ^4He (R.V. Duncan, G. Ahlers, and V. Steinberg, Phys. Rev. Lett. 60, 1522 (1988), and references therein). Using a multiple scales analysis, along with microscopically derived matching conditions that determine how heat transport is converted from conduction to superfluid counterflow as the interface is crossed, we derive an effective two-dimensional phase equation, somewhat resembling the KPZ equation, for the interface response to internal thermal and external vibrational noise sources, focusing especially on the question of large-scale wandering and roughness. We also compare this work with a linear stability analysis which we have carried out. The results are relevant to the proposed NASA microgravity DYNAMX project (Czech. J. Phys. 46, Suppl. 1, 87 (1996)). We acknowledge financial support from the DYNAMX project.

  5. Validation of inelastic analysis by full-scale component testing

    SciTech Connect

    Griffin, D.S.; Dhalla, A.K.; Woodward, W.S.

    1987-02-01

    This paper compares theoretical and experimental results for full-scale, prototypical components tested at elevated-temperatures to provide validation for inelastic analysis methods, material models, and design limits. Results are discussed for piping elbow plastic and creep buckling, creep ratcheting, and creep relaxation; nozzle creep ratcheting and weld cracking; and thermal striping fatigue. Comparisons between theory and test confirm the adequacy of components to meet design requirements, but identify specific areas where life prediction methods could be made more precise.

  6. Empirical analysis of scaling and fractal characteristics of outpatients

    NASA Astrophysics Data System (ADS)

    Zhang, Li-Jiang; Liu, Zi-Xian; Guo, Jin-Li

    2014-01-01

    The paper uses power-law frequency distributions, power spectrum analysis, detrended fluctuation analysis, and surrogate data testing to evaluate outpatient registration data of two hospitals in China and to investigate the human dynamics of systems that use first-come, first-served (FCFS) protocols. The research results reveal that outpatient behavior follows scaling laws. The results also suggest that the time series of inter-arrival times exhibit 1/f noise and have positive long-range correlation. Our research may contribute to operational optimization and resource allocation in hospitals based on FCFS admission protocols.
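Of the methods listed, detrended fluctuation analysis is the most mechanical to sketch. A minimal first-order DFA in pure Python; the white-noise input is a stand-in for an inter-arrival-time series, for which a scaling exponent alpha near 0.5 indicates no long-range correlation, alpha > 0.5 persistence, and alpha near 1 corresponds to 1/f noise.

```python
import math, random

def dfa(series, scales):
    # First-order detrended fluctuation analysis: returns F(n) for each n.
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for v in series:
        acc += v - mean
        profile.append(acc)           # integrated (cumulative) signal
    fluct = []
    for n in scales:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # Closed-form least-squares line over i = 0..n-1.
            sx = (n - 1) * n / 2
            sxx = (n - 1) * n * (2 * n - 1) / 6
            sy = sum(seg)
            sxy = sum(i * s for i, s in enumerate(seg))
            b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            a = (sy - b * sx) / n
            sq += sum((s - (a + b * i)) ** 2 for i, s in enumerate(seg))
            count += n
        fluct.append((sq / count) ** 0.5)
    return fluct

# White noise: F(n) ~ n^alpha with alpha close to 0.5.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
f8, f64 = dfa(noise, [8, 64])
alpha = math.log(f64 / f8) / math.log(64 / 8)
```

The exponent is estimated from the slope of log F(n) versus log n; a real analysis would fit over many window sizes rather than two.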

  7. Bicoherence analysis of model-scale jet noise.

    PubMed

    Gee, Kent L; Atchley, Anthony A; Falco, Lauren E; Shepherd, Micah R; Ukeiley, Lawrence S; Jansen, Bernard J; Seiner, John M

    2010-11-01

    Bicoherence analysis has been used to characterize nonlinear effects in the propagation of noise from a model-scale, Mach-2.0, unheated jet. Nonlinear propagation effects are predominantly limited to regions near the peak directivity angle for this jet source and propagation range. The analysis also examines the practice of identifying nonlinear propagation by comparing spectra measured at two different distances and assuming far-field, linear propagation between them. This spectral comparison method can lead to erroneous conclusions regarding the role of nonlinearity when the observations are made in the geometric near field of an extended, directional radiator, such as a jet. PMID:21110528

  8. The scale analysis sequence for LWR fuel depletion

    SciTech Connect

    Hermann, O.W.; Parks, C.V.

    1991-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs.
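The SAS2H alternation of spectral and depletion steps can be caricatured in a few lines. A hedged sketch: a one-nuclide, one-group toy model with made-up numbers, standing in for the real coupling of XSDRNPM-S transport to ORIGEN-S point depletion.

```python
import math

def transport_solve(n_fissile):
    # Stand-in for the XSDRNPM-S spectral calculation: caricature the
    # spectrum change as a one-group cross section that falls as the
    # fissile content drops (numbers are illustrative, not SCALE data).
    return 40.0 * (0.5 + 0.5 * n_fissile)      # barns

def deplete(n_fissile, sigma_barns, flux, dt):
    # Stand-in for ORIGEN-S point depletion: dN/dt = -sigma * phi * N.
    return n_fissile * math.exp(-sigma_barns * 1e-24 * flux * dt)

n = 1.0                     # normalised fissile inventory
flux = 3e13                 # neutrons / cm^2 / s
dt = 100 * 86400            # one 100-day burnup step, in seconds
history = [n]
for _ in range(5):          # five burnup steps
    sigma = transport_solve(n)          # 1) spectrum -> burnup-dependent XS
    n = deplete(n, sigma, flux, dt)     # 2) depletion with those XS
    history.append(n)                   # composition for the next spectral step
```

The point of the two-part structure is visible even in the toy: each depletion step uses cross sections consistent with the composition it starts from, rather than a single fixed library.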

  9. Application of the Multi-Dimensional Surface Water Modeling System at Bridge 339, Copper River Highway, Alaska

    USGS Publications Warehouse

    Brabets, Timothy P.; Conaway, Jeffrey S.

    2009-01-01

    The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft3/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. 
Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would likely fill in during high flows. Extending Bridge 339 would accommodate higher discharges and re-align flow to the bridge.

  10. Confirmatory factor analysis of the supports intensity scale for children.

    PubMed

    Verdugo, Miguel A; Guillén, Verónica M; Arias, Benito; Vicente, Eva; Badia, Marta

    2016-01-01

    Support needs assessment instruments and recent research related to this construct have been more focused on adults with intellectual disability than on children. However, the design and implementation of Individualized Support Plans (ISP) must start at an early age. Currently, a project for the translation, adaptation and validation of the supports intensity scale for children (SIS-C) is being conducted in Spain. In this study, the internal structure of the scale was analyzed to shed light on the nature of this construct when evaluated in childhood. A total of 814 children with intellectual disability between 5 and 16 years of age participated in the study. Their support need level was assessed by the SIS-C, and a confirmatory factor analysis (CFA), including different hypotheses, was carried out to identify the optimal factorial structure of this scale. The CFA results indicated that a unidimensional model is not sufficient to explain our data structure. On the other hand, goodness-of-fit indices showed that both correlated first-order factors and higher-order factor models of the construct could explain the data obtained from the scale. Specifically, a better fit of our data with the correlated first-order factors model was found. These findings are similar to those identified in previous analyses performed with adults. Implications and directions for further research are discussed. PMID:26707926

  11. Multi-dimensional finite element code for the analysis of coupled fluid, energy, and solute transport (CFEST)

    SciTech Connect

    Gupta, S.K.; Kincaid, C.T.; Meyer, P.R.; Newbill, C.A.; Cole, C.R.

    1982-08-01

    The Seasonal Thermal Energy Storage Program is being conducted for the Department of Energy by Pacific Northwest Laboratory. A major thrust of this program has been the study of natural aquifers as hosts for thermal energy storage and retrieval. Numerical simulation of the nonisothermal response of the host media is fundamental to the evaluation of proposed experimental designs and field test results. This report represents the primary documentation for the coupled fluid, energy and solute transport (CFEST) code. Sections of this document are devoted to the conservation equations and their numerical analogues, the input data requirements, and the verification studies completed to date.

  12. Bayesian analysis of spatially-dependent functional responses with spatially-dependent multi-dimensional functional predictors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...

  13. Multi-Dimensional Analysis of the Forced Bubble Dynamics Associated with Bubble Fusion Phenomena. Final Topical Report

    SciTech Connect

    Lahey, Jr., Richard T.; Jansen, Kenneth E.; Nagrath, Sunitha

    2002-12-02

    A new adaptive-grid, 3-D FEM hydrodynamic shock (i.e., HYDRO) code called PHASTA-2C has been developed and used to investigate bubble implosion phenomena leading to ultra-high temperatures and pressures. In particular, it was shown that nearly spherical bubble compressions occur during bubble implosions and the predicted conditions associated with a recent ORNL Bubble Fusion experiment [Taleyarkhan et al., Science, March 2002] are consistent with the occurrence of D/D fusion.

  14. Assessing the Primary Schools--A Multi-Dimensional Approach: A School Level Analysis Based on Indian Data

    ERIC Educational Resources Information Center

    Sengupta, Atanu; Pal, Naibedya Prasun

    2012-01-01

    Primary education is essential for economic development in any country. Most studies give more emphasis to the final output (such as literacy, enrolment, etc.) rather than to the delivery of the entire primary education system. In this paper, we study school-level data from an Indian district, collected under the official DISE statistics.

  15. Simulator for unconventional gas resources multi-dimensional model SUGAR-MD. Volume I. Reservoir model analysis and validation

    SciTech Connect

    Not Available

    1982-01-01

    The Department of Energy, Morgantown Energy Technology Center, has been supporting the development of flow models for Devonian shale gas reservoirs. The broad objectives of this modeling program are: (1) To develop and validate a mathematical model which describes gas flow through Devonian shales. (2) To determine the sensitive parameters that affect deliverability and recovery of gas from Devonian shales. (3) To recommend laboratory and field measurements for determination of those parameters critical to the productivity and timely recovery of gas from the Devonian shales. (4) To analyze pressure and rate transient data from observation and production gas wells to determine reservoir parameters and well performance. (5) To study and determine the overall performance of Devonian shale reservoirs in terms of well stimulation, well spacing, and resource recovery as a function of gross reservoir properties such as anisotropy, porosity and thickness variations, and boundary effects. The flow equations that are the mathematical basis of the two-dimensional model are presented. It is assumed that gas transport to producing wells in Devonian shale reservoirs occurs through a natural fracture system into which matrix blocks of contrasting physical properties deliver contained gas. That is, the matrix acts as a uniformly distributed gas source in a fracture medium. Gas desorption from pore walls is treated as a uniformly distributed source within the matrix blocks. 24 references.

  16. Scaling analysis for the investigation of slip mechanisms in nanofluids

    NASA Astrophysics Data System (ADS)

    Savithiri, S.; Pattamatta, Arvind; Das, Sarit K.

    2011-07-01

    The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene glycol-based nanofluids is analyzed by considering shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are dominant in particles of cylindrical shape as compared to that of spherical and sheet particles. The magnitudes of slip mechanisms are found to be higher for particles of size between 10 and 80 nm. The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fraction. However, the drag force is found to dominate in smaller particles below 10 nm and at higher volume fraction. The effect of thermophoresis and Magnus forces is found to increase with the particle size and concentration. In terms of time scales, the Brownian and gravity forces act considerably over a longer duration than the other forces. For copper-water-based nanofluid, the effective contribution of slip mechanisms leads to a heat transfer augmentation which is approximately 36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it.
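The size dependence of two of these slip mechanisms can be sketched with textbook Stokes and Stokes-Einstein estimates. The property values (copper particles in water) are rough assumptions, and the sketch reproduces only the qualitative trend that Brownian agitation matters most for the smallest particles, not the paper's full force balance.

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0            # temperature, K
mu = 8.9e-4          # dynamic viscosity of water, Pa*s (approx., 25 C)
rho_p = 8960.0       # copper density, kg/m^3

def momentum_relaxation_time(R):
    # Stokes drag relaxation time: tau = m / (6 pi mu R) = 2 rho_p R^2 / (9 mu).
    return 2 * rho_p * R ** 2 / (9 * mu)

def brownian_diffusion_time(R):
    # Time to diffuse one particle radius: tau_B = R^2 / D,
    # with the Stokes-Einstein diffusivity D = kB T / (6 pi mu R).
    D = kB * T / (6 * math.pi * mu * R)
    return R ** 2 / D

# Compare a 10 nm and a 100 nm diameter particle (radius = d/2).
times = {d: (momentum_relaxation_time(d * 1e-9 / 2),
             brownian_diffusion_time(d * 1e-9 / 2)) for d in (10, 100)}
```

The drag relaxation time grows as R^2 while the Brownian diffusion time grows as R^3, so their ratio shifts with particle size, one simple way to see why different mechanisms dominate in different size ranges.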

  17. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
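    The box-counting procedure behind fractal dimension estimates of this kind can be sketched compactly. This is a generic illustration of the technique, not the ICAMS implementation; the function name and the grid test case are invented for the example. A point set that fills the plane should recover a dimension near 2.

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set by box counting.

    points: iterable of (x, y) pairs in the unit square.
    sizes: box edge lengths to test.
    Returns the least-squares slope of log N(s) versus log(1/s).
    """
    logs, logN = [], []
    for s in sizes:
        # Count the distinct boxes of edge length s that contain a point.
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logN.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logN) / n
    return sum((a - mx) * (b - my) for a, b in zip(logs, logN)) / \
           sum((a - mx) ** 2 for a in logs)

# A filled unit square sampled on a regular grid has dimension close to 2.
grid = [(i / 64.0, j / 64.0) for i in range(64) for j in range(64)]
d = box_counting_dimension(grid, sizes=[1 / 4, 1 / 8, 1 / 16])
```

    For a monofractal surface the fitted slope is constant across scales; for real imagery it drifts, which is exactly the scale dependence the abstract describes.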

  18. Two-field analysis of no-scale supergravity inflation

    SciTech Connect

    Ellis, John; García, Marcos A. G.; Olive, Keith A.; Nanopoulos, Dimitri V. E-mail: garciagarcia@physics.umn.edu E-mail: olive@physics.umn.edu

    2015-01-01

    Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure f{sub NL}, finding that it is well below the current experimental sensitivity.

  19. Reactor Physics Methods and Analysis Capabilities in SCALE

    SciTech Connect

    Mark D. DeHart; Stephen M. Bowman

    2011-05-01

    The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.

  1. Scaling and dimensional analysis of acoustic streaming jets

    SciTech Connect

    Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.

    2014-09-15

    This paper focuses on acoustic streaming free jets, that is, configurations in which progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem, though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
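    One of the dimensionless groups in such an analysis, the jet Reynolds number, is easy to evaluate for candidate working fluids. The sketch below is not taken from the paper: the jet size and speed are invented, and the fluid properties are assumed order-of-magnitude values, purely to illustrate why full similarity between water and a liquid metal is hard to achieve.

```python
def reynolds(velocity_m_s, length_m, kinematic_viscosity_m2_s):
    """Jet Reynolds number Re = U * L / nu, one dimensionless group."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

# Assumed order-of-magnitude kinematic viscosities (illustrative only):
nu_water = 1.0e-6   # m^2/s, water near 20 C
nu_sodium = 7.0e-7  # m^2/s, liquid sodium, rough figure

# A hypothetical 1 cm wide streaming jet moving at 10 cm/s:
re_water = reynolds(0.10, 0.01, nu_water)
re_sodium = reynolds(0.10, 0.01, nu_sodium)
```

    Matching Re across the two fluids forces a different velocity-length combination, which then breaks similarity in the attenuation-related groups; relaxing one group, as the authors do, is the usual way out.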

  2. A research on analysis method of land environment big data storage based on air-earth-life

    NASA Astrophysics Data System (ADS)

    Lu, Yanling; Li, Jingwen

    2015-12-01

    Urban development raises many land-environment problems, and with the support of 3S technology, land-environment research has evolved toward coupled spatial-temporal scales. Combining the spatial, temporal, and attribute features of land-environment change within an "air-earth-life" framework, this paper investigates storage and analysis methods for land-environment big data. Because traditional processing methods are limited when handling spatio-temporal land-environment data, the proposed approach aims to reflect the organic coupling among the multi-dimensional elements of the land environment and to provide a theoretical basis for data storage in a big data analysis platform for the land environment.

  3. Dehazing method through polarimetric imaging and multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin

    2015-05-01

    An approach for haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that hazy weather weakens the interpretation of remote sensing imagery through poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure have been considered. On the other hand, the fact that objects and haze possess different frequency distribution properties has been emphasized. Multi-scale analysis through the wavelet transform has therefore been employed so that the low-frequency components, where haze resides, and the high-frequency coefficients, which carry image details and edges, can be processed separately. According to the measure of the polarization feature by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in hazy weather, from which the best polarized image min I and the worst one max I can be synthesized. Afterwards, these two haze-contaminated polarized images were decomposed into different spatial layers with wavelet analysis; the low-frequency images were processed via a polarization dehazing algorithm, while the high-frequency components were manipulated with a nonlinear transform. The ultimate haze-free image is then reconstructed by inverse wavelet reconstruction. Experimental results verify that the dehazing method proposed in this study markedly improves image visibility and increases detection distance through haze for imaging warning and remote sensing systems.
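    The key idea of splitting a signal into low-frequency (haze-dominated) and high-frequency (detail-dominated) parts can be illustrated with a one-level, one-dimensional Haar transform. The real method operates on 2-D polarized images with multiple wavelet levels; the function names and the toy signal here are invented for the sketch.

```python
def haar_decompose(signal):
    """One-level Haar wavelet transform: returns (approximation, detail).

    Assumes an even-length signal. The approximation carries low
    frequencies (haze), the detail carries edges and fine structure.
    """
    approx = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of the one-level transform."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

# The two bands can now be processed independently (dehaze `approx`,
# sharpen `detail`) before reconstruction:
sig = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
a, d = haar_decompose(sig)
restored = haar_reconstruct(a, d)
```

    Perfect reconstruction when neither band is modified is what makes this a safe place to insert band-specific processing.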

  4. Dimensional analysis scaling of impact craters in unconsolidated granular materials

    NASA Astrophysics Data System (ADS)

    Dowling, David R.; Dowling, Thomas R.

    2013-11-01

    Dimensional analysis is a general technique for determining how the independent parameters that describe physical phenomena must be arranged to produce dimensionally self-consistent results. This presentation describes how dimensional analysis may be successfully applied to the formation of impact craters produced by dropping spherical objects into a bed of unconsolidated granular material. The experiment is simple and safe, and laboratory results for different impact energies (0.001 to 1.6 J), seven different spheres (masses from 4 to 64 grams, diameters from 1.0 to 4.3 cm), and two different dry granular materials (granulated sugar, and playground sand) may be collapsed to a single power-law using parametric scaling determined from dimensional analysis. Thus, impact crater formation may provide a useful validation test for simulations of granular material dynamics. Interestingly, the scaling law shows that the impacting sphere's diameter is not a parameter. And, the resulting power law can be extrapolated, with some success, over more than 16 orders of magnitude to produce an independent estimate of the impact energy that formed the 1.2-km-diameter Barringer Meteor Crater in northern Arizona.
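    The power-law collapse described above amounts to fitting a straight line in log-log space. A minimal sketch, using synthetic data generated with an assumed quarter-power exponent rather than the authors' measurements (the function name and constant are invented for illustration):

```python
import math

def fit_power_law(energies, diameters):
    """Least-squares slope of log(D) versus log(E): the power-law exponent."""
    lx = [math.log(e) for e in energies]
    ly = [math.log(d) for d in diameters]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
           sum((a - mx) ** 2 for a in lx)

# Synthetic crater diameters following D = C * E**0.25 over the impact
# energy range quoted in the abstract (0.001 to 1.6 J):
E = [0.001, 0.01, 0.1, 1.0, 1.6]
D = [0.05 * e ** 0.25 for e in E]
exponent = fit_power_law(E, D)
```

    Extrapolating such a fit by many orders of magnitude, as the abstract does for the Barringer crater, is only meaningful because dimensional analysis guarantees the functional form rather than just the local trend.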

  5. Automated Sholl analysis of digitized neuronal morphology at multiple scales.

    PubMed

    Kutzing, Melinda K; Langhammer, Christopher G; Luo, Vincent; Lakdawala, Hersh; Firestein, Bonnie L

    2010-01-01

    Neuronal morphology plays a significant role in determining how neurons function and communicate. Specifically, it affects the ability of neurons to receive inputs from other cells and contributes to the propagation of action potentials. The morphology of the neurites also affects how information is processed. The diversity of dendrite morphologies facilitates local and long range signaling and allows individual neurons or groups of neurons to carry out specialized functions within the neuronal network. Alterations in dendrite morphology, including fragmentation of dendrites and changes in branching patterns, have been observed in a number of disease states, including Alzheimer's disease, schizophrenia, and mental retardation. The ability to both understand the factors that shape dendrite morphologies and to identify changes in dendrite morphologies is essential in the understanding of nervous system function and dysfunction. Neurite morphology is often analyzed by Sholl analysis and by counting the number of neurites and the number of branch tips. This analysis is generally applied to dendrites, but it can also be applied to axons. Performing this analysis by hand is time consuming and inevitably introduces variability due to experimenter bias and inconsistency. The Bonfire program is a semi-automated approach to the analysis of dendrite and axon morphology that builds upon available open-source morphological analysis tools. Our program enables the detection of local changes in dendrite and axon branching behaviors by performing Sholl analysis on subregions of the neuritic arbor. For example, Sholl analysis is performed on both the neuron as a whole and on each subset of processes (primary, secondary, terminal, root, etc.). Dendrite and axon patterning is influenced by a number of intracellular and extracellular factors, many acting locally.
Thus, the resulting arbor morphology is a result of specific processes acting on specific neurites, making it necessary to perform morphological analysis on a smaller scale in order to observe these local variations. The Bonfire program requires the use of two open-source analysis tools, the NeuronJ plugin to ImageJ and NeuronStudio. Neurons are traced in ImageJ, and NeuronStudio is used to define the connectivity between neurites. Bonfire contains a number of custom scripts written in MATLAB (MathWorks) that are used to convert the data into the appropriate format for further analysis, check for user errors, and ultimately perform Sholl analysis. Finally, data are exported into Excel for statistical analysis. A flow chart of the Bonfire program is shown in Figure 1. PMID:21113115
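    The core of a Sholl analysis, counting how many neurite segments cross each concentric circle around the soma, can be sketched directly. This toy version is not the Bonfire code; its function name and test neurite are invented for the example.

```python
import math

def sholl_counts(segments, soma, radii):
    """Count traced neurite segments crossing each concentric circle.

    segments: list of ((x1, y1), (x2, y2)) neurite pieces from a tracing.
    soma: (x, y) circle center; radii: circle radii in the same units.
    A segment crosses a circle when its endpoints lie on opposite sides.
    """
    def dist(p):
        return math.hypot(p[0] - soma[0], p[1] - soma[1])
    counts = []
    for r in radii:
        counts.append(sum(1 for p, q in segments
                          if (dist(p) - r) * (dist(q) - r) < 0))
    return counts

# A single straight neurite running from the soma out to radius 3
# crosses each circle exactly once:
neurite = [((0, 0), (1, 0)), ((1, 0), (2, 0)), ((2, 0), (3, 0))]
counts = sholl_counts(neurite, soma=(0, 0), radii=[0.5, 1.5, 2.5])
```

    Running the same routine separately on primary, secondary, and terminal processes is what localizes branching changes to a subregion of the arbor.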

  6. A Multi-scale Approach to Urban Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Gluch, Renne; Quattrochi, Dale A.

    2005-01-01

    An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.

  7. Scaling analysis of biogeochemical parameters in coastal waters

    NASA Astrophysics Data System (ADS)

    Zongo, Sylvie; Schmitt, François

    2010-05-01

    Monitoring data are very useful for rapidly providing quality-controlled measurements of many aquatic environmental parameters, and thus for understanding the spatio-temporal structure which governs the dynamics. We consider here long biogeochemical time series from automatic continuous monitoring in the eastern English Channel: coastal waters, estuarine waters and river waters. In the first analysis, we consider data from the MAREL system (automatic monitoring network): the MAREL Carnot buoy, situated in the coastal waters of Boulogne-sur-Mer, together with data from the Honfleur MAREL buoy (an estuarine station in the bay of Seine). The MAREL system is based on the deployment of data buoys with marine water analysis capabilities operating in an automated mode. It is equipped with high performance technologies for water analysis and real time data transmission, and records many parameters at fixed locations: temperature, dissolved oxygen (DO), pH, chlorophyll a (Chla) and salinity, with high frequency resolution (10 or 20 minutes). We also consider data from the Wimereux river off Boulogne-sur-Mer. Two sets of data were recorded in the Wimereux river, downstream and upstream, using temperature, dissolved oxygen, turbidity and salinity sensors. This monitoring provided an approach to the spatio-temporal dynamics of two zones: the first, downstream, governed by marine hydrodynamics; the second related to the upstream flow waters. All these time series reveal large fluctuations at many time scales. The large number of data provided by the sensors enables Fourier spectral analysis, in order to identify the dominant frequencies associated with the dynamics. This shows the impact of turbulence and of the tidal cycle on the high variability of these parameters. The spectra show quite clear scaling regimes, which are compared to that of temperature, as a reference turbulent passive scalar.
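    The Fourier spectral analysis mentioned above reduces to computing a periodogram and inspecting where power concentrates. A minimal pure-Python sketch (the function name and test signal are invented; real monitoring data would show a broad power-law scaling regime rather than a single spectral line):

```python
import math

def periodogram(x):
    """Discrete Fourier power spectrum |X_k|^2 for k = 1 .. N//2 - 1.

    O(N^2) direct evaluation; fine for a short illustration, an FFT
    would be used for long monitoring records.
    """
    n = len(x)
    power = []
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    return power

# A pure oscillation with 8 cycles over the record concentrates its
# power at wavenumber k = 8 (e.g. a tidal line in a long series):
n = 64
series = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
p = periodogram(series)
peak_k = 1 + p.index(max(p))
```

    Fitting the slope of log-power versus log-frequency over the continuum between such lines gives the scaling exponent compared against the passive-scalar reference.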

  8. Investigation of Biogrout processes by numerical analysis at pore scale

    NASA Astrophysics Data System (ADS)

    Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.

    2013-04-01

    Biogrout is a soil improving process that aims to improve the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification facilitated by bacteria indigenous to the soil using substrates, which can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost effective and environmentally friendly process. The goal of this research is to improve the understanding of the process by numerical analysis so that it may be improved and applied properly for varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process there are many phases present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms and gas bubbles. Due to the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions, making the process very complex. To understand this complexity in the system, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, crystal nucleation and growth in 2D at pore scale. The flow will be described by the Navier-Stokes equations. Initial studies and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next biofilms and lastly gas bubbles may be added to the model.
From the results of these pore scale models the behaviour of the system may be studied and eventually observations may be extrapolated to a larger continuum scale.

  9. Psychometric analysis of the Ten-Item Perceived Stress Scale.

    PubMed

    Taylor, John M

    2015-03-01

    Although the 10-item Perceived Stress Scale (PSS-10) is a popular measure, a review of the literature reveals 3 significant gaps: (a) There is some debate as to whether a 1- or a 2-factor model best describes the relationships among the PSS-10 items, (b) little information is available on the performance of the items on the scale, and (c) it is unclear whether PSS-10 scores are subject to gender bias. These gaps were addressed in this study using a sample of 1,236 adults from the National Survey of Midlife Development in the United States II. Based on self-identification, participants were 56.31% female, 77% White, 17.31% Black and/or African American, and the average age was 54.48 years (SD = 11.69). Findings from an ordinal confirmatory factor analysis suggested the relationships among the items are best described by an oblique 2-factor model. Item analysis using the graded response model provided no evidence of item misfit and indicated both subscales have a wide estimation range. Although t tests revealed a significant difference between the means of males and females on the Perceived Helplessness Subscale (t = 4.001, df = 1234, p < .001), measurement invariance tests suggest that PSS-10 scores may not be substantially affected by gender bias. Overall, the findings suggest that inferences made using PSS-10 scores are valid. However, this study calls into question inferences where the multidimensionality of the PSS-10 is ignored. PMID:25346996

  10. Analysis of a Two Wrap Meso Scale Scroll Pump

    NASA Astrophysics Data System (ADS)

    Moore, Eric J.; Muntz, E. Phillip; Erye, Francis; Myung, Nosang; Orient, Otto; Shcheglov, Kirill; Wiberg, Dean

    2003-05-01

    The scroll pump is an interesting positive displacement pump. One scroll in the form of an Archimedes spiral moves with respect to another, similarly shaped stationary scroll, forming a peristaltic pumping action. The moving scroll traces an orbital path but is maintained at a constant angular orientation. Pockets of gas are forced along the fixed scroll from its periphery, eventually reaching the center where the gas is discharged. A model of a multi-wrap scroll pump was created and applied to predict pumping performance. Meso-scale scroll pumps have been proposed for use as roughing pumps in mobile, sampling mass spectrometer systems. The main objective of the present analysis is to obtain estimates of a scroll pump's performance, taking into account the effect of manufacturing tolerances, in order to determine if the meso scale scroll pump will meet the necessarily small power and volume requirements associated with mobile, sampling mass spectrometer systems. The analysis involves developing the governing equations for the pump in terms of several operating parameters, taking into account the leaks to and from the trapped gases as they are displaced to the discharge port. The power and volume required for pumping tasks are also obtained in terms of the operating parameters and pump size. Performance evaluations such as power and volume per unit of pumped gas upflow are obtained.

  11. Analysis of hydrological triggered clayey landslides by small scale experiments

    NASA Astrophysics Data System (ADS)

    Spickermann, A.; Malet, J.-P.; van Asch, T. W. J.; Schanz, T.

    2010-05-01

    Hydrological processes, such as slope saturation by water, are a primary cause of landslides. This effect can occur in the form of, e.g., intense rainfall, snowmelt or changes in ground-water levels. Hydrological processes can trigger a landslide and control subsequent movement. In order to forecast potential landslides, it is important to know both the mechanism leading to failure, to evaluate whether a slope will fail or not, and the mechanisms that control the movement of the failed mass, to estimate how much material will move and in what time. Despite numerous studies, there is still uncertainty in the explanation of the processes determining failure and post-failure behaviour. The background and motivation for the study is the Barcelonnette area, part of the Ubaye Valley in the southern French Alps, which is highly affected by hydrologically controlled landslides in reworked black marls. Since landslide processes are too complex to understand by field observation alone, experiments and computer calculations are used. The main focus of this work is to analyse the initiation of failure and the post-failure behaviour of hydrologically triggered landslides in clays by small-scale experiments, namely small-scale flume tests and centrifuge tests. Although much effort has been devoted to investigating the landslide problem by either small-scale or even large-scale slope experiments, there is still no optimal solution. Small-scale flume tests are often criticised because of scale-effect problems, dominant in dense sands and cohesive material, and boundary problems. By means of centrifuge tests the scale problem with respect to stress conditions is overcome, but centrifuge testing is accompanied by problems of its own.
The objectives of the work are 1) to review potential failure and post-failure mechanisms, 2) to evaluate small-scale experiments, namely flume and centrifuge tests, in the analysis of the failure behaviour of clayey slopes and 3) to interpret the failure behaviour and possible mechanisms in tests on Zoelen clay and black marls by numerical calculations. After a general view of mechanisms that might initiate failure and mechanisms that might determine post-failure motion, relevant for landslides occurring in non-cohesive and cohesive slopes, the performed tests on reworked black marls are presented. The problems and restrictions of both test methods are explained, and strategies for future tests are discussed. The mechanisms assumed to trigger failure and control post-failure motion observed in the tests are examined by numerical modelling. It is shown that the results of the numerical simulation make an important contribution to the interpretation of the experimental observations and to the evaluation of the small-scale experiments.

  12. Surface Roughness from Point Clouds - A Multi-Scale Analysis

    NASA Astrophysics Data System (ADS)

    Milenković, Milutin; Ressl, Camillo; Hollaus, Markus; Pfeifer, Norbert

    2013-04-01

    Roughness is a physical parameter of surfaces which should capture surface complexity in geophysical models. In hydrodynamic modeling, e.g., roughness should estimate the resistance exerted by the surface on the flow; in remote sensing, how the signal is scattered. Roughness needs to be estimated as a parameter of the model. This has been identified as a main source of the uncertainties in model prediction, mainly due to the errors that follow traditional roughness estimation, e.g. from surface profiles, or by visual interpretation and manual delineation from aerial photos. Currently, roughness estimation is shifting towards point clouds of surfaces, which primarily come from laser scanning and image matching techniques. However, those data sets are also not free of errors and may affect roughness estimation. Our study focusses on the estimation of roughness indices from different point clouds, and the uncertainties that follow such a procedure. The analysis is performed on a graveled surface of a river bed in Eastern Austria, using point clouds acquired by a triangulating laser scanner (Minolta Vivid 910), photogrammetry (DSLR camera), and a terrestrial laser scanner (Riegl FWF scanner). To enable their comparison, all the point clouds are transformed to a common superior coordinate system. Then, different roughness indices are calculated and compared at different scales, including stochastic and feature-based indices such as RMS of elevation, standard deviation, peak-to-valley height and openness. The analysis is additionally supported by the spectral signatures (frequency domain) of the different point clouds. The selected techniques provide point clouds of different resolution (0.1-10cm) and coverage (0.3-10m), which also justifies the multi-scale roughness analysis. By doing this, it becomes possible to differentiate between the measurement errors and the roughness of the object at the resolutions of the point clouds.
Parts of this study have been funded by the project NEWFOR in the framework of European Territorial Cooperation Alpine Space.
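    The scale dependence of a simple roughness index such as windowed RMS height can be illustrated on a synthetic 1-D profile. This sketch is not tied to the study's point clouds; the profile and the window sizes are invented for the example.

```python
import math

def rms_roughness(profile, window):
    """Mean RMS height deviation computed in non-overlapping windows.

    Larger windows admit longer surface wavelengths, so the index grows
    with scale for most natural surfaces.
    """
    rms_values = []
    for start in range(0, len(profile) - window + 1, window):
        chunk = profile[start:start + window]
        mean = sum(chunk) / window
        rms_values.append(math.sqrt(sum((z - mean) ** 2 for z in chunk) / window))
    return sum(rms_values) / len(rms_values)

# A long-wavelength undulation is nearly invisible in small windows and
# dominates in large ones:
profile = [math.sin(2 * math.pi * i / 100.0) for i in range(200)]
fine = rms_roughness(profile, window=5)
coarse = rms_roughness(profile, window=100)
```

    Comparing such curves of roughness versus window size across the three sensors is one way to separate measurement noise (flat at fine scales) from true surface structure.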

  13. Scaled models in the analysis of fire-structure interaction

    NASA Astrophysics Data System (ADS)

    Andreozzi, A.; Bianco, N.; Musto, M.; Rotondo, G.

    2015-11-01

    A fire problem has been scaled in terms of geometry, boundary conditions, and material thermophysical properties by means of dimensionless parameters. Both the full and scaled models have been solved numerically for two different fire power values. Results obtained by means of the full scale model and the scaled one are compared in terms of velocity and temperature profiles in order to assess the reliability of the scaled model to represent the behavior of the full scale one.

  14. Multi-dimensional Upwind Fluctuation Splitting Scheme with Mesh Adaption for Hypersonic Viscous Flow. Degree awarded by Virginia Polytechnic Inst. and State Univ., 9 Nov. 2001

    NASA Technical Reports Server (NTRS)

    Wood, William A., III

    2002-01-01

    A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.

  15. Reliability analysis of a utility-scale solar power plant

    NASA Astrophysics Data System (ADS)

    Kolb, G. J.

    1992-10-01

    This paper presents the results of a reliability analysis for a solar central receiver power plant that employs a salt-in-tube receiver. Because reliability data for a number of critical plant components have only recently been collected, this is the first time a credible analysis can be performed. This type of power plant will be built by a consortium of western US utilities led by the Southern California Edison Company. The 10 MW plant is known as Solar Two and is scheduled to be on-line in 1994. It is a prototype which should lead to the construction of 100 MW commercial-scale plants by the year 2000. The availability calculation was performed with the UNIRAM computer code. The analysis predicted a forced outage rate of 5.4 percent and an overall plant availability, including scheduled outages, of 91 percent. The code also identified the most important contributors to plant unavailability. Control system failures were identified as the most important cause of forced outages. Receiver problems were rated second with turbine outages third. The overall plant availability of 91 percent exceeds the goal identified by the US utility study. This paper discusses the availability calculation and presents evidence why the 91 percent availability is a credible estimate.
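    The headline numbers are consistent with treating forced and scheduled outages as independent. A worked sketch, in which the scheduled-outage fraction (about 3.8%) is an assumed figure chosen to reproduce the reported 91%, not a value from the paper:

```python
def overall_availability(forced_outage_rate, scheduled_outage_fraction):
    """Overall availability when forced and scheduled outages are
    independent: the fraction of time the plant is in neither state."""
    return (1.0 - forced_outage_rate) * (1.0 - scheduled_outage_fraction)

# Reported 5.4% forced outage rate; assumed ~3.8% scheduled outages:
a = overall_availability(0.054, 0.038)
```

    UNIRAM itself builds this figure bottom-up from component-level failure and repair data rather than from two aggregate rates, which is why it can also rank the contributors to unavailability.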

  16. Large-scale dimension densities for heart rate variability analysis

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen

    2006-04-01

    In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF group (large-scale dimension density 0.97 ± 0.02) from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65 ± 0.13; EH, 0.54 ± 0.05; YH, 0.57 ± 0.05; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in the dimension density (day, 0.65 ± 0.13; night, 0.66 ± 0.12; n.s.) in contrast to healthy controls (day, 0.54 ± 0.05; night, 0.61 ± 0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
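    The Grassberger-Procaccia correlation sum underlying the LASDID estimate can be sketched for a 1-D point set. This toy version omits the paper's boundary correction, normalization and delay embedding; its function names and test data are invented for the example.

```python
import math

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    point pairs closer than r."""
    n = len(points)
    pairs = sum(1 for i in range(n) for j in range(i + 1, n)
                if abs(points[i] - points[j]) < r)
    return 2.0 * pairs / (n * (n - 1))

def dimension_estimate(points, r1, r2):
    """Local slope of log C(r) versus log r between two scales, an
    estimate of the correlation dimension."""
    return (math.log(correlation_sum(points, r2)) -
            math.log(correlation_sum(points, r1))) / (math.log(r2) - math.log(r1))

# Points evenly filling a line segment have correlation dimension
# close to 1:
pts = [i / 1000.0 for i in range(1000)]
d = dimension_estimate(pts, r1=0.01, r2=0.05)
```

    The "dimension density" variant evaluates this slope at the large-scale end of the C(r) curve, which is where the boundary correction becomes essential for short records like HRV.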

  17. The effect of temporal observation scale on extreme rainfall analysis

    NASA Astrophysics Data System (ADS)

    Camarasa, A. M.; Soriano, J.

    2009-09-01

    Mediterranean storms usually show high intensity and irregularity of rainfall. A single torrential event can double, or even triple, the average annual rainfall. These features, in turn, determine rainfall-runoff conversion and other hydrological processes. As a consequence, flash floods and the hydrological behaviour of ephemeral streams are dominated by these extreme events. However, the internal structure of storms varies according to the time scale at which data are collected. As the observation interval is reduced, intensity becomes more significant and emphasizes the concentrated character of the precipitation. Moreover, an equivalent amount of rainfall, registered at different time scales, can result in different rainfall spatial patterns, and this can help to identify which factors are important at each measurement time scale. This paper analyzes the temporal and spatial variations of rainfall pattern associated with different time scales of data collection. The study area is the territory of the River Júcar Water Authority. This area covers a surface of 43,000 km2 and shows varied geographical features (topography ranging from 0 to 2000 m, sea influence, inland and coastal territories, different exposure to wet winds, etc.). Rainfall data are collected every five minutes by the Automatic Hydrological Information System (SAIH) from 147 rain gauges, covering a continuous 13-year period (1994-2007). Precipitation data have been rescaled in order to obtain rainfall parameters every 5 minutes, 15 minutes, 30 minutes, 1 hour, 6 hours, 12 hours and 24 hours. Indicators of cumulative rainfall, maximum intensity, irregularity, probability of rain and persistence of rain have been estimated for every time scale. From a time-scale perspective, results show that two variables, "cumulative rainfall" and "probability of rain", follow a positive, time-dependent logarithmic trend.
    "Cumulative rainfall" shows a change of trend at the 6-hour time scale, while "probability of rain" changes its trend after 1 hour. The variables "maximum intensity", "irregularity" and "persistence of rain" show negative trends, fitting time-dependent power functions; all three change trend after 1 hour. Regarding the spatial pattern of these variables, qualitative analyses have been made. Results show changes in the factors influencing this pattern depending on the length of the measurement interval. Thus, for "cumulative rainfall" and "maximum intensity", increasing the time interval implies a reduction of the area affected by the maximum values. Moreover, at the 5-minute interval the altitude factor is determinant, while for longer intervals factors such as distance to the sea and the exposure of orographic structures to dominant wet winds gain importance. "Irregularity" shows, at 5 minutes, the highest values in the plains near the sea (exposed to easterly winds) and in the first line of relief or in valleys open to the sea (which acts as a trigger of instability). As the time interval increases, other factors become important: distance to the sea, the effect of a second inland alignment of relief, and exposure to wet north-easterly winds. Concerning "probability of rain", the 5-minute interval shows the importance of exposure to wet north-easterly winds plus the effect of the relief as a trigger. As the time interval increases, the presence of mountainous areas in combination with westerly winds acquires prominence. "Persistence of rain" is related to the distance of the first mountainous alignments from the sea and to exposure to north-easterly and south-easterly winds. As the time interval increases, the areas of highest persistence shrink to the zone exposed to north-easterly winds in combination with the effect of the relief as a trigger. 
    Finally, although the results are preliminary, the authors note their applicability for detecting thresholds of different rainfall behaviour and its spatial distribution, in order to estimate indicators for water management.
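    The rescaling step described above can be sketched with a toy 5-minute record (the values below are invented illustration data, not the SAIH measurements): coarser-scale series are built by summing consecutive ticks, after which indicators such as cumulative rainfall and maximum intensity can be recomputed at each scale.

    ```python
    def rescale(record, factor):
        """Sum consecutive groups of `factor` ticks into one coarser tick."""
        return [sum(record[i:i + factor]) for i in range(0, len(record), factor)]

    # one day of 5-minute totals (mm) containing a single short burst
    five_min = [0.0] * 100 + [2.5, 4.0, 1.5] + [0.0] * 185

    for label, factor in [("5 min", 1), ("15 min", 3), ("1 h", 12), ("24 h", 288)]:
        scaled = rescale(five_min, factor)
        total = sum(scaled)                          # cumulative rainfall
        peak_rate = max(scaled) / (5 * factor / 60.0)  # max intensity, mm/h
        print(label, total, round(peak_rate, 2))
    ```

    Cumulative rainfall is invariant under rescaling, while peak intensity (per unit time) drops as the interval grows, which is the concentration effect the abstract describes.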

  18. 13C metabolic flux analysis at a genome-scale.

    PubMed

    Gopalakrishnan, Saratram; Maranas, Costas D

    2015-11-01

    Metabolic models used in 13C metabolic flux analysis generally include a limited number of reactions primarily from central metabolism. They typically omit degradation pathways, complete cofactor balances, and atom transition contributions for reactions outside central metabolism. This study addresses the impact on prediction fidelity of scaling up mapping models to a genome scale. The core mapping model employed in this study accounts for 75 reactions and 65 metabolites, primarily from central metabolism. The genome-scale metabolic mapping model (GSMM) (697 reactions and 595 metabolites) is constructed using as a basis the iAF1260 model upon eliminating reactions guaranteed not to carry flux based on growth and fermentation data for a minimal glucose growth medium. Labeling data for 17 amino acid fragments obtained from cells fed with glucose labeled at the second carbon were used to obtain fluxes and ranges. Metabolic fluxes and confidence intervals are estimated, for both core and genome-scale mapping models, by minimizing the sum of squared differences between predicted and experimentally measured labeling patterns using the EMU decomposition algorithm. Overall, we find that both the topology and the estimated values of the metabolic fluxes remain largely consistent between the core and GSM models. Stepping up to a genome-scale mapping model leads to wider flux inference ranges for 20 key reactions present in the core model. The glycolysis flux range doubles due to the possibility of active gluconeogenesis, the TCA flux range expands by 80% due to the availability of a bypass through arginine consistent with labeling data, and the transhydrogenase reaction flux is essentially unresolved due to the presence of as many as five routes for the inter-conversion of NADPH to NADH afforded by the genome-scale model. By globally accounting for ATP demands in the GSMM, the unused ATP decreases drastically, with the lower bound matching the maintenance ATP requirement. 
    A non-zero flux for the arginine degradation pathway was identified to meet biomass precursor demands as detailed in the iAF1260 model. Inferred ranges for 81% of the reactions in the genome-scale metabolic (GSM) model varied by less than one-tenth of the basis glucose uptake rate (95% confidence test). This is because as many as 411 reactions in the GSM are growth-coupled, meaning that the single measurement of biomass formation rate locks in their flux values. This implies that an accurate biomass formation rate and composition are critical for resolving metabolic fluxes away from central metabolism, and it suggests the importance of biomass composition (re)assessment under different genetic and environmental backgrounds. In addition, the loss of information associated with mapping fluxes from MFA on a core model to a GSM model is quantified. PMID:26358840
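    The flux-fitting principle used here can be illustrated with a deliberately tiny stand-in: pick the flux split that minimizes the sum of squared differences between predicted and measured labeling fractions, then report the range of splits whose error stays under a cutoff as a stand-in for a confidence interval. The two-pathway model, numbers, and cutoff below are invented; the paper's actual procedure uses EMU decomposition on the full network.

    ```python
    def predicted_labeling(split):
        # hypothetical model: pathway A (fraction `split`) transmits the label,
        # pathway B scrambles part of it; returns two labeling fractions
        return [split * 1.0 + (1 - split) * 0.5,
                split * 0.2 + (1 - split) * 0.6]

    measured = [0.80, 0.36]   # invented "measurements" consistent with split = 0.6
    grid = [i / 1000 for i in range(1001)]
    sse = {f: sum((p - m) ** 2 for p, m in zip(predicted_labeling(f), measured))
           for f in grid}
    best = min(sse, key=sse.get)              # best-fit flux split
    cutoff = min(sse.values()) + 0.001        # arbitrary acceptance threshold
    interval = [f for f in grid if sse[f] <= cutoff]
    lo, hi = min(interval), max(interval)     # inferred flux range
    ```

    Widening the network (more alternative routes) flattens the error surface, which is exactly why the genome-scale model yields wider inference ranges than the core model.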

  19. Anomaly Detection in Multiple Scale for Insider Threat Analysis

    SciTech Connect

    Kim, Yoohwan; Sheldon, Frederick T; Hively, Lee M

    2012-01-01

    We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided by semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to more significant and suspicious events. Then we build personal activity profiles in the form of score tables. Profiles are created at multiple scales, where the low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created at different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally, the subjects are ranked by their anomaly scores to help the analysts focus on high-scored subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we show the framework of the proposed system and the operational algorithms.
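    The profile-comparison and ranking step can be sketched as below; the scoring scheme (a z-score of today's activity against each subject's history) and the data are assumptions for illustration, not the paper's semantic scoring rules.

    ```python
    from statistics import mean, stdev

    # historical daily activity scores per subject (invented numbers)
    history = {
        "alice": [10, 12, 11, 9, 10, 12, 11],
        "bob":   [20, 22, 21, 19, 20, 21, 22],
    }
    today = {"alice": 11, "bob": 45}   # bob's activity jumps sharply

    def anomaly_score(subject):
        """Distance of today's score from the historical profile, in
        units of the historical standard deviation."""
        h = history[subject]
        return abs(today[subject] - mean(h)) / (stdev(h) or 1.0)

    # rank subjects so analysts can focus on the highest-scored ones
    ranked = sorted(today, key=anomaly_score, reverse=True)
    ```

    In the full system the same comparison would be repeated at day, week, and month scales and aggregated up the subject/object hierarchy.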

  20. Isogeometric analysis based on scaled boundary finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Lin, G.; Hu, Z. Q.

    2010-06-01

    This paper presents a new approach which possesses the semi-analytical feature of the scaled boundary finite element method and the exact-geometry feature of isogeometric analysis. NURBS basis functions are employed to construct an exact boundary geometry. The domain boundary is discretized by NURBS curves for the 2D case and NURBS surfaces for the 3D case; closed-form NURBS curves or surfaces are needed in particular if there are no side-faces. The strategy of using finite elements on the domain boundary with NURBS shape functions for the approximation of both boundary geometry and displacements arises from the isoparametric concept. With h-, p- and k-refinement strategies implemented, the geometry is refined while remaining exact at all levels, so it stays exactly as imported from the CAD system without the need for subsequent communication with the CAD system. Additionally, a numerical example shows that the flexible continuity within the NURBS patch, in contrast to traditional shape functions, improves the continuity and accuracy of the derived stress and strain fields across not only boundary elements but also domain elements, as a result of the combination of the intrinsic analytical property along the radial direction and the higher continuity of the NURBS basis; i.e., it is more accurate and less DOF-consuming than either the traditional finite element method or the scaled boundary finite element method.

  1. Scaling law analysis of paraffin thin films on different surfaces

    NASA Astrophysics Data System (ADS)

    Dotto, M. E. R.; Camargo, S. S.

    2010-01-01

    The dynamics of paraffin deposit formation on different surfaces was analyzed based on scaling laws. Carbon-based films were deposited onto silicon (Si) and stainless steel substrates from methane (CH4) gas using radio-frequency plasma-enhanced chemical vapor deposition. The different substrates were characterized with respect to their surface energy by contact angle measurements, surface roughness, and morphology. Paraffin thin films were obtained by the casting technique and were subsequently characterized by an atomic force microscope in noncontact mode. The results indicate that the morphology of paraffin deposits is strongly influenced by the substrates used. Scaling law analysis for coated substrates presents two distinct dynamics: a local roughness exponent (αlocal) associated with short-range surface correlations and a global roughness exponent (αglobal) associated with long-range surface correlations. The local dynamics is described by the Wolf-Villain model, and the global dynamics by the Kardar-Parisi-Zhang model. A local correlation length (Llocal) defines the transition between the local and global dynamics, with Llocal approximately 700 nm, in accordance with the spacing of planes measured from atomic force micrographs. For uncoated substrates, the growth dynamics is related to the Edwards-Wilkinson model.
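    Extracting a roughness exponent from a height profile can be sketched as follows: compute the r.m.s. height fluctuation W(l) inside windows of size l and fit the slope of log W(l) versus log l. The synthetic linear ramp profile below is an assumption chosen so the expected exponent is known (α ≈ 1); real AFM data would replace it.

    ```python
    import math
    from statistics import pstdev

    # synthetic height profile: a linear ramp, for which W(l) ~ l (alpha = 1)
    profile = [0.01 * x for x in range(1024)]

    def width(h, l):
        """Mean r.m.s. height fluctuation over non-overlapping windows of size l."""
        ws = [pstdev(h[i:i + l]) for i in range(0, len(h) - l, l)]
        return sum(ws) / len(ws)

    sizes = [8, 16, 32, 64, 128]
    logs = [(math.log(l), math.log(width(profile, l))) for l in sizes]

    # least-squares slope of log W vs log l = roughness exponent alpha
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    alpha = (sum((x - mx) * (y - my) for x, y in logs)
             / sum((x - mx) ** 2 for x, _ in logs))
    ```

    On real data, a crossover in this log-log plot at some window size would mark the local correlation length separating the αlocal and αglobal regimes.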

  2. Parallel Index and Query for Large Scale Data Analysis

    SciTech Connect

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
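    The bitmap-index idea behind FastBit can be sketched in a few lines (this is an illustration of the general technique, not FastBit's actual API or encoding): one bitmap per value bin, with queries answered by bitwise operations on the bitmaps.

    ```python
    def build_bitmaps(values, bins):
        """One bitmap per half-open bin [lo, hi), stored as a Python int;
        bit r is set when row r falls in that bin."""
        bitmaps = {b: 0 for b in bins}
        for row, v in enumerate(values):
            for lo, hi in bins:
                if lo <= v < hi:
                    bitmaps[(lo, hi)] |= 1 << row
        return bitmaps

    energy = [0.1, 2.5, 7.0, 2.9, 8.8, 0.4]   # invented column of particle data
    bins = [(0, 1), (1, 5), (5, 10)]
    bm = build_bitmaps(energy, bins)

    # query "energy >= 1": OR the matching bin bitmaps, then list set bits
    hits_mask = bm[(1, 5)] | bm[(5, 10)]
    hits = [r for r in range(len(energy)) if hits_mask >> r & 1]
    ```

    Because queries reduce to bitwise AND/OR over precomputed bitmaps, this style of index parallelizes and scales well, which is what makes the hours-to-seconds speedup plausible.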

  3. Scaling law analysis of paraffin thin films on different surfaces

    SciTech Connect

    Dotto, M. E. R.; Camargo, S. S. Jr.

    2010-01-15

    The dynamics of paraffin deposit formation on different surfaces was analyzed based on scaling laws. Carbon-based films were deposited onto silicon (Si) and stainless steel substrates from methane (CH4) gas using radio-frequency plasma-enhanced chemical vapor deposition. The different substrates were characterized with respect to their surface energy by contact angle measurements, surface roughness, and morphology. Paraffin thin films were obtained by the casting technique and were subsequently characterized by an atomic force microscope in noncontact mode. The results indicate that the morphology of paraffin deposits is strongly influenced by the substrates used. Scaling law analysis for coated substrates presents two distinct dynamics: a local roughness exponent (αlocal) associated with short-range surface correlations and a global roughness exponent (αglobal) associated with long-range surface correlations. The local dynamics is described by the Wolf-Villain model, and the global dynamics by the Kardar-Parisi-Zhang model. A local correlation length (Llocal) defines the transition between the local and global dynamics, with Llocal approximately 700 nm, in accordance with the spacing of planes measured from atomic force micrographs. For uncoated substrates, the growth dynamics is related to the Edwards-Wilkinson model.

  4. Application of wavelet transforms to reservoir data analysis and scaling

    SciTech Connect

    Panda, M.N.; Mosher, C.; Chopra, A.K.

    1996-12-31

    General characterization of physical systems uses two aspects of data analysis methods: decomposition of empirical data to determine model parameters and reconstruction of the image using these characteristic parameters. Spectral methods, involving a frequency based representation of data, usually assume stationarity. These methods, therefore, extract only the average information and hence are not suitable for analyzing data with isolated or deterministic discontinuities, such as faults or fractures in reservoir rocks or image edges in computer vision. Wavelet transforms provide a multiresolution framework for data representation. They are a family of orthogonal basis functions that separate a function or a signal into distinct frequency packets that are localized in the time domain. Thus, wavelets are well suited for analyzing non-stationary data. In other words, a projection of a function or a discrete data set onto a time-frequency space using wavelets shows how the function behaves at different scales of measurement. Because wavelets have compact support, it is easy to apply this transform to large data sets with minimal computations. We apply the wavelet transforms to one-dimensional and two-dimensional permeability data to determine the locations of layer boundaries and other discontinuities. By binning in the time-frequency plane with wavelet packets, permeability structures of arbitrary size are analyzed. We also apply orthogonal wavelets for scaling up of spatially correlated heterogeneous permeability fields.
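    The key property exploited above, that wavelet detail coefficients localize isolated discontinuities such as layer boundaries, can be sketched with a single-level Haar transform on an invented 1-D permeability log (a minimal illustration, not the authors' wavelet-packet workflow).

    ```python
    import math

    def haar_detail(x):
        """Single-level Haar wavelet detail coefficients: large magnitudes
        flag sharp changes between neighbouring samples."""
        return [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2)
                for i in range(len(x) // 2)]

    # synthetic permeability log with one layer boundary between samples 6 and 7
    perm = [1.0] * 7 + [5.0] * 9
    detail = haar_detail(perm)

    # the boundary shows up as the pair with the largest detail magnitude
    boundary_pair = max(range(len(detail)), key=lambda i: abs(detail[i]))
    ```

    Within each constant layer the details vanish, so the transform is sparse; this compactness is also what makes wavelet-based upscaling of large permeability fields cheap.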

  5. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877

  6. Genome-scale network analysis of imprinted human metabolic genes.

    PubMed

    Sigurdsson, Martin I; Jamshidi, Neema; Jonsson, Jon J; Palsson, Bernhard O

    2009-01-01

    Systems analysis of metabolic network reconstructions can be used to calculate functional states or phenotypes. This provides tools to study the metabolic effects of genetic and epigenetic properties, such as dosage sensitivity. We used the genome-scale reconstruction of human metabolism (Recon 1) to analyze the effect of nine known or predicted imprinted genes on metabolic phenotypes. Simulations of maternal deletion of ATP10A indicated an anabolic metabolism consistent with the known clinical phenotype of obesity. The abnormal expression of the other genes affected fewer subsections of metabolism, consistent with a lack of established clinical phenotypes. We found that four of the nine genes had metabolic effects as predicted by Haig's parental conflict theory. PMID:19218833

  7. Large scale rigidity-based flexibility analysis of biomolecules.

    PubMed

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large-scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) working as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  8. Multidimensional scaling analysis of the dynamics of a country economy.

    PubMed

    Tenreiro Machado, J A; Mata, Maria Eugénia

    2013-01-01

    This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process. PMID:24294132
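    Classical (Torgerson) MDS, the standard route from a dissimilarity matrix to a low-dimensional map, can be sketched in pure Python: double-centre the squared-distance matrix and extract the leading eigenvector by power iteration. This is a generic illustration of the MDS machinery, not the authors' specific implementation, and the three-item dissimilarity matrix is invented.

    ```python
    import math

    def classical_mds_1d(D, iters=200):
        """1-D classical MDS: coordinates from the top eigenpair of the
        double-centred squared-distance matrix."""
        n = len(D)
        D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
        row = [sum(r) / n for r in D2]
        grand = sum(row) / n
        B = [[-0.5 * (D2[i][j] - row[i] - row[j] + grand) for j in range(n)]
             for i in range(n)]
        v = [float(i + 1) for i in range(n)]   # non-uniform start (B annihilates 1)
        for _ in range(iters):                 # power iteration for top eigenvector
            w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(B[i][j] * v[j] for j in range(n)) for i in range(n))
        return [math.sqrt(max(lam, 0.0)) * x for x in v]

    # three "periods" whose pairwise dissimilarities are exactly one-dimensional
    D = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
    coords = classical_mds_1d(D)
    ```

    In the paper's setting each item would be a historical period described by macroeconomic indicators, and clusters in the resulting map mark similar periods.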

  9. Large-scale quantitative analysis of painting arts.

    PubMed

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877

  10. Large-Scale Quantitative Analysis of Painting Arts

    NASA Astrophysics Data System (ADS)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.

  11. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large-scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) working as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  12. Multidimensional Scaling Analysis of the Dynamics of a Country Economy

    PubMed Central

    Mata, Maria Eugénia

    2013-01-01

    This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process. PMID:24294132

  13. Application of multi-dimensional discrimination diagrams and probability calculations to Paleoproterozoic acid rocks from Brazilian cratons and provinces to infer tectonic settings

    NASA Astrophysics Data System (ADS)

    Verma, Sanjeet K.; Oliveira, Elson P.

    2013-08-01

    In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013), obtained from linear discriminant analysis (LDA) of natural-logarithm-transformed ratios of major elements and of immobile major and trace elements in acid magmas, to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a more objective statistical method than the commonly used subjective method of determining the boundaries by eye. The use of major element data readjusted to 100% on an anhydrous basis from the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlight the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), which were all previously suggested to have been emplaced in post-collision to within-plate settings. 
    The seventh case study, on the Cassiterita-Tabuões, Ritápolis, and São Tiago-Rezende Costa granites (south of São Francisco craton, Minas Gerais), showed a collision setting, which agrees reasonably well with the syn-collision tectonic setting indicated in the literature. A within-plate setting is suggested for the Serrinha magmatic suite, Mineiro belt (south of São Francisco craton, Minas Gerais), contrasting markedly with the arc setting suggested in the literature. The ninth case study, on the Rio Itapicuru granites and Rio Capim dacites (north of São Francisco craton, Serrinha block, Bahia), showed a continental arc setting. The tenth case study indicated a within-plate setting for the Rio dos Remédios volcanic rocks (São Francisco craton, Bahia), which is compatible with these rocks being the initial, rift-related igneous activity associated with the Chapada Diamantina cratonic cover. The eleventh, twelfth and thirteenth case studies, on the Bom Jesus-Areal granites, Rio Diamante-Rosilha dacite-rhyolite and Timbozal-Cantão granites (São Luís craton), showed continental arc, within-plate and collision settings, respectively. Finally, the last two case studies (fourteenth and fifteenth) showed a collision setting for the Caicó Complex and a continental arc setting for Algodões (Borborema province).

  14. The effect of noise on computer-aided measures of voice: a comparison of CSpeechSP and the Multi-Dimensional Voice Program software using the CSL 4300B Module and Multi-Speech for Windows.

    PubMed

    Carson, Cecyle Perry; Ingrisano, Dennis R S; Eggleston, K Donald

    2003-03-01

    The effect of noise on computer-derived samples of voice was compared across three different hardware/software configurations. The hardware/software systems included a stand-alone A/D converter (CSL Module 4300B) coupled to a custom Pentium PC used in conjunction with the Multi-Dimensional Voice Program (MDVP) software, and a Creative Labs A/D converter coupled to the same custom PC under software control of MDVP/Multi-Speech and CSpeechSP. Voice samples were taken from 10 female subjects, then mixed with computer fan noise to create three different signal-to-noise (S/N) levels. The mixed signals were analyzed on the three hardware/software systems. Results revealed that fundamental frequency was most resistant to the degrading effect of noise across systems; jitter and shimmer values, however, were more variable across all configurations. Jitter and shimmer values were significantly higher under certain S/N levels for the MDVP 4300B-based system as compared to MDVP for Multi-Speech and CSpeechSP. The findings underscore the need for sensitivity to recording environments, careful selection of hardware/software equipment arrays, and the establishment of minimal recording conditions (>25 dBA S/N) for voice sampling and analysis using computer-assisted methods. PMID:12705815
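    Constructing a controlled signal-to-noise mixture, as in the study's noise-mixing step, follows a standard recipe that can be sketched as below (the general r.m.s.-based method with invented signals, not the study's exact procedure or levels): scale the noise so the mixture hits a target S/N in dB.

    ```python
    import math
    import random

    def rms(x):
        """Root-mean-square level of a sample sequence."""
        return math.sqrt(sum(v * v for v in x) / len(x))

    def mix_at_snr(signal, noise, snr_db):
        """Scale `noise` so that rms(signal)/rms(noise) equals the target
        S/N in dB, then return the mixture and the scaled noise."""
        gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20.0))
        scaled = [gain * v for v in noise]
        return [s + n for s, n in zip(signal, scaled)], scaled

    random.seed(0)
    # stand-ins for a voice sample and fan noise at an 8 kHz sampling rate
    voice = [math.sin(2 * math.pi * 200 * t / 8000) for t in range(8000)]
    fan = [random.gauss(0.0, 1.0) for _ in range(8000)]
    mixed, scaled_noise = mix_at_snr(voice, fan, snr_db=25.0)
    achieved = 20 * math.log10(rms(voice) / rms(scaled_noise))
    ```

    The 25 dB target matches the minimal recording condition the study recommends.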

  15. Spatial data analysis for exploration of regional scale geothermal resources

    NASA Astrophysics Data System (ADS)

    Moghaddam, Majid Kiavarz; Noorollahi, Younes; Samadzadegan, Farhad; Sharifi, Mohammad Ali; Itoi, Ryuichi

    2013-10-01

    Defining a comprehensive conceptual model of the resources sought is one of the most important steps in geothermal potential mapping. In this study, Fry analysis as a spatial distribution method and the 5% well existence, distance distribution, weights of evidence (WofE), and evidential belief function (EBF) methods as spatial association methods were applied comparatively to known geothermal occurrences and to publicly available regional-scale geoscience data in Akita and Iwate provinces within the Tohoku volcanic arc, in northern Japan. Fry analysis and rose diagrams revealed similar directional patterns of geothermal wells and volcanoes, NNW-, NNE-, NE-trending faults, hot springs and fumaroles. Among the spatial association methods, WofE defined a conceptual model corresponding to the real-world situation, approved with the aid of expert opinion. The results of the spatial association analyses quantitatively indicated that the known geothermal occurrences are strongly spatially associated with geological features such as volcanoes, craters and NNW-, NNE-, NE-direction faults, and with geochemical features such as hot springs, hydrothermal alteration zones and fumaroles. Geophysical evidence comprises temperature gradients over 100 °C/km and heat flow over 100 mW/m2. In general, geochemical and geophysical data were better evidence layers than geological data for exploring geothermal resources. The spatial analyses of the case study area suggested that quantitative knowledge from hydrothermal geothermal resources is significantly useful for further exploration and for geothermal potential mapping in the case study region. The results can also be extended to regions with similar characteristics.
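    The weights-of-evidence (WofE) calculation quantifying such spatial associations can be sketched from a 2x2 cell count table; the counts below are invented illustration data, not the study's. W+ measures how much the presence of an evidence layer (e.g. proximity to a fault) raises the odds of a geothermal occurrence, and W- the converse.

    ```python
    import math

    def weights_of_evidence(n11, n10, n01, n00):
        """n11: cells with evidence and occurrence, n10: evidence only,
        n01: occurrence only, n00: neither.
        W+ = ln P(E|D)/P(E|not D),  W- = ln P(not E|D)/P(not E|not D)."""
        w_plus = math.log((n11 / (n11 + n01)) / (n10 / (n10 + n00)))
        w_minus = math.log((n01 / (n11 + n01)) / (n00 / (n10 + n00)))
        return w_plus, w_minus

    w_plus, w_minus = weights_of_evidence(n11=30, n10=200, n01=10, n00=1500)
    contrast = w_plus - w_minus   # large positive contrast = strong association
    ```

    Ranking evidence layers by contrast is one way to conclude, as the study does, that some layers (here geochemical and geophysical) discriminate better than others.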

  16. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in the initial or conceptual design stages of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load, and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency and geometric scale factors. The equivalent plate analysis is demonstrated on an aircraft wing without damage and another with damage. Both examples show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.

  17. Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis

    ERIC Educational Resources Information Center

    Chow, Kui Foon; Kennedy, Kerry John

    2014-01-01

    International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a

  18. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. IV. The neutrino signal

    SciTech Connect

    Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de

    2014-06-10

Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M☉, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M☉, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
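The idea behind the time-frequency diagnostic can be sketched with a toy detection-rate curve: a slowly decaying luminosity carrying a narrow-band, SASI-like modulation whose frequency is recovered from the detrended, windowed spectrum. This is a minimal single-window sketch (the paper uses a full wavelet spectrogram; all values here are illustrative, not IceCube data):

```python
import numpy as np

fs = 1000.0                       # Hz, assumed sampling rate of the binned rate
t = np.arange(0, 2.0, 1 / fs)
# Toy detection-rate curve: slowly decaying luminosity plus an 80 Hz SASI-like modulation
rate = 100 * (1 + 0.1 * np.sin(2 * np.pi * 80 * t)) * np.exp(-t / 5)

# Remove the slow luminosity trend with a 50 ms moving average, then window and FFT
detrended = rate - np.convolve(rate, np.ones(50) / 50, mode="same")
spectrum = np.abs(np.fft.rfft(detrended * np.hanning(len(detrended))))
freqs = np.fft.rfftfreq(len(detrended), 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak)  # → 80.0
```

Sliding this analysis over successive time windows would trace the frequency drift of a 'SASI chirp' as the shock expands or contracts.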

  19. An Analysis and Synthesis of Multiple Correspondence Analysis, Optimal Scaling, Dual Scaling, Homogeneity Analysis and Other Methods for Quantifying Categorical Multivariate Data.

    ERIC Educational Resources Information Center

    Tenenhaus, Michel; Young, Forrest W.

    1985-01-01

Several methods for quantifying categorical multivariate data (including multiple correspondence analysis and optimal scaling) are discussed and shown to be equivalent. The method can be applied to a subjects-by-variables matrix of categorical variables, a subjects-by-items matrix of multiple-choice data, or a multi-way contingency table. (NSF)

  20. Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.

    ERIC Educational Resources Information Center

    Faulstich, Michael E.; And Others

    1985-01-01

    Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)

  1. Large-scale reconstruction and phylogenetic analysis of metabolic environments

    PubMed Central

    Borenstein, Elhanan; Kupiec, Martin; Feldman, Marcus W.; Ruppin, Eytan

    2008-01-01

    The topology of metabolic networks may provide important insights not only into the metabolic capacity of species, but also into the habitats in which they evolved. Here we introduce the concept of a metabolic network's “seed set”—the set of compounds that, based on the network topology, are exogenously acquired—and provide a methodological framework to computationally infer the seed set of a given network. Such seed sets form ecological “interfaces” between metabolic networks and their surroundings, approximating the effective biochemical environment of each species. Analyzing the metabolic networks of 478 species and identifying the seed set of each species, we present a comprehensive large-scale reconstruction of such predicted metabolic environments. The seed sets' composition significantly correlates with several basic properties characterizing the species' environments and agrees with biological observations concerning major adaptations. Species whose environments are highly predictable (e.g., obligate parasites) tend to have smaller seed sets than species living in variable environments. Phylogenetic analysis of the seed sets reveals the complex dynamics governing gain and loss of seeds across the phylogenetic tree and the process of transition between seed and non-seed compounds. Our findings suggest that the seed state is transient and that seeds tend either to be dropped completely from the network or to become non-seed compounds relatively fast. The seed sets also permit a successful reconstruction of a phylogenetic tree of life. The “reverse ecology” approach presented lays the foundations for studying the evolutionary interplay between organisms and their habitats on a large scale. PMID:18787117
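The seed-set inference described above can be sketched computationally: in the condensation of the metabolic network into strongly connected components, seed compounds are those lying in "source" components with no incoming edges. Below is a minimal sketch on a hypothetical toy network; components are found by mutual reachability, which is fine at this scale (a real genome-scale network would use Tarjan's or Kosaraju's algorithm):

```python
def reachable(graph, start):
    """All nodes reachable from `start` by directed edges (iterative DFS)."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def seed_set(graph):
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    reach = {n: reachable(graph, n) for n in nodes}
    # Strongly connected components via mutual reachability
    comps = {frozenset(m for m in nodes if n in reach[m] and m in reach[n]) for n in nodes}
    seeds = set()
    for comp in comps:
        # A component is a source (its compounds are seeds) if no edge enters from outside
        incoming = any(n in graph.get(m, []) for n in comp for m in nodes - comp)
        if not incoming:
            seeds |= comp
    return seeds

# Hypothetical toy network: A and B interconvert and feed C; D independently feeds C
toy = {"A": ["B", "C"], "B": ["A"], "D": ["C"], "C": []}
print(sorted(seed_set(toy)))  # → ['A', 'B', 'D']
```

Here C is producible from the rest of the network, so only A, B, and D must be exogenously acquired.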

  2. Revision and Factor Analysis of a Death Anxiety Scale.

    ERIC Educational Resources Information Center

    Thorson, James A.; Powell, F. C.

    Earlier research on death anxiety using the 34-item scale developed by Nehrke-Templer-Boyar (NTB) indicated that females and younger persons have significantly higher death anxiety. To simplify a death anxiety scale for use with different age groups, and to determine the conceptual factors actually measured by the scale, a revised 25-item…

  3. Measuring Mathematics Anxiety: Psychometric Analysis of a Bidimensional Affective Scale

    ERIC Educational Resources Information Center

    Bai, Haiyan; Wang, LihShing; Pan, Wei; Frey, Mary

    2009-01-01

    The purpose of this study is to develop a theoretically and methodologically sound bidimensional affective scale measuring mathematics anxiety with high psychometric quality. The psychometric properties of a 14-item Mathematics Anxiety Scale-Revised (MAS-R) adapted from Betz's (1978) 10-item Mathematics Anxiety Scale were empirically analyzed on a

  4. A scaling analysis of thermoacoustic convection in a zero-gravity environment

    SciTech Connect

    Krane, R.J.; Parang, M.

    1982-01-01

    This paper presents a scaling analysis of a one-dimensional thermoacoustic convection heat transfer process in a zero-gravity environment. The relative importance of the terms in the governing equations is discussed for different time scales without attempting to solve the equations. The scaling analysis suggests certain generalizations that can be made in this class of heat transfer problems.

  5. Confirmatory Factor Analysis of the Educators' Attitudes toward Educational Research Scale

    ERIC Educational Resources Information Center

    Ozturk, Mehmet Ali

    2011-01-01

    This article reports results of a confirmatory factor analysis performed to cross-validate the factor structure of the Educators' Attitudes Toward Educational Research Scale. The original scale had been developed by the author and revised based on the results of an exploratory factor analysis. In the present study, the revised scale was given to…

  6. Large-Scale Candidate Gene Analysis of HDL Particle Features

    PubMed Central

    Kaess, Bernhard M.; Tomaszewski, Maciej; Braund, Peter S.; Stark, Klaus; Rafelt, Suzanne; Fischer, Marcus; Hardwick, Robert; Nelson, Christopher P.; Debiec, Radoslaw; Huber, Fritz; Kremer, Werner; Kalbitzer, Hans Robert; Rose, Lynda M.; Chasman, Daniel I.; Hopewell, Jemma; Clarke, Robert; Burton, Paul R.; Tobin, Martin D.

    2011-01-01

Background HDL cholesterol (HDL-C) is an established marker of cardiovascular risk with significant genetic determination. However, HDL particles are not homogeneous, and refined HDL phenotyping may improve insight into the regulation of HDL metabolism. We therefore assessed HDL particles by NMR spectroscopy and conducted a large-scale candidate gene association analysis. Methodology/Principal Findings We measured plasma HDL-C and determined mean HDL particle size and particle number by NMR spectroscopy in 2024 individuals from 512 British Caucasian families. Genotypes comprised 49,094 SNPs in >2,100 cardiometabolic candidate genes/loci as represented on the HumanCVD BeadChip version 2. False discovery rates (FDR) were calculated to account for multiple testing. Analyses on classical HDL-C revealed significant associations (FDR<0.05) only for CETP (cholesteryl ester transfer protein; lead SNP rs3764261: p = 5.6×10⁻¹⁵) and SGCD (sarcoglycan delta; rs6877118: p = 8.6×10⁻⁶). In contrast, analysis of HDL mean particle size yielded additional associations in LIPC (hepatic lipase; rs261332: p = 6.1×10⁻⁹), PLTP (phospholipid transfer protein; rs4810479: p = 1.7×10⁻⁸) and FBLN5 (fibulin-5; rs2246416: p = 6.2×10⁻⁶). The associations of SGCD and fibulin-5 with HDL particle size could not be replicated in PROCARDIS (n = 3,078) and/or the Women's Genome Health Study (n = 23,170). Conclusions We show that refined HDL phenotyping by NMR spectroscopy can detect known genes of HDL metabolism better than analyses of HDL-C. PMID:21283740
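The FDR correction mentioned above is typically the Benjamini-Hochberg step-up procedure; a minimal sketch, with illustrative p-values rather than the study's:

```python
import numpy as np

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    # q_i = p_(i) * n / i for the i-th smallest p-value
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    # Enforce monotonicity from the largest p-value downward
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out

p = [0.001, 0.008, 0.039, 0.041, 0.5]
q = [round(v, 3) for v in bh_fdr(p)]
print(q)  # → [0.005, 0.02, 0.051, 0.051, 0.5]
```

Associations would then be declared significant where the adjusted value falls below the chosen threshold (here, FDR < 0.05).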

  7. Heterogeneous reactions over fractal surfaces: A multifractal scaling analysis

    SciTech Connect

    Lee, Shyi-Long; Lee, Chung-Kung

    1996-12-31

Monte Carlo simulations of modified Eley-Rideal mechanisms possessing decay-type and enhancement-type sticking probabilities, as well as of a three-step catalytic reaction over fractal surfaces, were performed to examine the morphological effect on these surface reactions. Effects of the decay and enhancement profiles on the reaction probability distribution for Eley-Rideal reactions, as well as effects of varying the probabilities of the reaction steps and the cluster sizes on the normalized selectivity distribution for the three-step reaction, were then analyzed by multifractal scaling techniques. For the Eley-Rideal mechanism, the reaction probability distribution tends to become spatially uniform at fast decay rates and more concentrated at fast enhancement rates. For the three-step reaction, increasing the cluster size lowers the position sensitivity of the normalized selectivity distribution. A large dimerization-to-isomerization ratio increases the positional distinction among active sites when the adsorption probability equals 1. At small adsorption probabilities, the dimerization/isomerization ratio has no effect on the normalized selectivity distribution. The heterogeneity of the surfaces as reflected in the multifractal analysis will also be discussed.

  8. Full-scale testing and analysis of fuselage structure

    NASA Technical Reports Server (NTRS)

    Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.

    1994-01-01

    This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon where the crack trajectory is curved creating a 'flap' and resulting in a controlled decompression is discussed.

  9. MIXREGLS: A Program for Mixed-Effects Location Scale Analysis.

    PubMed

    Hedeker, Donald; Nordgren, Rachel

    2013-03-01

MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally distributed dependent variable. This model can be used for analysis of data in which subjects are measured on many occasions and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062

  10. MicroScale Thermophoresis: Interaction analysis and beyond

    NASA Astrophysics Data System (ADS)

Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis

    2014-12-01

MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During a MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications - such as determination of binding stoichiometries and binding modes, analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
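Kd quantification from an MST titration can be sketched with the simplest 1:1 binding model, where the fraction of labeled target bound is [L]/(Kd + [L]) when the ligand is in excess. The titration data and grid-search fit below are synthetic and purely illustrative:

```python
import numpy as np

def fraction_bound(conc, kd):
    """Simple 1:1 binding model (ligand in excess): fraction of labeled target bound."""
    return conc / (kd + conc)

# Hypothetical titration: 16 ligand concentrations spanning pM..µM, true Kd = 50 pM
conc = np.logspace(-12, -6, 16)            # molar
rng = np.random.default_rng(1)
signal = fraction_bound(conc, 50e-12) + rng.normal(0, 0.01, conc.size)

# Grid-search least-squares fit of Kd on a logarithmic scale
grid = np.logspace(-13, -5, 2000)
sse = [np.sum((signal - fraction_bound(conc, k)) ** 2) for k in grid]
kd_fit = grid[int(np.argmin(sse))]
print(f"{kd_fit:.2e}")  # close to the true value of 5e-11 M
```

In practice one would fit with a nonlinear least-squares routine and, for tight binding, use the quadratic (ligand-depletion) form instead of this hyperbolic approximation.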

  11. Numerical Simulation and Scaling Analysis of Cell Printing

    NASA Astrophysics Data System (ADS)

    Qiao, Rui; He, Ping

    2011-11-01

Cell printing, i.e., printing three dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use inkjet printers or similar devices to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquids. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of such stress is very short, which helps explain why many cells survive the cell printing process. These studies also revealed that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.
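The scaling analysis of droplet impact typically reduces to a few dimensionless groups: the Reynolds number (inertia vs. viscosity), the Weber number (inertia vs. surface tension), and an impact timescale d/u for the duration of the stress pulse. The parameter values below are assumed, order-of-magnitude illustrations, not the study's:

```python
# Order-of-magnitude scaling for a cell-laden droplet impact (assumed values)
rho = 1000.0      # kg/m^3, droplet density (water-like)
d = 50e-6         # m, droplet diameter, comparable to a mammalian cell
u = 5.0           # m/s, impact velocity
mu = 1e-3         # Pa*s, droplet viscosity
sigma = 0.07      # N/m, surface tension

Re = rho * u * d / mu          # inertia vs. viscosity
We = rho * u ** 2 * d / sigma  # inertia vs. surface tension
tau = d / u                    # s, characteristic duration of the impact stress pulse
print(round(Re), round(We, 1), round(tau * 1e6, 1))  # Re ≈ 250, We ≈ 17.9, tau ≈ 10 µs
```

The microsecond-scale tau is the quantitative form of the abstract's point: the impact stress, though large, acts too briefly to kill most cells.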

  12. Genome-scale metabolic models: reconstruction and analysis.

    PubMed

    Baart, Gino J E; Martens, Dirk E

    2012-01-01

Metabolism can be defined as the complete set of chemical reactions that occur in living organisms in order to maintain life. Enzymes are the main players in this process as they are responsible for catalyzing the chemical reactions. The enzyme-reaction relationships can be used for the reconstruction of a network of reactions, which leads to a model of the organism's metabolism. A genome-scale metabolic network of chemical reactions that take place inside a living organism is primarily reconstructed from the information that is present in its genome and the literature and involves steps such as functional annotation of the genome, identification of the associated reactions and determination of their stoichiometry, assignment of localization, determination of the biomass composition, estimation of energy requirements, and definition of model constraints. This information can be integrated into a stoichiometric model of metabolism that can be used for detailed analysis of the metabolic potential of the organism using constraint-based modeling approaches and hence is valuable in understanding its metabolic capabilities. PMID:21993642
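Once reconstructed, such a stoichiometric model is commonly analyzed with constraint-based methods such as flux balance analysis: maximize a biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on a hypothetical three-reaction network (the uptake bound is an assumed value):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy model: uptake -> A, A -> B, B -> biomass
# Columns: v_uptake, v_conv, v_biomass; rows: metabolites A, B (steady state S v = 0)
S = np.array([[1, -1, 0],
              [0, 1, -1]])
bounds = [(0, 10), (0, 100), (0, 100)]  # uptake capped at 10 mmol/gDW/h (assumed)
c = [0, 0, -1]                          # maximize biomass flux (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # → [10. 10. 10.]
```

As expected, the optimum is limited by the uptake bound: the whole pathway carries the maximum allowed flux of 10.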

  13. Preliminary Study of the CAS-LIBB Single-Particle Microbeam II Endstation: I. Proposed Multi-Dimensional Quantitative Fluorescence Microscopy

    NASA Astrophysics Data System (ADS)

    Hu, Zhiwen; Xu, Yongjian; Yu, Zengliang

    2006-05-01

The single-particle microbeam is a powerful tool that can open a research field to find answers to many enigmas in radiobiology. A single-particle microbeam facility has been constructed at the Key Laboratory of Ion Beam Bioengineering (LIBB), Chinese Academy of Sciences (CAS), China. However, there has been little research activity in this field concerning the original process of the interaction between low-energy ions and complicated organisms. To address this challenge, an in situ multi-dimensional quantitative fluorescence microscopy system combined with the CAS-LIBB single-particle microbeam II endstation is proposed. In this article, the rationale, logistics and development of many aspects of the proposed system are discussed.

  14. [Study on "multi-dimensional structure and process dynamics quality control system" of Danshen infusion solution based on component structure theory].

    PubMed

    Feng, Liang; Zhang, Ming-Hua; Gu, Jun-Fei; Wang, Gui-You; Zhao, Zi-Yu; Jia, Xiao-Bin

    2013-11-01

As traditional Chinese medicine (TCM) preparation products feature complex compounds and multiple preparation processes, the implementation of quality control in line with the characteristics of TCM preparation products provides a firm guarantee for their clinical efficacy and safety. Danshen infusion solution is a preparation commonly used in clinic, but its quality control is restricted to indexes of finished products, which cannot guarantee its inherent quality. Our study group has proposed a "multi-dimensional structure and process dynamics quality control system" on the basis of "component structure theory", for the purpose of controlling the quality of Danshen infusion solution at multiple levels and in multiple links, from the efficacy-related material basis, the safety-related material basis, and the characteristics of the dosage form to the preparation process. In this article, we bring forth new ideas and models for the quality control of TCM preparation products. PMID:24494543

  15. Nanometer to Centimeter Scale Analysis and Modeling of Pore Structures

    NASA Astrophysics Data System (ADS)

    Wesolowski, D. J.; Anovitz, L.; Vlcek, L.; Rother, G.; Cole, D. R.

    2011-12-01

    The microstructure and evolution of pore space in rocks is a critically important factor controlling fluid flow. The size, distribution and connectivity of these confined geometries dictate how fluids including H2O and CO2, migrate into and through these micro- and nano-environments, wet and react with the solid. (Ultra)small-angle neutron scattering and autocorrelations derived from BSE imaging provide a method of quantifying pore structures in a statistically significant manner from the nanometer to the centimeter scale. Multifractal analysis provides additional constraints. These methods were used to characterize the pore features of a variety of potential CO2 geological storage formations and geothermal systems such as the shallow buried quartz arenites from the St. Peter Sandstone and the deeper Mt. Simon quartz arenite in Ohio as well as the Eau Claire shale and mudrocks from the Cranfield MS CO2 injection test and the normal temperature and high-temperature vapor-dominated parts of the Geysers geothermal system in California. For example, analyses of samples of St. Peter sandstone show total porosity correlates with changes in pores structure including pore size ratios, surface fractal dimensions, and lacunarity. These samples contain significant large-scale porosity, modified by quartz overgrowths, and neutron scattering results show significant sub-micron porosity, which may make up fifty percent or more of the total pore volume. While previous scattering data from sandstones suggest scattering is dominated by surface fractal behavior, our data are both fractal and pseudo-fractal. The scattering curves are composed of steps, modeled as polydispersed assemblages of pores with log-normal distributions. In some samples a surface-fractal overprint is present. There are also significant changes in the mono and multifractal dimensions of the pore structure as the pore fraction decreases. 
There are strong positive correlations between D(0) and image and total scattering porosities, and strong negative correlations between these and multifractality, which increases as pore fraction decreases and the percent of (U)SANS porosity increases. Individual fractal dimensions at all q values from the BSE images decrease during silcrete formation. These data suggest that microporosity is more prevalent and may play a much more important role than previously thought in fluid/rock interactions in coarse-grained sandstone. Preliminary results from shale and mudrocks indicate there are dramatic differences not only in terms of total micro- to nano-porosity, but also in terms of pore surface fractal (roughness) and mass fractal (pore distributions) dimensions as well as size distributions. Information from imaging and scattering data can also be used to constrain computer-generated, random, three-dimensional porous structures. The results integrate various sources of experimental information and are statistically compatible with the real rock. This allows a more detailed multiscale analysis of structural correlations in the material. Acknowledgements. Research sponsored by the Division of Chemical Sciences, Geosciences and Biosciences, Office of Basic Energy Sciences, U.S. Department of Energy.

  16. Reliability and Validity Analysis of the Multiple Intelligence Perception Scale

    ERIC Educational Resources Information Center

    Yesil, Rustu; Korkmaz, Ozgen

    2010-01-01

    This study mainly aims to develop a scale to determine individual intelligence profiles based on self-perceptions. The study group consists of 925 students studying in various departments of the Faculty of Education at Ahi Evran University. A logical and statistical approach was adopted in scale development. Expert opinion was obtained for the

  17. Analysis of Hydrogen Depletion Using a Scaled Passive Autocatalytic Recombiner

    SciTech Connect

    Blanchat, T.K.; Malliakos, A.

    1998-10-28

Hydrogen depletion tests of a scaled passive autocatalytic recombiner (PAR) were performed in the Surtsey test vessel at Sandia National Laboratories (SNL). The experiments were used to determine the hydrogen depletion rate of a PAR in the presence of steam and also to evaluate the effect of scale (number of cartridges) on PAR performance at both low and high hydrogen concentrations.
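A PAR's depletion rate is often summarized with a first-order, well-mixed model, dC/dt = -(Q/V)·C, where Q is the effective volumetric flow processed by the recombiner and V the vessel free volume. All parameter values below are assumed for illustration and are not Surtsey test data:

```python
import math

# First-order depletion sketch for a well-mixed vessel (all values assumed)
V = 100.0        # m^3, vessel free volume
Q = 0.5          # m^3/s, effective volumetric flow through the recombiner
c0 = 0.08        # initial hydrogen mole fraction (8 vol%)

def h2_mole_fraction(t):
    """Hydrogen mole fraction after t seconds under first-order depletion."""
    return c0 * math.exp(-Q / V * t)

half_life = math.log(2) * V / Q   # time to halve the hydrogen concentration
print(round(half_life, 1))  # → 138.6 s
```

Fitting this exponential to measured concentration histories is one simple way to reduce such tests to a single depletion-rate parameter per configuration.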

  18. Confirmatory Factor Analysis of the Geriatric Depression Scale

    ERIC Educational Resources Information Center

    Adams, Kathryn Betts; Matto, Holly C.; Sanders, Sara

    2004-01-01

    Purpose: The Geriatric Depression Scale (GDS) is widely used in clinical and research settings to screen older adults for depressive symptoms. Although several exploratory factor analytic structures have been proposed for the scale, no independent confirmation has been made available that would enable investigators to confidently identify scores

  19. Regional Scale Analysis of Extremes in an SRM Geoengineering Simulation

    NASA Astrophysics Data System (ADS)

    Muthyala, R.; Bala, G.

    2014-12-01

Only a few studies in the past have investigated the statistics of extreme events under geoengineering. In this study, a global climate model is used to investigate the impact of solar radiation management on extreme precipitation events at the regional scale. The solar constant was reduced by 2.25% to counteract the global mean surface temperature change caused by a doubling of CO2 (2XCO2) from its preindustrial control value. Using daily precipitation rates, extreme events are defined as those which exceed the 99.9th percentile precipitation threshold. Extremes are substantially reduced in the geoengineering simulation: the magnitude of change is much smaller than in a simulation with doubled CO2. Regional analysis over 22 Giorgi land regions is also performed. Doubling of CO2 leads to an increase in the intensity of extreme (99.9th percentile) precipitation by 17.7% on a global-mean basis, with a maximum increase in intensity over the South Asian region of 37%. In the geoengineering simulation, there is a global-mean reduction in intensity of 3.8%, with a maximum reduction over the Tropical Ocean of 8.9%. Further, we find that the doubled-CO2 simulation shows an increase in the frequency of extremes (>50 mm/day) by 50-200%, with a global mean increase of 80%. In contrast, in the geoengineered climate there is a decrease in the frequency of extreme events by 20% globally, with a larger decrease over the Tropical Ocean of 30%. In both climate states (2XCO2 and geoengineering) the change in "extremes" is always greater than the change in "means" over large domains. We conclude that changes in precipitation extremes are larger in the 2XCO2 scenario compared to the preindustrial climate, while extremes decline slightly in the geoengineered climate. We are also investigating the changes in extreme statistics for daily maximum and minimum temperature, evapotranspiration and vegetation productivity. Results will be presented at the meeting.
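The percentile-threshold definition of extremes used above can be sketched directly: the 99.9th-percentile threshold is computed from the control run, and frequency changes in other climate states are counted against that fixed threshold. The synthetic precipitation series and the crude 15% intensity scaling below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily precipitation (mm/day): 50 years of a skewed, gamma-like climate
control = rng.gamma(shape=0.5, scale=6.0, size=50 * 365)
warmer = control * 1.15   # crude stand-in for an intensified hydrological cycle

threshold = np.percentile(control, 99.9)   # extreme-event threshold from the control run
freq_ctl = np.mean(control > threshold)    # ≈ 0.001 by construction
freq_2x = np.mean(warmer > threshold)      # frequency against the same fixed threshold
print(round(freq_2x / freq_ctl, 2))
```

Because the threshold is fixed to the control climate, even a modest uniform intensification inflates the exceedance frequency disproportionately, which is why frequency changes (50-200%) dwarf intensity changes (17.7%) in the abstract.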

  20. GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY

    SciTech Connect

    Lee, S

    2008-05-28

Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other is for a negative temperature gradient between the roof inner surface and top grout surface temperatures, intended as the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that the ambient air comes into the vapor space of the vault through the lower-end ventilation hole and is heated up by a Bénard-cell-type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed here.

  1. A theoretical analysis of basin-scale groundwater temperature distribution

    NASA Astrophysics Data System (ADS)

    An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2015-03-01

    The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814

  2. Impact and fracture analysis of fish scales from Arapaima gigas.

    PubMed

    Torres, F G; Malásquez, M; Troncoso, O P

    2015-06-01

    Fish scales from the Amazonian fish Arapaima gigas have been characterised to study their impact and fracture behaviour at three different environmental conditions. Scales were cut in two different directions to analyse the influence of the orientation of collagen layers. The energy absorbed during impact tests was measured for each sample and SEM images were taken after each test in order to analyse the failure mechanisms. The results showed that scales tested at cryogenic temperatures display brittle behaviour, while scales tested at room temperature did not fracture. Different failure mechanisms have been identified, analysed and compared with the failure modes that occur in bone. The impact energy obtained for fish scales was two to three times higher than the values reported for bone in the literature. PMID:25842120

  3. Motivation and Engagement in the United States, Canada, United Kingdom, Australia, and China: Testing a Multi-Dimensional Framework

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Yu, Kai; Papworth, Brad; Ginns, Paul; Collie, Rebecca J.

    2015-01-01

    This study explored motivation and engagement among North American (the United States and Canada; n = 1,540), U.K. (n = 1,558), Australian (n = 2,283), and Chinese (n = 3,753) secondary school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale-High School (MES-HS). Confirmatory…

  5. Scale

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2009-01-01

    The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail

  6. Scaling parameters for PFBC cyclone separator system analysis

    SciTech Connect

    Gil, A.; Romeo, L.M.; Cortes, C.

    1999-07-01

    Laboratory-scale cold flow models have been used extensively to study the behavior of many installations. In particular, fluidized bed cold flow models have allowed developing the knowledge of fluidized bed hydrodynamics. In order for the results of the research to be relevant to commercial power plants, cold flow models must be properly scaled. Many efforts have been made to understand the performance of fluidized beds, but up to now no attention has been paid in developing the knowledge of cyclone separator systems. CIRCE has worked on the development of scaling parameters to enable laboratory-scale equipment operating at room temperature to simulate the performance of cyclone separator systems. This paper presents the simplified scaling parameters and experimental comparison of a cyclone separator system and a cold flow model constructed and based on those parameters. The cold flow model has been used to establish the validity of the scaling laws for cyclone separator systems and permits detailed room temperature studies (determining the filtration effects of varying operating parameters and cyclone design) to be performed in a rapid and cost effective manner. This valuable and reliable design tool will contribute to a more rapid and concise understanding of hot gas filtration systems based on cyclones. The study of the behavior of the cold flow model, including observation and measurements of flow patterns in cyclones and diplegs will allow characterizing the performance of the full-scale ash removal system, establishing safe limits of operation and testing design improvements.

  7. Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer

    NASA Astrophysics Data System (ADS)

    Schnieders, Jana; Garbe, Christoph

    2014-05-01

    The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic spatial scales. 2. The structure of these patterns changes with increasing wind stress and with surface conditions. In [2], turbulent cell sizes have been shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. The results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution, a new model to estimate the heat flux is applied that is based on the measured turbulent cell size and surface velocities. This approach allows a direct comparison of the net effect on heat flux of eddies of different sizes, and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed. In this contribution, we will quantify the effect of small-scale processes on interfacial transport and relate it to gas transfer. References: [1] T. G. Bell, W. De Bruyn, S. D. Miller, B. Ward, K. Christensen, and E. S. Saltzman. Air-sea dimethylsulfide (DMS) gas transfer in the North Atlantic: evidence for limited interfacial gas exchange at high wind speed. Atmos. Chem. Phys., 13:11073-11087, 2013. [2] J. Schnieders, C. S. Garbe, W. L. Peirson, and C. J. Zappa. Analyzing the footprints of near-surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.
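    The record above links transfer velocity to measured turbulent cell size and surface velocity. A common way to connect those quantities, sketched below, is a surface-renewal (Higbie penetration) parameterization with renewal timescale tau = cell size / surface velocity; the function name, this timescale choice, and the numerical values are illustrative assumptions, not the authors' model.

```python
import math

def transfer_velocity(D, cell_size, surface_velocity):
    """Surface-renewal (Higbie penetration) estimate of a transfer velocity.

    D               : molecular diffusivity of the tracer [m^2/s]
    cell_size       : turbulent cell length scale [m]
    surface_velocity: near-surface velocity scale [m/s]
    """
    tau = cell_size / surface_velocity       # assumed renewal timescale [s]
    return 2.0 * math.sqrt(D / (math.pi * tau))

# Heat in water (D ~ 1.4e-7 m^2/s), 1 cm cells renewed at 1 cm/s.
k_heat = transfer_velocity(1.4e-7, 0.01, 0.01)
```

    Because heat diffuses faster than dissolved gases, the same renewal timescale yields a larger transfer velocity for heat; converting between the two is exactly the kind of comparison the record describes.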

  8. A Critical Analysis of the Concept of Scale Dependent Macrodispersivity

    NASA Astrophysics Data System (ADS)

    Zech, A.; Attinger, S.; Cvetkovic, V.; Dagan, G.; Dietrich, P.; Fiori, A.; Rubin, Y.; Teutsch, G.

    2014-12-01

    Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity αL (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally αL is scale dependent, increasing with the travel distance L of the plume centroid and eventually stabilizing at a constant value (Fickian regime). It was surmised in the literature that αL(L) scales up with travel distance following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al, 1988; Neuman, 1990; Xu and Eckstein, 1995; Schulze-Makuch, 2005) by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al, 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992; Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999) because they presumably offer a convenient prediction tool with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g. Neuman, 1990 and 2010; Hunt et al, 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported αL values. Our investigation concludes that transport, and particularly αL(L), is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g. the spatial distribution of hydraulic conductivity, and the use of adequate models.
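    The definition in the record, macrodispersivity as half the rate of change of the plume's second spatial moment with travel distance, can be turned into a small numerical sketch. The variance curve below is synthetic (invented purely for illustration, not field data); the point is only how an apparent, scale-dependent alpha_L emerges from moment data.

```python
import numpy as np

# Synthetic (invented) plume data: travel distance of the centroid [m]
# and the longitudinal second spatial moment (variance) of the plume [m^2].
L = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
var = 0.2 * L + 0.01 * L**2 / (1.0 + 0.1 * L)

# Apparent longitudinal macrodispersivity: half the rate of change of the
# second spatial moment with travel distance, alpha_L = 0.5 * d(sigma^2)/dL.
alpha_L = 0.5 * np.gradient(var, L)
```

    For this synthetic curve alpha_L grows with travel distance, mimicking the pre-asymptotic behavior the record discusses before a Fickian plateau is reached.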

  9. A Critical Analysis of the Concept of Scale Dependent Macrodispersivity

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Cvetkovic, Vladimir; Dagan, Gedeon; Dietrich, Peter; Fiori, Aldo; Rubin, Yoram; Teutsch, Georg

    2015-04-01

    Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity αL (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally αL is scale dependent, increasing with the travel distance L of the plume centroid and eventually stabilizing at a constant value (Fickian regime). It was surmised in the literature that αL scales up with travel distance L following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al, 1988; Neuman, 1990; Xu and Eckstein, 1995; Schulze-Makuch, 2005) by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al, 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992; Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999) because they presumably offer a convenient prediction tool with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g. Neuman, 1990 and 2010; Hunt et al, 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported αL values. Our investigation concludes that transport, and particularly αL, is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g. the spatial distribution of hydraulic conductivity, and the use of adequate models.

  10. Multidimensional analysis of the large-scale segregation of luminosity

    SciTech Connect

    Dominguez-Tenreiro, R.; Martinez, V.J.

    1989-04-01

    The multidimensional or multifractal formalism has been applied to analyze the CfA catalog. The spectrum of scaling indices and the generalized dimensions D(q) have been found to be scale-invariant in certain scaling regions. This invariance means that the multidimensional formalism is a good tool to characterize galaxy distributions. By means of this formalism it has been found that CfA galaxies brighter than about M(c) = -20 (H0 = 100 km/s/Mpc) are more clustered than fainter galaxies. 42 refs.
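    The generalized dimensions D(q) mentioned in the record are commonly estimated by box counting over the Rényi moments of box occupation probabilities. The sketch below is not the CfA analysis itself; it is a minimal single-scale estimator (in practice eps is varied over a scaling range and D(q) read off a log-log fit), demonstrated on synthetic uniform points where D(q) is close to the embedding dimension 2.

```python
import numpy as np

def generalized_dimension(points, q, eps):
    """Box-counting estimate of the Renyi dimension D(q), for q != 1.

    points : (n, d) array of positions in the unit cube [0, 1)^d
    q      : moment order
    eps    : box edge length (in practice varied over a scaling range)
    """
    boxes = np.floor(points / eps).astype(int)          # box index per point
    _, counts = np.unique(boxes, axis=0, return_counts=True)
    p = counts / counts.sum()                           # occupation probabilities
    return np.log(np.sum(p ** q)) / ((q - 1.0) * np.log(eps))

# Uniform random points in the unit square should give D(q) close to 2.
rng = np.random.default_rng(0)
pts = rng.random((20000, 2))
d2 = generalized_dimension(pts, q=2.0, eps=0.05)
```

    For a clustered distribution like the galaxy catalog, D(q) falls below the embedding dimension and varies with q, which is what makes the multifractal spectrum informative.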

  11. Lattice analysis for the energy scale of QCD phenomena.

    PubMed

    Yamamoto, Arata; Suganuma, Hideo

    2008-12-12

    We formulate a new framework in lattice QCD to study the relevant energy scale of QCD phenomena. By considering the Fourier transformation of the link variable, we can investigate the intrinsic energy scale of a physical quantity nonperturbatively. This framework is broadly available for all lattice QCD calculations. We apply this framework to the quark-antiquark potential and meson masses in quenched lattice QCD. The gluonic energy scale relevant for confinement is found to be less than 1 GeV in the Landau or Coulomb gauge. PMID:19113611

  12. Analysis and measurement about atmospheric turbulent outer scale

    NASA Astrophysics Data System (ADS)

    Zong, Fei; Hu, Yuehong; Chang, Jinyong; Feng, Shuanglian; Qiang, Xiwen

    2015-10-01

    The objective of this tutorial is to introduce an optical method of measuring the atmospheric turbulent outer scale. The method utilizes the ratio between the correlation functions of beam wandering in two perpendicular planes. A simple relationship to obtain the outer scale from the measured correlation functions is established for a particular model of turbulence, the modified von Karman model. Based on this result, an implementation scheme for measuring the atmospheric turbulent outer scale optically is designed, and the plan of the experimental system is given. By simplifying the model, the calculation program for the measurement is also written.
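    The modified von Kármán model named in the record has a standard spectral form in the optical-turbulence literature: the refractive-index spectrum is Kolmogorov-like in the inertial range but rolls off at the outer-scale wavenumber k0 = 2*pi/L0 and is cut off at the inner-scale wavenumber km = 5.92/l0. The sketch below implements that textbook form; the function name and parameter values are illustrative assumptions, not the record's measurement code.

```python
import math

def modified_von_karman(kappa, Cn2, L0, l0):
    """Modified von Karman spectrum of refractive-index fluctuations.

    kappa : spatial wavenumber [rad/m]
    Cn2   : refractive-index structure constant [m^(-2/3)]
    L0    : outer scale [m]
    l0    : inner scale [m]
    """
    k0 = 2.0 * math.pi / L0          # wavenumber of the outer scale
    km = 5.92 / l0                   # wavenumber of the inner scale
    return (0.033 * Cn2 * math.exp(-(kappa / km) ** 2)
            / (kappa ** 2 + k0 ** 2) ** (11.0 / 6.0))

# Illustrative values: moderate turbulence, 10 m outer scale, 5 mm inner scale.
phi = modified_von_karman(10.0, 1e-14, 10.0, 0.005)
```

    The outer scale L0 controls where the spectrum flattens at low wavenumbers, which is why outer-scale effects appear in quantities like beam wander that are dominated by large eddies.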

  13. MDI-GPU: accelerating integrative modelling for genomic-scale data using GP-GPU computing.

    PubMed

    Mason, Samuel A; Sayyid, Faiz; Kirk, Paul D W; Starr, Colin; Wild, David L

    2016-03-01

    The integration of multi-dimensional datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct - but often complementary - information. However, the large amount of data adds burden to any inference task. Flexible Bayesian methods may reduce the necessity for strong modelling assumptions, but can also increase the computational burden. We present an improved implementation of a Bayesian correlated clustering algorithm, that permits integrated clustering to be routinely performed across multiple datasets, each with tens of thousands of items. By exploiting GPU based computation, we are able to improve runtime performance of the algorithm by almost four orders of magnitude. This permits analysis across genomic-scale data sets, greatly expanding the range of applications over those originally possible. MDI is available here: http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. PMID:26910751

  14. An item response theory analysis of the Olweus Bullying scale.

    PubMed

    Breivik, Kyrre; Olweus, Dan

    2014-12-01

    In the present article, we used IRT (graded response) modeling as a useful technology for a detailed and refined study of the psychometric properties of the various items of the Olweus Bullying scale and the scale itself. The sample consisted of a very large number of Norwegian 4th-10th grade students (n = 48 926). The IRT analyses revealed that the scale was essentially unidimensional and had excellent reliability in the upper ranges of the latent bullying tendency trait, as intended and desired. Gender DIF effects were identified with regard to girls' use of indirect bullying by social exclusion and boys' use of physical bullying by hitting and kicking, but these effects were small and worked in opposite directions, having negligible effects at the scale level. Scale scores adjusted for DIF effects also differed very little from non-adjusted scores. In conclusion, the empirical data were well characterized by the chosen IRT model and the Olweus Bullying scale was considered well suited for the conduct of fair and reliable comparisons involving different gender-age groups. Aggr. Behav. 9999:XX-XX, 2014. © 2014 Wiley Periodicals, Inc. PMID:25460720
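    The graded response model used in the record defines, for each item, cumulative curves P(X >= k) as logistic functions of the latent trait, with category probabilities given by differences of adjacent curves. A minimal sketch of that computation follows; the item parameters are invented for illustration and are not the Olweus scale's estimates.

```python
import math

def grm_category_probs(theta, a, b):
    """Category response probabilities under Samejima's graded response model.

    theta : latent trait value
    a     : item discrimination
    b     : ordered list of K-1 category thresholds
    Returns K probabilities, one per ordered response category.
    """
    # Cumulative curves P(X >= k), padded with the boundary values 1 and 0.
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    # Each category probability is the difference of adjacent cumulative curves.
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]

# Illustrative item: 4 response categories, thresholds at -1.0, 0.5, 1.5.
probs = grm_category_probs(theta=0.0, a=1.5, b=[-1.0, 0.5, 1.5])
```

    Measurement precision statements like those in the record come from the item information implied by these curves, which peaks near the thresholds b.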

  15. A Factor Analysis of the Laurelton Self-Concept Scale. Volume 1, Number 14.

    ERIC Educational Resources Information Center

    Harrison, Robert H.; Budoff, Milton

    Items from the Laurelton Self Concept Scale (LSCS) and the Locus of Control Scale for Children were administered to 172 male and female educable mental retardates to examine the LSCS by R factor analysis. It was found that the Self Concept Scale is factor analyzable when appropriately administered to educables. The small factors grouped into…

  16. Refining a self-assessment of informatics competency scale using Mokken scaling analysis.

    PubMed

    Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne

    2015-11-01

    Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions. PMID:26652630

  17. Rating Scale Analysis and Psychometric Properties of the Caregiver Self-Efficacy Scale for Transfers

    ERIC Educational Resources Information Center

    Cipriani, Daniel J.; Hensen, Francine E.; McPeck, Danielle L.; Kubec, Gina L. D.; Thomas, Julie J.

    2012-01-01

    Parents and caregivers faced with the challenges of transferring children with disability are at risk of musculoskeletal injuries and/or emotional stress. The Caregiver Self-Efficacy Scale for Transfers (CSEST) is a 14-item questionnaire that measures self-efficacy for transferring under common conditions. The CSEST yields reliable data and valid…

  18. Confirmatory Factor Analysis of the Scales for Diagnosing Attention Deficit Hyperactivity Disorder (SCALES)

    ERIC Educational Resources Information Center

    Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.

    2010-01-01

    The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…

  20. The Kids' Empathic Development Scale (KEDS): A Multi-Dimensional Measure of Empathy in Primary School-Aged Children

    ERIC Educational Resources Information Center

    Reid, Corinne; Davis, Helen; Horlin, Chiara; Anderson, Mike; Baughman, Natalie; Campbell, Catherine

    2013-01-01

    Empathy is an essential building block for successful interpersonal relationships. Atypical empathic development is implicated in a range of developmental psychopathologies. However, assessment of empathy in children is constrained by a lack of suitable measurement instruments. This article outlines the development of the Kids' Empathic

  2. Review of multi-dimensional large-scale kinetic simulation and physics validation of ion acceleration in relativistic laser-matter interaction

    SciTech Connect

    Wu, Hui-Chun; Hegelich, B.M.; Fernandez, J.C.; Shah, R.C.; Palaniyappan, S.; Jung, D.; Yin, L; Albright, B.J.; Bowers, K.; Huang, C.; Kwan, T.J.

    2012-06-19

    Two new experimental technologies enabled realization of the Break-Out Afterburner (BOA): the high-quality Trident laser and free-standing carbon nm-targets. VPIC is a powerful tool for fundamental research of relativistic laser-matter interaction. Predictions from VPIC have been validated for the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates computation) has been optimized extensively for many different supercomputers. Laser-driven ions bring promising applications closer to realization: ion-based fast ignition, active interrogation, and hadron therapy.

  3. TMFA: A FORTRAN Program for Three-Mode Factor Analysis and Individual Differences Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Redfield, Joel

    1978-01-01

    TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)
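    TMFA itself is a FORTRAN program, but the multidimensional scaling step it builds on can be illustrated compactly. Below is a minimal classical (Torgerson) MDS sketch, not the TMFA implementation: double-center the squared distance matrix, then embed with the leading eigenvectors. The function name and the three-point example are assumptions for illustration.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) multidimensional scaling.

    D : (n, n) symmetric matrix of pairwise distances.
    Returns an (n, k) configuration whose Euclidean distances
    approximate the entries of D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # keep the k largest
    scale = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scale

# Three collinear points: a 1-D embedding reproduces their spacing exactly.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
```

    Individual-differences (INDSCAL-style) scaling, as in TMFA, generalizes this by fitting a common configuration plus per-subject dimension weights across several such distance matrices.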

  4. Stochastic averaging and sensitivity analysis for two scale reaction networks.

    PubMed

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G

    2016-02-21

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process. PMID:26896973

  5. 'Scaling' analysis of the ice accretion process on aircraft surfaces

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.; Tabrizi, A. H.; Missimer, J. R.

    1982-01-01

    A comprehensive set of scaling parameters is developed for the ice accretion process by analyzing the energy equations of the dynamic freezing zone and the already frozen ice layer, the continuity equation associated with supercooled liquid droplets entering into and impacting within the dynamic freezing zone, and the energy equation of the ice layer. No initial arbitrary judgments are made regarding the relative magnitudes of each of the terms. The method of intrinsic reference variables is employed in order to develop the appropriate scaling parameters and their relative significance in rime icing conditions in an orderly process, rather than utilizing empiricism. The significance of these parameters is examined and the parameters are combined with scaling criteria related to droplet trajectory similitude.

  6. Stochastic averaging and sensitivity analysis for two scale reaction networks

    NASA Astrophysics Data System (ADS)

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.

    2016-02-01

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.

  7. Scales

    ScienceCinema

    Murray Gibson

    2010-01-08

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

  8. Scales

    SciTech Connect

    Murray Gibson

    2007-04-27

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
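    The shared-harmonic claim in these two records can be checked numerically. In the sketch below, with an assumed reference frequency of 261.63 Hz for C4 (an illustrative value, not from the records), the 3rd harmonic of C coincides with the 2nd harmonic of the G a 3/2 ratio above it.

```python
# Two notes a perfect fifth apart have frequencies in the ratio 3:2, so the
# 3rd harmonic of the lower note coincides with the 2nd harmonic of the upper.
f_C = 261.63              # C4, illustrative tuning [Hz]
f_G = f_C * 3.0 / 2.0     # a just perfect fifth above

harmonics_C = {round(f_C * n, 2) for n in range(1, 7)}
harmonics_G = {round(f_G * n, 2) for n in range(1, 7)}
shared = harmonics_C & harmonics_G   # common harmonics, since 3*f_C == 2*f_G
```

    For a 5/4 ratio (E/C) the coincidence is instead between the 5th harmonic of C and the 4th harmonic of E, which is why simpler ratios give stronger consonance: the shared harmonic appears earlier in both series.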

  9. SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS

    SciTech Connect

    Itamura, Michael T.; Ho, Clifford K.

    1998-06-04

    The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.

  10. A Factor Analysis of the Research Self-Efficacy Scale.

    ERIC Educational Resources Information Center

    Bieschke, Kathleen J.; And Others

    Counseling professionals' and counseling psychology students' interest in performing research seems to be waning. Identifying the impediments to graduate students' interest and participation in research is important if systematic efforts to engage them in research are to succeed. The Research Self-Efficacy Scale (RSES) was designed to measure…

  11. Introducing Scale Analysis by Way of a Pendulum

    ERIC Educational Resources Information Center

    Lira, Ignacio

    2007-01-01

    Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale

  12. Analysis of geomechanical behavior for the drift scale test

    SciTech Connect

    Blair, S C; Carlson, S R; Wagoner, J L

    2000-11-17

    The Yucca Mountain Site Characterization Project is conducting a drift scale heater test, known as the Drift Scale Test (DST), in an alcove of the Exploratory Studies Facility at Yucca Mountain, Nevada. The DST is a large-scale, long-term thermal test designed to investigate coupled thermal-mechanical-hydrological-chemical behavior in a fractured, welded tuff rock mass. The general layout of the DST is shown in Figure 1a, along with the locations of several of the boreholes being used to monitor deformation during the test. Electric heaters are being used to heat a planar region of rock that is approximately 50 m long and 27 m wide for 4 years, followed by 4 years of cooling. Both in-drift and ''wing'' heaters are being used to heat the rock. The heating portion of the DST was started in December, 1997, and the target drift wall temperature of 200 C was reached in summer 2000. A drift-scale distinct element model (DSDE) is being used to analyze the geomechanical response of the rock mass forming the DST. The distinct element method was chosen to permit explicit modeling of fracture deformations. Shear deformations and normal mode opening of fractures are expected to increase fracture permeability and thereby alter thermal-hydrologic behavior in the DST. This paper will describe the DSDE model and present preliminary results, including comparison of simulated and observed deformations, at selected locations within the test.

  13. Translation Analysis at the Genome Scale by Ribosome Profiling.

    PubMed

    Baudin-Baillieu, Agnès; Hatin, Isabelle; Legendre, Rachel; Namy, Olivier

    2016-01-01

    Ribosome profiling is an emerging approach that uses deep sequencing of the mRNA fragments protected by the ribosome to study protein synthesis at the genome scale. This approach provides new insights into gene regulation at the translational level. In this review we describe the protocol for preparing polysomes and extracting ribosome-protected fragments prior to deep sequencing. PMID:26483019

  14. Confirmatory Factor Analysis of the Work Locus of Control Scale

    ERIC Educational Resources Information Center

    Oliver, Joseph E.; Jose, Paul E.; Brough, Paula

    2006-01-01

    Original formulations of the Work Locus of Control Scale (WLCS) proposed a unidimensional structure of this measure; however, more recently, evidence for a two-dimensional structure has been reported, with separate subscales for internal and external loci of control. The current study evaluates the one- and two-factor models with confirmatory

  15. THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS

    EPA Science Inventory

    Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...

  16. A Multi-Dimensional Approach to Gradient Change in Phonological Acquisition: A Case Study of Disordered Speech Development

    ERIC Educational Resources Information Center

    Glaspey, Amy M.; MacLeod, Andrea A. N.

    2010-01-01

    The purpose of the current study is to document phonological change from a multidimensional perspective for a 3-year-old boy with phonological disorder by comparing three measures: (1) accuracy of consonant productions, (2) dynamic assessment, and (3) acoustic analysis. The methods included collecting a sample of the targets /s, [image omitted],

  18. Large-scale computations in analysis of structures

    SciTech Connect

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in advancing nonlinear analysis techniques and characterizing nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years in programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  19. SU-E-T-472: A Multi-Dimensional Measurements Comparison to Analyze a 3D Patient Specific QA Tool

    SciTech Connect

    Ashmeg, S; Jackson, J; Zhang, Y; Oldham, M; Yin, F; Ren, L

    2014-06-01

    Purpose: To quantitatively evaluate a 3D patient-specific QA tool using 2D film and 3D Presage dosimetry. Methods: A brain IMRT case was delivered to Delta4, EBT2 film, and a Presage plastic dosimeter. The film was inserted in solid water slabs at 7.5 cm depth for measurement. The Presage dosimeter was inserted into a head phantom for 3D dose measurement. Delta4's Anatomy software was used to calculate the corresponding dose to the film in the solid water slabs and to Presage in the head phantom. The results from Anatomy were compared to both the calculated results from Eclipse and the measured doses from film and Presage to evaluate its accuracy. Using RIT software, we compared the “Anatomy” dose to the EBT2 film measurement and the film measurement to the ECLIPSE calculation. For 3D analysis, the DICOM file from “Anatomy” was extracted and imported into CERR software, which was used to compare the Presage dose to both the “Anatomy” and ECLIPSE calculations. Gamma criteria of 3%-3mm and 5%-5mm were used for comparison. Results: Gamma passing rates of film vs “Anatomy”, “Anatomy” vs ECLIPSE, and film vs ECLIPSE were 82.8%, 70.9% and 87.6%, respectively, with the 3%-3mm criteria. When the criteria were changed to 5%-5mm, the passing rates became 87.8%, 76.3% and 90.8%, respectively. For 3D analysis, “Anatomy” vs ECLIPSE showed gamma passing rates of 86.4% and 93.3% for 3%-3mm and 5%-5mm, respectively. The rate was 77.0% for the Presage vs ECLIPSE analysis. The “Anatomy” vs ECLIPSE comparisons were absolute dose comparisons, whereas the film and Presage analyses were relative comparisons. Conclusion: The results show a higher passing rate in 3D than in 2D in the “Anatomy” software. This could be due to the higher degrees of freedom in 3D than in 2D for gamma analysis.
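
    The gamma criteria quoted above combine a dose-difference tolerance with a distance-to-agreement tolerance. As a rough illustration only (a brute-force 1D global gamma pass rate on a hypothetical profile, not the RIT or CERR implementation), the criterion can be sketched as:

    ```python
    import numpy as np

    def gamma_pass_rate(ref, meas, coords, dose_tol=0.03, dist_tol=3.0):
        """Brute-force 1D global gamma pass rate: dose_tol is a fraction of the
        maximum reference dose, dist_tol is in the units of coords (e.g. mm)."""
        norm = dose_tol * ref.max()
        passed = 0
        for x, d in zip(coords, meas):
            # gamma at a measurement point: minimum over all reference points
            g2 = ((coords - x) / dist_tol) ** 2 + ((ref - d) / norm) ** 2
            passed += g2.min() <= 1.0
        return passed / len(coords)

    coords = np.linspace(0.0, 50.0, 101)                 # positions in mm
    ref = 2.0 * np.exp(-((coords - 25.0) / 10.0) ** 2)   # hypothetical dose profile
    rate = gamma_pass_rate(ref, ref * 1.02, coords)      # 2% scaled "measurement"
    ```

    A uniform 2% dose error stays inside the 3%-3mm tolerance, so every point passes; production tools do the same search in 2D/3D with interpolation.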

  20. Characterizing multi-dimensionality of urban sprawl in Jamnagar, India using multi-date remote sensing data

    NASA Astrophysics Data System (ADS)

    Jain, G.; Sharma, S.; Vyas, A.; Rajawat, A. S.

    2014-11-01

    This study attempts to measure and characterize urban sprawl along its multiple dimensions in Jamnagar city, India. The study utilized multi-date satellite images acquired by the CORONA, IRS 1D PAN & LISS-3, IRS P6 LISS-4 and Resourcesat-2 LISS-4 sensors. The extent of urban growth in the study area was mapped at 1 : 25,000 scale for the years 1965, 2000, 2005 and 2011. The growth of urban areas was further categorized into infill growth, expansion and leapfrog development. The city grew at 1.60 % per annum during 2000-2011, whereas population growth over the same period was below 1.0 % per annum. New development during 2000-2005 comprised 22 % infill development, 60 % extension of the peripheral urbanized areas, and 18 % leapfrog development. During 2005-2011, however, the proportion of leapfrog development increased to 28 %, while infill development declined to 9 % owing to the decreasing availability of developable area within the city. The urban sprawl of Jamnagar city was further characterized on the basis of five dimensions of sprawl, viz. population density, continuity, clustering, concentration and centrality, by integrating population data with the sprawl mapping for the years 2001 and 2011. The study characterized the growth of Jamnagar as low-density, low-concentration outward expansion.
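
    The per-annum growth figures quoted above are compound rates between two observations. With purely hypothetical numbers (not the study's data), the arithmetic is:

    ```python
    def annual_growth_rate(v_start, v_end, years):
        """Compound annual growth rate between two observations."""
        return (v_end / v_start) ** (1.0 / years) - 1.0

    # hypothetical built-up areas in km^2 over an 11-year interval
    rate = annual_growth_rate(100.0, 119.1, 11)   # roughly 1.6% per annum
    ```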

  1. Scale Issues in Remote Sensing: A Review on Analysis, Processing and Modeling

    PubMed Central

    Wu, Hua; Li, Zhao-Liang

    2009-01-01

    With the development of quantitative remote sensing, scale issues have attracted increasing attention from scientists. Research now suffers from a severe scale discrepancy between data sources and the models used. Consequently, both data interpretation and model application become difficult due to these scale issues. Effectively scaling remotely sensed information across scales has therefore become one of the most important research focuses of remote sensing. The aim of this paper is to demonstrate scale issues from the points of view of analysis, processing and modeling and to provide technical assistance when facing scale issues in remote sensing. The definition of scale and relevant terminology are given in the first part of this paper. Then, the main causes of scale effects and the effects of scaling on measurements, retrieval models and products are reviewed and discussed. Ways to describe the scale threshold and scale domain are briefly discussed. Finally, general scaling methods, in particular up-scaling methods, are compared and summarized in detail. PMID:22573986
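
    The simplest up-scaling scheme such reviews compare is aggregation of fine-resolution pixels into coarser cells. A minimal sketch (block averaging on a plain 2D array; real up-scaling may instead weight by a sensor's point spread function):

    ```python
    import numpy as np

    def upscale(raster, factor):
        """Up-scale a 2D raster by block averaging: each coarse cell is the
        mean of a factor-by-factor block of fine cells (edges trimmed to fit)."""
        h, w = raster.shape
        h2, w2 = h - h % factor, w - w % factor
        blocks = raster[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
        return blocks.mean(axis=(1, 3))

    coarse = upscale(np.arange(16, dtype=float).reshape(4, 4), 2)
    ```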

  2. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  3. Wavelet multiscale analysis for Hedge Funds: Scaling and strategies

    NASA Astrophysics Data System (ADS)

    Conlon, T.; Crane, M.; Ruskin, H. J.

    2008-09-01

    The wide acceptance of Hedge Funds by Institutional Investors and Pension Funds has led to an explosive growth in assets under management. These investors are drawn to Hedge Funds by the seemingly low correlation with traditional investments and the attractive returns. The correlations and market risk (the Beta in the Capital Asset Pricing Model) of Hedge Funds are generally calculated using monthly returns data, which may produce misleading results as Hedge Funds often hold illiquid exchange-traded securities or difficult-to-price over-the-counter securities. In this paper, the Maximum Overlap Discrete Wavelet Transform (MODWT) is applied to measure the scaling properties of Hedge Fund correlation and market risk with respect to the S&P 500. It is found that the level of correlation and market risk varies greatly according to the strategy studied and the time scale examined. Finally, the effects of these scaling properties on the risk profile of a portfolio of Hedge Funds are studied using correlation matrices calculated over different time horizons.
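
    The scale-dependent correlations described above come from comparing wavelet detail coefficients of two return series level by level. A simplified Haar-based stand-in for the MODWT, on synthetic returns (the paper's exact transform and fund data are not reproduced here):

    ```python
    import numpy as np

    def haar_detail(x, level):
        """Haar-style detail coefficients at one dyadic scale, via circular
        moving averages (a simplified stand-in for MODWT details)."""
        n = 2 ** (level - 1)
        x = np.asarray(x, dtype=float)
        pad = np.concatenate([x, x[:2 * n]])                 # circular boundary
        avg = np.convolve(pad, np.ones(n) / n, mode='valid')
        return (avg[:len(x)] - avg[n:n + len(x)]) / 2.0

    def scale_correlation(x, y, level):
        """Correlation of two series restricted to one wavelet scale."""
        dx, dy = haar_detail(x, level), haar_detail(y, level)
        return np.corrcoef(dx, dy)[0, 1]

    rng = np.random.default_rng(0)
    market = rng.standard_normal(256)               # synthetic index returns
    fund = 0.5 * market + rng.standard_normal(256)  # partially correlated fund
    rho = scale_correlation(fund, market, level=3)
    ```

    Repeating this over levels gives a correlation-versus-scale profile per strategy, which is the quantity the study examines.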

  4. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    SciTech Connect

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  5. A WENO-limited, ADER-DT, finite-volume scheme for efficient, robust, and communication-avoiding multi-dimensional transport

    NASA Astrophysics Data System (ADS)

    Norman, Matthew R.

    2014-10-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  6. Time-splitting pseudo-spectral domain decomposition method for the soliton solutions of the one- and multi-dimensional nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Taleei, Ameneh; Dehghan, Mehdi

    2014-06-01

    In this paper, we study the simulation of the nonlinear Schrödinger equation in one, two and three dimensions. The proposed method is based on a time-splitting method that decomposes the original problem into two parts, a linear equation and a nonlinear equation. The linear equation in one dimension is approximated with the Chebyshev pseudo-spectral collocation method in the space variable and the Crank-Nicolson method in time, while the nonlinear equation with constant coefficients can be solved exactly. As the goal of the present paper is to study the nonlinear Schrödinger equation on a large finite domain, we propose a domain decomposition method. In comparison with single-domain methods, multi-domain methods can produce a sparse differentiation matrix with less memory and fewer computations. In this study, we choose an overlapping multi-domain scheme. By applying the alternating direction implicit technique, we extend this efficient method to solve the nonlinear Schrödinger equation in both two and three dimensions, while for the solution at each time step it only needs to solve a sequence of linear partial differential equations in one dimension. Several examples of one- and multi-dimensional nonlinear Schrödinger equations are presented to demonstrate the high accuracy and capability of the proposed method. Some numerical experiments are reported which show that this scheme preserves the conservation laws of charge and energy.
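
    The splitting idea can be illustrated in a few lines. The sketch below uses Strang splitting with a Fourier pseudo-spectral linear step on a periodic box rather than the paper's Chebyshev collocation with domain decomposition, and it assumes the standard focusing convention i u_t = -u_xx/2 - |u|^2 u, whose soliton u_0 = sech(x) has a stationary modulus:

    ```python
    import numpy as np

    def nls_strang(u, dx, dt, steps):
        """Strang splitting for i u_t = -u_xx/2 - |u|^2 u: the nonlinear
        substep is solved exactly (pure phase rotation), the linear substep
        exactly in Fourier space."""
        k = 2 * np.pi * np.fft.fftfreq(len(u), d=dx)
        lin = np.exp(-0.5j * k ** 2 * dt)                # linear propagator
        for _ in range(steps):
            u = u * np.exp(0.5j * np.abs(u) ** 2 * dt)   # half nonlinear step
            u = np.fft.ifft(lin * np.fft.fft(u))         # full linear step
            u = u * np.exp(0.5j * np.abs(u) ** 2 * dt)   # half nonlinear step
        return u

    # soliton initial condition on a large periodic box
    x = np.linspace(-20.0, 20.0, 512, endpoint=False)
    u0 = 1.0 / np.cosh(x)
    u1 = nls_strang(u0.astype(complex), x[1] - x[0], 1e-3, 1000)
    ```

    Both substeps are unitary, so the discrete charge (L2 norm) is conserved to round-off, mirroring the conservation property reported in the abstract.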

  7. Neuron-Glia Interaction as a Possible Glue to Translate the Mind-Brain Gap: A Novel Multi-Dimensional Approach Toward Psychology and Psychiatry

    PubMed Central

    Kato, Takahiro A.; Watabe, Motoki; Kanba, Shigenobu

    2013-01-01

    Neurons and synapses have long been the dominant focus of neuroscience, and thus the pathophysiology of psychiatric disorders has come to be understood within the neuronal doctrine. However, the majority of cells in the brain are not neurons but glial cells, including astrocytes, oligodendrocytes, and microglia. Traditionally, neuroscientists regarded glial functions as simply providing physical support and maintenance for neurons; confined to this limited role, glia were long ignored. Recently, glial functions have been gradually investigated, and increasing evidence suggests that glial cells perform important roles in various brain functions. Uncovering glial functions, furthering our understanding of these crucial cells, and probing the interaction between neurons and glia may shed new light on many unknown aspects, including the mind-brain gap and conscious-unconscious relationships. We briefly review the current state of glial research in the field and propose a novel translational research program with a multi-dimensional model, combining experimental approaches such as animal studies, in vitro and in vivo neuron-glia studies, a variety of human brain imaging investigations, and psychometric assessments. PMID:24155727

  8. Neuron-glia interaction as a possible glue to translate the mind-brain gap: a novel multi-dimensional approach toward psychology and psychiatry.

    PubMed

    Kato, Takahiro A; Watabe, Motoki; Kanba, Shigenobu

    2013-01-01

    Neurons and synapses have long been the dominant focus of neuroscience, and thus the pathophysiology of psychiatric disorders has come to be understood within the neuronal doctrine. However, the majority of cells in the brain are not neurons but glial cells, including astrocytes, oligodendrocytes, and microglia. Traditionally, neuroscientists regarded glial functions as simply providing physical support and maintenance for neurons; confined to this limited role, glia were long ignored. Recently, glial functions have been gradually investigated, and increasing evidence suggests that glial cells perform important roles in various brain functions. Uncovering glial functions, furthering our understanding of these crucial cells, and probing the interaction between neurons and glia may shed new light on many unknown aspects, including the mind-brain gap and conscious-unconscious relationships. We briefly review the current state of glial research in the field and propose a novel translational research program with a multi-dimensional model, combining experimental approaches such as animal studies, in vitro and in vivo neuron-glia studies, a variety of human brain imaging investigations, and psychometric assessments. PMID:24155727

  9. Two scale analysis applied to low permeability sandstones

    NASA Astrophysics Data System (ADS)

    Davy, Catherine; Song, Yang; Nguyen Kim, Thang; Adler, Pierre

    2015-04-01

    Low permeability materials are often composed of several pore structures of various scales superposed on one another, and it is often impossible to measure and determine the macroscopic properties in a single step. In the low permeability sandstones that we consider, the pore space is essentially made of micro-cracks between grains. These fissures are two-dimensional structures whose aperture is roughly of the order of one micron. On the grain scale, i.e., on the scale of 1 mm, the fissures form a network. These two structures can be measured using two different tools [1]. The density of the fissure networks is estimated by trace measurements on two-dimensional images provided by classical 2D Scanning Electron Microscopy (SEM) with a pixel size of 2.2 microns. The three-dimensional geometry of the fissures is measured by X-ray micro-tomography (micro-CT) in the laboratory, with a voxel size of 0.6 x 0.6 x 0.6 microns^3. The macroscopic permeability is calculated in two steps. On the small scale, the fracture transmissivity is calculated by solving the Stokes equation on several portions of the fissures measured by micro-CT. On the large scale, the density of the fissures is estimated by three different means, based on the number of intersections with scanlines, on the surface density of fissures, and on the number of intersections between fissures per unit surface. These three means show that the network is relatively isotropic, and they provide very close estimates of the density. Then, a general formula derived from systematic numerical computations [2] is used to derive the macroscopic dimensionless permeability, which is proportional to the fracture transmissivity. The combination of the two previous results yields the dimensional macroscopic permeability, which is found to be in acceptable agreement with the experimental measurements. Some extensions of this preliminary work will be presented as a tentative conclusion. References: [1] Z. Duan, C. A. Davy, F. Agostini, L. Jeannin, D. Troadec, F. Skoczylas, Hydraulic cut-off and gas recovery potential of sandstones from Tight Gas Reservoirs: a laboratory investigation, International Journal of Rock Mechanics and Mining Science, Vol. 65, pp. 75-85, 2014. [2] P.M. Adler, J.-F. Thovert, V.V. Mourzenko, Fractured Porous Media, Oxford University Press, 2012.

  10. Dimensionality of the Hospital Anxiety and Depression Scale (HADS) in Cardiac Patients: Comparison of Mokken Scale Analysis and Factor Analysis

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Sijtsma, Klaas; Pedersen, Susanne S.

    2012-01-01

    The Hospital Anxiety and Depression Scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken…

  12. Field-aligned currents' scale analysis performed with the Swarm constellation

    NASA Astrophysics Data System (ADS)

    Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter

    2015-01-01

    We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale FACs, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistence period of order 10 s for small-scale FACs, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
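
    The persistence periods quoted above come from correlating repeated measurements at orbital crossovers. As a loose stand-in on synthetic AR(1) data (not Swarm data), one can estimate a persistence time as the lag where the sample autocorrelation first drops below 1/e:

    ```python
    import numpy as np

    def persistence_time(series, dt, thresh=np.exp(-1.0)):
        """Lag at which the sample autocorrelation first drops below 1/e."""
        x = np.asarray(series, dtype=float)
        x = x - x.mean()
        acf = np.correlate(x, x, mode='full')[len(x) - 1:]
        acf = acf / acf[0]
        below = np.nonzero(acf < thresh)[0]
        return below[0] * dt if below.size else len(x) * dt

    rng = np.random.default_rng(0)
    x = np.zeros(4000)
    for i in range(1, len(x)):          # AR(1) with phi = 0.9: true 1/e time
        x[i] = 0.9 * x[i - 1] + rng.standard_normal()   # is about 9.5 steps
    tau = persistence_time(x, dt=1.0)
    ```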

  13. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture boulder velocities in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated; they differ mainly in the way the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed along the slope in advance. The individual scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster is scaled using the four targets nearest to the trajectory to be analyzed (identified using the frontal video), and the average of their scaling factors is taken as the final scaling factor. (iii) As in (ii), but the scaling factor for a trajectory is calculated by balancing the mean scaling factors of the two nearest and the two farthest targets in relation to their mean distance from the analyzed trajectory. (iv) As in (iii), but with scaling factors that vary along the trajectory. A direct measure of the scaling target and nearest impact zone proved the most accurate; if a constant plane is assumed, lateral deviations of the boulder from the fall line are not accounted for, which adds error to the analysis. Thus a combination of scaling methods (i) and (iv) is considered to give the best results. For best results regarding the rough lateral positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical polynomial trend under gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with error small enough for trajectory analysis; however, when instantaneous velocities are calculated, this error is amplified to an unacceptable level. A regression analysis of the data is helpful for optimizing the trajectory and velocity estimates.
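
    The free-fall consistency check mentioned above (comparing the vertical component over time with the theoretical trend under gravity) amounts to fitting a quadratic in time and comparing its curvature with -g/2. A sketch on synthetic tracked positions (not the experimental data):

    ```python
    import numpy as np

    g = 9.81                                   # m/s^2
    t = np.linspace(0.0, 1.0, 25)              # frame times in s
    z = 10.0 + 0.3 * t - 0.5 * g * t ** 2      # hypothetical vertical positions
    a, b, c = np.polyfit(t, z, 2)              # fit z = a t^2 + b t + c
    g_est = -2.0 * a                           # recovered gravitational acceleration
    ```

    A large discrepancy between g_est and 9.81 m/s^2 would indicate a scaling error in the video calibration.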

  14. Effect of a multi-dimensional flux function on the monotonicity of Euler and Navier-Stokes computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Van Leer, Bram; Roe, Philip L.

    1991-01-01

    A limiting method has been devised for a grid-independent flux function for use with the two-dimensional Euler and Navier-Stokes equations. This limiting is derived from a monotonicity analysis of the model and allows for solutions with reduced oscillatory behavior while still maintaining sharper resolution than a grid-aligned method. In addition to capturing oblique waves sharply, the grid-independent flux function also reduces the entropy generated over an airfoil in an Euler computation and reduces pressure distortions in the separated boundary layer of a viscous-flow airfoil computation. The model has also been extended to three dimensions, although no angle-limiting procedure for improving monotonicity characteristics has been incorporated.

  15. Construction of a multi-dimensional Lagrangian hydrodynamics model of a plasma using the Lie group theory of point transformations

    NASA Astrophysics Data System (ADS)

    Temple, Brian Allen

    Conservative macroscopic governing equations for a plasma in the magnetohydrodynamic (MHD) approximation were derived from the symmetry properties of the microscopic action integral. Invariance of the action integral under translation and rotation symmetries was verified. Noether's Theorem was used to express the invariance equations as two sets of microscopic equations that represent the Euler-Lagrange (E-L) equations for the plasma and an equation in divergence form, called a conservation law, that is the first integral of the Euler-Lagrange equations. The specific forms of the conservation laws were determined from the invariance properties admitted by the action integral. Invariance under translations gave the conservation law for the translational momentum balance, while invariance under rotations gave the conservation law for the angular momentum balance. The ensemble average of the microscopic equations was taken to give the kinetic representation of the equations. The momentum integrals in the kinetic equation were evaluated to give the fluid representation of the system. The fluid representation was then expressed in the MHD limit to give the one-fluid representation for the plasma. The total derivatives in the conservation laws were evaluated for the kinetic and fluid representations to verify that the expressions are first integrals of the respective E-L equations. The symmetry properties of the conservation laws in auxiliary form were determined to test the system of equations for mapping properties that may allow the nonlinear conservation laws to be expressed as nonlinear or linear expressions. The results showed that no nonlinear-to-linear mapping was possible for the governing equations with charge distributions. The quasi-neutral governing equations admitted a scaling group that allows mapping from the source nonlinear equations to nonlinear target equations that contain one less independent variable. The translation conservation laws were used in a two-dimensional computer simulation of the confined eddy problem to demonstrate an application of the equations. Comparison of the results produced by the code using the conservation-law governing equations with previous work is limited to qualitative comparison, since differences in the numerical input and graphics software make direct quantitative comparison impossible. Qualitative comparison with previous work shows the results to be consistent.

  16. Fifteen new discriminant-function-based multi-dimensional robust diagrams for acid rocks and their application to Precambrian rocks

    NASA Astrophysics Data System (ADS)

    Verma, Surendra P.; Pandarinath, Kailasa; Verma, Sanjeet K.; Agrawal, Salil

    2013-05-01

    For the discrimination of four tectonic settings of island arc, continental arc, within-plate (continental rift and ocean island together), and collision, we present three sets of new diagrams obtained from linear discriminant analysis of natural-logarithm-transformed ratios of major elements, immobile major and trace elements, and immobile trace elements in acid magmas. The use of discordant outlier-free samples prior to linear discriminant analysis improved the success rates by about 3% on average. Success rates of these new diagrams were acceptably high (about 69% to 97% for the first set, about 69% to 99% for the second set, and about 60% to 96% for the third set). Testing of these diagrams on acid rock samples (not used for constructing them) from known tectonic settings confirmed their overall good performance. Application of these new diagrams to Precambrian case studies provided the following generally consistent results: a continental arc setting for the Caribou greenstone belt (Canada) at about 3000 Ma, the São Francisco craton (Brazil) at about 3085-2983 Ma, the Penakacherla greenstone terrane (Dharwar craton, India) at about 2700 Ma, and Adola (Ethiopia) at about 885-765 Ma; a transitional continental arc to collision setting for the Rio Maria terrane (Brazil) at about 2870 Ma and the Eastern felsic volcanic terrain (India) at about 2700 Ma; a collision setting for the Kolar suture zone (India) at about 2610 Ma and the Korpo area (Finland) at about 1852 Ma; and a within-plate (likely continental rift) setting for the Malani igneous suite (India) at about 745-700 Ma. These applications suggest the utility of the new discrimination diagrams for all four tectonic settings. In fact, all three sets of diagrams were shown to be robust against post-emplacement compositional changes caused by analytical errors, element mobility related to low- or high-temperature alteration, or Fe oxidation caused by weathering.
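
    The diagrams above are built from linear discriminant analysis of ln-transformed element ratios. A two-class Fisher discriminant on synthetic ln-ratio features gives the flavor of the approach (the paper's element ratios, discriminant coefficients, and four-class scheme are not reproduced here):

    ```python
    import numpy as np

    def fisher_lda(X0, X1):
        """Two-class Fisher discriminant: direction w maximizing between-class
        separation relative to within-class scatter, plus a midpoint threshold."""
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
              + np.cov(X1, rowvar=False) * (len(X1) - 1))
        w = np.linalg.solve(Sw, m1 - m0)
        thresh = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
        return w, thresh

    rng = np.random.default_rng(1)
    # hypothetical ln(ratio) features for two tectonic settings
    arc = rng.normal([-3.0, -8.0], 0.3, size=(50, 2))
    rift = rng.normal([-2.0, -7.0], 0.3, size=(50, 2))
    w, thresh = fisher_lda(arc, rift)
    rate = ((rift @ w > thresh).mean() + (arc @ w < thresh).mean()) / 2
    ```

    The "success rate" reported in the abstract is the analogous fraction of correctly classified samples, computed for multi-class LDA.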

  17. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to astonishing growth in both the volume and the complexity of biomedical data collected from various sources. This planet-scale data poses serious challenges for storage and computing technologies. Cloud computing is an attractive alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable. PMID:24288665

  18. A tree swaying in a turbulent wind: a scaling analysis.

    PubMed

    Odijk, Theo

    2015-01-01

    A tentative scaling theory is presented of a tree swaying in a turbulent wind. It is argued that the turbulence of the air within the crown is in the inertial regime. An eddy causes a dynamic bending response of the branches according to a time criterion. The resulting expression for the penetration depth of the wind yields an exponent which appears to be consistent with that pertaining to the morphology of the tree branches. An energy criterion shows that the dynamics of the branches is basically passive. The possibility of hydrodynamic screening by the leaves is discussed. PMID:25169247

  19. Analysis of World Economic Variables Using Multidimensional Scaling

    PubMed Central

    Machado, J.A. Tenreiro; Mata, Maria Eugénia

    2015-01-01

    Waves of globalization reflect the historical technical progress and modern economic growth. The dynamics of this process are here approached using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and tertiary education enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare, and human development of the world economy from 1977 up to 2012. The polarization dance of countries illuminates the convergence paths, potential warfare, and present-day rivalries in the global geopolitical scene. PMID:25811177
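    As a minimal illustration of the MDS step, scikit-learn's metric MDS can embed a small indicator matrix into two dimensions. The three-country indicator values below are invented, not the paper's data.

```python
# Hedged sketch: embed objects described by several indicators into 2-D so
# that inter-point distances approximate the original dissimilarities.
import numpy as np
from sklearn.manifold import MDS

# Rows: hypothetical countries; columns: GDP per capita, trade openness,
# life expectancy, tertiary enrollment (all rescaled to [0, 1], invented).
indicators = np.array([
    [1.0, 0.2, 0.7, 0.3],
    [0.9, 0.3, 0.8, 0.4],
    [0.2, 0.6, 0.5, 0.1],
])
embedding = MDS(n_components=2, random_state=0).fit_transform(indicators)
```

    Tracking such embeddings year by year gives the country trajectories whose convergence and polarization the abstract describes.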

  20. Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis

    PubMed Central

    Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.

    2015-01-01

    Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are on a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of the diffusing biomolecules of interest or photochemical properties of the fluorescent tag, such as the mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to the best of our knowledge, the validity of these various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading, and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223
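    The ordering issue can be made concrete with a toy example: correct an idealized recovery curve for photofading first, then apply an affine normalization mapping the post-bleach intensity to 0 and the pre-bleach intensity to 1. All rates and intensities below are invented for illustration, not taken from the study.

```python
# Toy FRAP trace: exponential recovery, multiplicative photofading during
# acquisition, then correction and normalization in a compatible order.
import numpy as np

t = np.linspace(0.0, 10.0, 101)
recovery = 0.4 + 0.5 * (1.0 - np.exp(-0.8 * t))  # idealized true signal
fade = np.exp(-0.05 * t)                          # acquisition photofading
observed = recovery * fade

corrected = observed / fade                       # undo photofading first
pre, post = 1.0, corrected[0]                     # pre-/post-bleach levels
normalized = (corrected - post) / (pre - post)    # affine normalization
```

    Reversing the order, i.e. normalizing the still-faded data and rescaling afterwards, mixes an affine and an exponential operation and is exactly the kind of incompatible sequence the study warns against.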

  1. Small-Scale Smart Grid Construction and Analysis

    NASA Astrophysics Data System (ADS)

    Surface, Nicholas James

    The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental, and engineering research by applying statistical knowledge and established theories to develop simulations, without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show data acquisition to be three times more expensive than the grid itself, mainly because roughly 70% of data acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend pure-sine-wave investment in future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. Development of a user-friendly bidirectional charger is highly recommended to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations, and pumped hydroelectric storage can also be researched on future iterations of the SSSG.

  2. Multi-dimensional modelling of electrostatic force distance curve over dielectric surface: Influence of tip geometry and correlation with experiment

    SciTech Connect

    Boularas, A.; Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.

    2014-08-28

    Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. To reach a quantitative analysis of stored charge quantities, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, where simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip. The aim is to analyse the force-distance curve modifications induced by electrostatic charges. The sensor is composed of a cantilever supporting a pyramidal tip terminated by a spherical apex; the contribution to the force from the cantilever is neglected here. A model of the force curve has been developed using the Finite Volume Method. The scheme is based on the Polynomial Reconstruction Operator (PRO-scheme). First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and for different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate our approach.

  3. Multi-dimensional modelling of electrostatic force distance curve over dielectric surface: Influence of tip geometry and correlation with experiment

    NASA Astrophysics Data System (ADS)

    Boularas, A.; Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.

    2014-08-01

    Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. To reach a quantitative analysis of stored charge quantities, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, where simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip. The aim is to analyse the force-distance curve modifications induced by electrostatic charges. The sensor is composed of a cantilever supporting a pyramidal tip terminated by a spherical apex; the contribution to the force from the cantilever is neglected here. A model of the force curve has been developed using the Finite Volume Method. The scheme is based on the Polynomial Reconstruction Operator (PRO-scheme). First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and for different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate our approach.

  4. Combining Flux Balance and Energy Balance Analysis for Large-Scale Metabolic Network: Biochemical Circuit Theory for Analysis of Large-Scale Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Predicting the behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
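    The FBA half of this picture reduces to a linear program: maximize an objective flux subject to the steady-state stoichiometric constraint S v = 0 and bounds on each flux. The three-reaction toy network below is invented for illustration and carries none of the EBA thermodynamic constraints.

```python
# Toy FBA: reactions R1 (uptake -> A), R2 (A -> B), R3 (B -> export).
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, 0.0],   # mass balance for metabolite A
              [0.0, 1.0, -1.0]])  # mass balance for metabolite B
bounds = [(0.0, 10.0)] * 3        # lower/upper bound on each flux
c = np.array([0.0, 0.0, -1.0])    # maximize v3 (linprog minimizes c.v)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
# Steady state forces v1 = v2 = v3, so the optimum saturates the bounds.
```

    EBA would add further constraints ruling out thermodynamically infeasible flux loops, which a plain linear program like this one cannot express.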

  5. Large-scale temporal analysis of computer and information science

    NASA Astrophysics Data System (ADS)

    Soos, Sandor; Kampis, George; Gulyás, László

    2013-09-01

    The main aim of the project reported in this paper was twofold. The first goal was to produce an extensive source of network data for bibliometric analyses of field dynamics in the case of Computer and Information Science. To this end, we rendered the raw material of the DBLP computer and infoscience bibliography into a comprehensive collection of dynamic network data, readily available for further statistical analysis. The second goal was to demonstrate the value of our data source via its use in mapping Computer and Information Science (CIS). An analysis of the evolution of CIS was performed in terms of collaboration (co-authorship) network dynamics. Dynamic network analysis covered more than three-quarters of a century (76 years, from 1936 to date). Network evolution was described both at the macro and the meso level (in terms of community characteristics). Results show that the development of CIS followed what appears to be a universal pattern of growing into a "mature" discipline.

  6. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    Given a function, u(x), which is represented by its cell averages on the cells of some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
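    The decomposition into scales can be illustrated in one dimension with a dyadic pair of grids. This is only a toy sketch of the cell-average idea, not Harten's ENO reconstruction itself; the sampled function is arbitrary.

```python
# Coarse cell averages are pairwise means of fine ones; "details" record
# what the coarse grid misses, and are small where the function is smooth.
import numpy as np

fine = np.sin(np.linspace(0.0, np.pi, 8))   # fine-grid cell averages (toy)
coarse = 0.5 * (fine[0::2] + fine[1::2])    # exact coarse-grid averages
predicted = np.repeat(coarse, 2)            # simplest possible prediction
details = fine - predicted                  # scale (detail) coefficients
```

    Where |details| falls below a tolerance, the locality can be advanced on the coarser grid, which is the saving the abstract describes.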

  7. Analysis and Management of Large-Scale Activities Based on Interface

    NASA Astrophysics Data System (ADS)

    Yang, Shaofan; Ji, Jingwei; Lu, Ligang; Wang, Zhiyi

    Drawing on the concepts of system safety engineering, life cycle, and interface from the American system safety standard MIL-STD-882E, this paper applies them to the risk analysis and management of large-scale activities. The personnel, departments, funds, and other elements involved throughout the life cycle of large-scale activities are identified. The ultimate risk sources of people, objects, and environment in large-scale activities are recognized and classified from the perspective of the interface. An accident cause analysis model is put forward based on accidents at previous large-scale activities, combined with analysis of the risk-source interfaces. The risks at each interface are analyzed and the various types of risk faced by large-scale activities are summarized. Finally, improvements are proposed for risk-management awareness, policies and regulations, and risk control and supervision departments.

  8. A two-scale finite element formulation for the dynamic analysis of heterogeneous materials

    SciTech Connect

    Ionita, Axinte

    2008-01-01

    In the analysis of heterogeneous materials using a two-scale Finite Element Method (FEM), the usual assumption is that the Representative Volume Element (RVE) of the micro-scale is much smaller than the finite element discretization of the macro-scale. However, there are situations in which the RVE becomes comparable with, or even bigger than, the finite element. These situations are considered in this article from the perspective of a two-scale FEM dynamic analysis. Using the principle of virtual power, new equations for the fluctuating fields are developed in terms of velocities rather than displacements. To allow more flexibility in the analysis, a scaling deformation tensor is introduced together with a procedure for its determination. Numerical examples using the new approach are presented.

  9. Computational solutions to large-scale data management and analysis

    PubMed Central

    Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

    2011-01-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

  10. [Development of the Trait Respect-Related Emotions Scale for late adolescence].

    PubMed

    Muto, Sera

    2016-02-01

    This study developed a scale to measure respect-related emotional traits (the Trait Respect-Related Emotions Scale) for late adolescence and examined its reliability and validity. In Study 1, 368 university students completed the items of the Trait Respect-Related Emotions Scale and other scales of theoretically important personality constructs including adult attachment style, the "Big Five," self-esteem, and two types of narcissistic personality. Factor analysis indicated three factors of trait respect-related emotions: (a) trait (prototypical) respect; (b) trait idolatry (worship and adoration); and (c) trait awe. The three traits were differentially associated with the daily experience (frequency) of the five basic respect-related emotions (prototypical respect, idolatry, awe, admiration, and wonder) and with the other constructs. In Study 2, a test-retest correlation of the new scale with 60 university students indicated good reliability. Both studies generally supported the reliability and validity of the new scale. These findings suggest that, at least in late adolescence, there are large individual differences in respect-related emotion experiences and that trait respect should be considered a multi-dimensional construct. PMID:26964371

  11. Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales

    SciTech Connect

    Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault

    2011-05-01

    Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot scale (10 MW/year) and full-scale (250 MW/year) production levels and benchmarks this against estimated anode supported cell costs at the 250 MW scale. This analysis will show that even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.

  12. Meta-Analysis of Scale Reliability Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2013-01-01

    A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on

  13. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  14. Time-Mass Scaling in Soil Texture Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data on texture are used in the majority of inferences about soil functioning and use. The model of fractal fragmentation has attracted attention as a possible source of minimum set of parameters to describe observed particle size distributions. Popular techniques of textural analysis employ the rel...

  15. The Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT): Data Analysis and Visualization for Geoscience Data

    SciTech Connect

    Williams, Dean; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Steed, Chad; Krishnan, Harinarayan; Silva, Claudio; Chaudhary, Aashish; Bremer, Peer-Timo; Pugmire, David; Bethel, E. Wes; Childs, Hank; Prabhat, Mr.; Geveci, Berk; Bauer, Andrew; Pletzer, Alexander; Poco, Jorge; Ellqvist, Tommy; Santos, Emanuele; Potter, Gerald; Smith, Brian; Maxwell, Thomas; Kindig, David; Koop, David

    2013-05-01

    To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.

  16. Welfare effects of natural disasters in developing countries: an examination using multi-dimensional socio-economic indicators

    NASA Astrophysics Data System (ADS)

    Mutter, J. C.; Deraniyagala, S.; Mara, V.; Marinova, S.

    2011-12-01

    The study of the socio-economic impacts of natural disasters is still in its infancy. Social scientists have historically regarded natural disasters as exogenous or essentially random perturbations. More recent scholarship treats disaster shocks as endogenous, with pre-existing social, economic and political conditions determining the form and magnitude of disaster impacts. One apparently robust conclusion is that direct economic losses from natural disasters, like human losses, are larger (in relative terms) the poorer a country is, yet cross-country regressions show that disasters may accrue economic benefits due to new investments in productive infrastructure, especially if the investment is funded by externally provided capital (World Bank assistance, private donations, etc.) and does not deplete national savings or incur a debt burden. Some econometric studies also show that the quality of a country's institutions can mitigate the mortality effects of a disaster. The effects on income inequality are such that the poor suffer greater 'asset shocks' and may never recover from a disaster, leading to a widening of existing disparities. Natural disasters affect women more adversely than men in terms of life expectancy at birth. On average they kill more women than men, or kill women at a younger age than men, and the more so the stronger the disaster. The extent to which women are more likely to die than men, or to die at a younger age, from the immediate disaster impact or from post-disaster events depends not only on disaster strength itself but also on the socioeconomic status of women in the affected country. Existing research on the economic effects of disasters focuses almost exclusively on the impact on economic growth - the growth rate of GDP. GDP, however, is only a partial indicator of welfare, especially for countries that are in the lower ranks of development status.
    Very poor communities are typically engaged in subsistence-level activities or in the informal economy, and disaster setbacks will not register in their GDP accounts. The alterations to their lives can include loss of livelihood, loss of key assets such as livestock, loss of property and savings, reduced life expectancy among survivors, increased poverty rates, increased inequality, greater subsequent maternal and child mortality (due to destruction of health care facilities), reduced educational attainment (lack of school buildings), increased gender-based violence, and psychological ailments. Our study enhances this literature in two ways. Firstly, it examines the effects of disasters on human development and poverty using cross-country econometric analysis with indicators of welfare that go beyond GDP; we aim to assess the impact of disasters on human development and absolute poverty. Secondly, we use Peak Ground Acceleration for earthquakes, a modified Palmer Drought Severity Index, and hurricane energy, rather than disaster event occurrence, to account for the severity of the disaster.

  17. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    ERIC Educational Resources Information Center

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
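    The forward-search idea can be sketched generically on univariate data. This illustrates only the outlier-detection mechanism, not the authors' Mokken-scale-specific adaptation: start from a small central core, repeatedly add the observation closest to the current fit, and track entry distances; outliers enter last with large jumps.

```python
# Forward search on 1-D data with three planted outliers (8, 9, 10).
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 47), np.array([8.0, 9.0, 10.0])])

subset = list(np.argsort(np.abs(data - np.median(data)))[:5])  # clean core
entry_distance = []
while len(subset) < len(data):
    center = data[subset].mean()
    rest = [i for i in range(len(data)) if i not in subset]
    nxt = min(rest, key=lambda i: abs(data[i] - center))
    entry_distance.append(abs(data[nxt] - center))
    subset.append(nxt)

# The planted outliers are the last observations to enter the subset,
# and their entry distances jump well above those of the clean points.
```

    In MSA the "fit" would be a scalability criterion rather than a distance to the mean, but the entry-order logic is the same.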

  18. A Philosophical Item Analysis of the Right-Wing Authoritarianism Scale.

    ERIC Educational Resources Information Center

    Eigenberger, Marty

    Items of Altemeyer's 1986 version of the "Right-Wing Authoritarianism Scale" (RWA Scale) were analyzed as philosophical propositions in an effort to establish each item's suggestive connotation and denotation. The guiding principle of the analysis was the way in which the statements reflected authoritarianism's defining characteristics of

  19. Exploratory Mokken Scale Analysis as a Dimensionality Assessment Tool: Why Scalability Does Not Imply Unidimensionality

    ERIC Educational Resources Information Center

    Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R.

    2012-01-01

    The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lowerbound. The comparative simulation study, covering various
